This guide walks you through training Ultralytics models directly within the X-AnyLabeling GUI. You can prepare your dataset, configure training parameters, launch a training job, and monitor its progress—all from one convenient interface.
*Demo video: UltralyticsTrainingPlatforms.mp4*
- OS: Windows/Linux/macOS
- Python: 3.9–3.12
- Create and activate a virtual environment
Using conda is recommended to manage your dependencies.
```bash
conda create -n x-anylabeling-yolo python=3.12 -y
conda activate x-anylabeling-yolo
```

- Install Ultralytics
For the fastest installation, we recommend using uv. It can also automatically detect your CUDA version to install the correct PyTorch build.
```bash
pip install --upgrade uv
uv pip install ultralytics --torch-backend=auto
```

> [!NOTE]
> To select a specific CUDA backend (e.g., `cu121`), set `--torch-backend=cu121`. See the uv PyTorch integration guide for more details.
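For example, to pin the CUDA 12.1 build explicitly:

```bash
uv pip install ultralytics --torch-backend=cu121
```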
License Notice: Ultralytics is licensed under AGPL-3.0. If you use the training feature and provide it as a network service, you must comply with AGPL-3.0's source code disclosure requirements (Section 13). X-AnyLabeling itself remains under GPL-3.0 and does not require Ultralytics for its core annotation functionality.
- Clone the repository and install dependencies
```bash
git clone https://github.com/CVHub520/X-AnyLabeling.git
cd X-AnyLabeling
```

Choose the requirements file that matches your system and needs:
- `requirements-gpu-dev.txt`: for development with GPU support.
- `requirements-gpu.txt`: for running with GPU support.
- `requirements-dev.txt`: for CPU-only development.
- `requirements.txt`: for CPU-only execution.
- `requirements-macos.txt`: for running with MPS support.
- `requirements-macos-dev.txt`: for development with MPS support.
Install the required packages using uv:
```bash
# Replace [suffix] with your choice, e.g., gpu-dev
uv pip install -r requirements-[suffix].txt
```

> [!NOTE]
> (macOS only) Install PyQt dependencies from conda-forge:
>
> ```bash
> conda install -c conda-forge pyqt==5.15.9 pyqtwebengine
> ```

To launch the GUI, run the following command from the repository root:

```bash
python anylabeling/app.py
```

For more details, please refer to get_started.md.
Once your images are loaded and labeled in the application, you can start the training process by navigating to the main menu and selecting Train -> Ultralytics.
This tab provides a summary of your current dataset. Your first step is to configure the dataset for training.
- Task Type: Select the type of model you want to train: Classify, Detect, OBB, Segment, or Pose.
- Dataset Summary: Review the class distribution and ensure you have a sufficient number of labels (at least 20 are required). If your dataset isn't loaded, you can do so here.
When your data is correctly configured, click Next.
Here, you'll set up the core training parameters and hyperparameters.
- Project and Name: These fields define the output directory for your training run, which will be saved to `<Project>/<Name>`. The project path is set automatically based on the task type.
- Model: Path to a pretrained model checkpoint (`.pt` file) to use as a starting point.
- Data: Path to your dataset's configuration file (`.yaml`) for det/seg/obb/pose tasks, or the dataset directory for classification tasks. For classification, you can leave this blank; the tool will automatically generate the config from the `flags` field for you. For details on the format, see the official Ultralytics documentation (a minimal example is sketched after this list).
- Device: Automatically detects available hardware (`CPU`, `CUDA`, `MPS`). Select your desired training device.
- Dataset Ratio: A slider to set the train/validation split for your dataset.
- Only Checked Files: In Advanced Settings > Checkpoint and Validation, enable this option to train only with files that have been marked as checked in X-AnyLabeling.
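For the detection-family tasks, a minimal Ultralytics dataset YAML looks like the sketch below; the paths and class names are placeholders for your own dataset:

```yaml
# Minimal Ultralytics dataset config for detect/segment/obb tasks.
# Paths and class names below are illustrative placeholders.
path: /path/to/dataset   # dataset root directory
train: images/train      # training images, relative to path
val: images/val          # validation images, relative to path

names:
  0: person
  1: car
```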
> [!NOTE]
> For Pose estimation tasks, an additional field will appear to specify a keypoint configuration YAML file. See the example file for the required format.
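For reference, Ultralytics pose dataset YAMLs extend the detection config with keypoint metadata along the lines of the snippet below (COCO-style, 17 keypoints). The exact schema X-AnyLabeling's keypoint configuration file expects may differ, so treat this as a sketch and consult the linked example file:

```yaml
# Keypoint metadata as used in Ultralytics pose dataset YAMLs (COCO-style).
# X-AnyLabeling's keypoint configuration file may differ; see the example file above.
kpt_shape: [17, 3]  # 17 keypoints, each stored as (x, y, visibility)
flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]  # left/right swap for flip augmentation
```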
As of v2.3.4, X-AnyLabeling supports Classify tasks with two data preparation modes.
- Flags-based Classification: Use X-AnyLabeling's built-in image classification flags. For quick annotation tutorials, refer to the official guide and example (EN | ZH).
- Pre-organized Dataset: If you already have a locally structured classification dataset, set the Data field to the dataset directory on the Configuration tab. The expected structure is:

```
dataset/
├── train/
│   ├── class1/
│   │   ├── image1.jpg
│   │   └── image2.jpg
│   └── class2/
│       ├── image3.jpg
│       └── image4.jpg
├── val/ (optional)
│   ├── class1/
│   └── class2/
└── test/ (optional)
    ├── class1/
    └── class2/
```
> [!NOTE]
> When using the pre-organized dataset mode, the tool uses your dataset directly without additional processing.
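If you want to sanity-check a pre-organized dataset before training, a small standalone script like this (illustrative, not part of X-AnyLabeling) can count the images per split and class:

```python
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp", ".webp"}

def summarize(root: str) -> None:
    """Print image counts per split/class for a folder-per-class dataset."""
    for split in ("train", "val", "test"):
        split_dir = Path(root) / split
        if not split_dir.is_dir():
            continue  # val/ and test/ are optional
        for class_dir in sorted(p for p in split_dir.iterdir() if p.is_dir()):
            count = sum(1 for f in class_dir.iterdir() if f.suffix.lower() in IMAGE_EXTS)
            print(f"{split}/{class_dir.name}: {count} images")

summarize("dataset")
```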
This section contains common hyperparameters like epochs, batch size, and image size. For more advanced options, expand the Advanced Settings dropdown.
For detailed explanations of each parameter, please refer to the Ultralytics documentation on training.
> [!TIP]
> You can save your current settings as a JSON file or import a previous configuration.

When you're finished, click Next to proceed to the training screen.
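As an illustration, an exported configuration might look roughly like the JSON below. The exact keys are determined by the tool; the field names here are hypothetical, borrowed from common Ultralytics training arguments:

```json
{
  "task": "detect",
  "model": "yolov8n.pt",
  "data": "dataset.yaml",
  "epochs": 100,
  "batch": 16,
  "imgsz": 640,
  "device": "0"
}
```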
This is the main dashboard for monitoring your training job.
The interface is split into four parts:
- Training Status: Shows the current state (`Training`, `Completed`, `Error`) and a real-time progress bar.
- Training Logs: A live feed of the console output from the training process.
- Training Images: Displays key visualizations, such as training batches and validation metrics (e.g., PR curves). Click an image to view it larger.
- Actions:
  - Open Directory: Opens the output folder for the current training run.
  - Previous: Returns to the configuration tab.
  - Start Training: Begins the training process. This button changes to Stop Training while a job is running.
  - Export: After a successful run, this button appears, allowing you to export the best model checkpoint to formats like ONNX.
After your model is trained and exported, you can load it as a custom model in X-AnyLabeling to perform inference and continue improving your dataset.
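If you prefer to script the post-training steps outside the GUI, the standard Ultralytics Python API can export and sanity-check the trained checkpoint; the weights path below is illustrative:

```python
from ultralytics import YOLO

# Load the best checkpoint from the training run (path is illustrative).
model = YOLO("runs/detect/train/weights/best.pt")

# Export to ONNX, mirroring the GUI's Export action.
model.export(format="onnx")

# Quick inference check on a sample image.
results = model.predict("sample.jpg", conf=0.25)
print(results[0].boxes)
```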


