# Training and evaluation

## Training
Training a YOLO model can be done either through the Python API or the CLI. To start a training run from the command line, launch:

```bash
yolo train classify|detect|obb|segment model=<path_to_model.pt> ...
```
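Equivalently, here is a minimal sketch of the same training run through the Python API (the checkpoint and dataset paths are placeholders):

```python
from ultralytics import YOLO

# Load a pretrained checkpoint (a .yaml config would train from scratch)
model = YOLO("yolov8n.pt")  # placeholder model

# Launch training; data.yaml is a placeholder dataset configuration file
model.train(data="data.yaml", epochs=100, imgsz=640)
```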
Here is an explanation of the different key parameters:
| Parameter | Description | Type | Default |
|---|---|---|---|
| `model` | Specifies the model file for training. Accepts a path to either a `.pt` pretrained model or a `.yaml` configuration file. | `str` | `None` |
| `data` | Path to the dataset configuration file (e.g. `coco8.yaml`). | `str` | `None` |
| `epochs` | Total number of epochs to train on. | `int` | `100` |
| `time` | Early stopping based on training time: maximum training time, in hours. | `float` | `None` |
| `patience` | Early stopping based on the number of consecutive epochs without improvement on the validation metrics. | `int` | `100` |
| `batch` | Batch size. | `int` | `16` |
| `imgsz` | Image size (in pixels) to which the training images are resized. | `int` | `640` |
| `device` | Which GPU to run training on. Multiple GPUs can be used with `device=0,1`. | `int` or `str` | `None` |
| `project` | Name of the directory where training logs are saved. Useful to keep track of exactly where your training logs are created; by default, a `runs` directory is created. | `str` | `None` |
| `name` | Name of the run folder inside the log folder. | `str` | `None` |
| `single_cls` | Trains a class-agnostic detector when you no longer care about the distinction between classes: all classes are treated as a single class. | `bool` | `False` |
| `dfl` | Distribution focal loss: weight of one of the components of the total loss. | `float` | `1.5` |
More details:

- Using `batch=-1` will automatically pick a batch size targeting about 60% of GPU memory usage. It can also be a fraction (e.g. `batch=0.70`) to target that fraction of GPU memory.
- The resizing conserves the aspect ratio: the longest side is set to the target size, then grey padding is added to complete the square image. The size must be a multiple of 32 or 64.
- By increasing the `dfl` parameter, one can raise the importance given to classes the model struggles the most to classify correctly. This is useful for underrepresented or easily confused classes.
- For more parameters, please refer to the [Ultralytics training documentation](https://docs.ultralytics.com/modes/train/).
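To make the parameters above concrete, here is a hedged sketch of a training call combining several of them through the Python API (all paths, names, and values are illustrative placeholders, not recommendations):

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # placeholder checkpoint

model.train(
    data="data.yaml",      # placeholder dataset configuration file
    epochs=300,
    patience=50,           # stop after 50 epochs without improvement
    batch=0.70,            # target ~70% of GPU memory
    imgsz=640,             # multiple of 32
    device=[0, 1],         # train on two GPUs
    project="my_logs",     # placeholder log directory
    name="exp1",           # run folder created inside my_logs/
    single_cls=True,       # class-agnostic training
    dfl=2.0,               # increase the distribution focal loss weight
)
```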
## Evaluation
To start an evaluation, launch the following command:

```bash
yolo val model=<path_to_model.pt> ...
```
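The same evaluation can be run through the Python API; a minimal sketch (the checkpoint path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("path_to_model.pt")  # placeholder checkpoint

# With no data argument, validation uses the dataset referenced
# in the checkpoint's stored training arguments
metrics = model.val()
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```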
- If only a model is given in the command, the output is the internal evaluation done on the training and validation data stored in the model checkpoint.
- The evaluation function outputs confusion matrices, as well as images from some of the batches it ran on, so the model can also be evaluated qualitatively.
- To have more flexibility in the evaluation, it can be worthwhile to build dedicated functions that compute your own specific metrics (see the sketch after this list).
- In Python, the `predict()` function returns the results as a list of `Results` objects.
- There are libraries, like FiftyOne for example, that can take a trained model and evaluate it on given data.
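As an illustration of such a dedicated evaluation function, here is a hedged sketch that runs `predict()` and computes a simple custom metric; the image paths, the confidence threshold, and the ground-truth counts are placeholder assumptions:

```python
from ultralytics import YOLO

model = YOLO("path_to_model.pt")            # placeholder checkpoint
image_paths = ["img1.jpg", "img2.jpg"]      # placeholder evaluation images
gt_counts = {"img1.jpg": 3, "img2.jpg": 1}  # placeholder ground-truth object counts

# predict() returns a list of Results objects, one per input image
results = model.predict(image_paths, conf=0.25)

# Example custom metric: mean absolute error on the number of detected objects
errors = []
for path, res in zip(image_paths, results):
    n_pred = len(res.boxes)  # number of predicted boxes for this image
    errors.append(abs(n_pred - gt_counts[path]))

print(f"Mean absolute count error: {sum(errors) / len(errors):.2f}")
```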