This page walks through an example of fine-tuning a pre-sparsified model from the SparseZoo onto a new dataset for object detection.
We will use SparseZoo to pull down a pre-sparsified YOLOv5l model and SparseML to fine-tune it onto the VOC dataset while preserving sparsity.
This example requires SparseML Torchvision Install.
The SparseZoo contains several sparsified object detection models and transfer-learning recipes, including the YOLOv5l used in this example. The SparseZoo stub for this model is `zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/pruned_quant-aggressive_95?recipe_type=transfer_learn`.
The `sparseml.yolov5.train` CLI command kicks off a run to fine-tune the sparsified YOLOv5l model onto the VOC dataset for object detection.
After the command completes, the trained model will remain sparse, achieve a mAP@0.5 of around 0.80 on VOC, and be stored in the local `yolov5l/sparsified` directory (set by the `--project` and `--name` arguments).
```bash
$ sparseml.yolov5.train \
    --weights zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/pruned_quant-aggressive_95?recipe_type=transfer_learn \
    --recipe zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/pruned_quant-aggressive_95?recipe_type=transfer_learn \
    --cfg models_v5.0/yolov5l.yaml \
    --hyp data/hyps/hyp.finetune.yaml \
    --data VOC.yaml \
    --project yolov5l \
    --name sparsified
```
The most important arguments are:
- `--data` specifies the dataset onto which the model will be fine-tuned
- `--weights` specifies the base model used to start the transfer-learning process (can be a SparseZoo stub or a path to a local custom model)
- `--recipe` specifies the hyperparameters of the fine-tuning process (can be a SparseZoo stub or a local custom recipe)
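A transfer-learning recipe is a YAML file of SparseML modifiers that instructs the trainer to preserve the existing sparsity structure while fine-tuning. The sketch below is illustrative only, not the actual SparseZoo recipe; the modifier names follow SparseML's recipe format, and the epoch values are hypothetical:

```yaml
# Illustrative sparse transfer-learning recipe sketch (assumed values).
version: 1.1.0

modifiers:
  # Train for a fixed number of epochs.
  - !EpochRangeModifier
    start_epoch: 0.0
    end_epoch: 50.0

  # Hold the pruned mask constant so fine-tuning cannot regrow weights.
  - !ConstantPruningModifier
    start_epoch: 0.0
    params: __ALL_PRUNABLE__

  # Apply quantization-aware training near the end of the run.
  - !QuantizationModifier
    start_epoch: 45.0
```

Passing a SparseZoo stub to `--recipe` downloads the recipe tuned for that model, so a hand-written file like this is only needed for custom setups.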
To use your own dataset, set up the appropriate image dataset structure and pass its path as the `--data` argument. An example dataset configuration for VOC is available on GitHub.
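As a rough sketch, YOLOv5 expects parallel `images/` and `labels/` directories plus a dataset YAML that is passed to `--data`. The directory name, class name, and label values below are hypothetical placeholders:

```python
# Sketch of the YOLOv5-style dataset layout that --data expects.
# "custom-dataset" and "my_class" are placeholder names, not from the docs.
from pathlib import Path

root = Path("custom-dataset")
for split in ("train", "val"):
    (root / "images" / split).mkdir(parents=True, exist_ok=True)
    (root / "labels" / split).mkdir(parents=True, exist_ok=True)

# Each image gets a matching .txt label file with one
# "class x_center y_center width height" row per box (normalized 0-1).
(root / "labels" / "train" / "example.txt").write_text("0 0.5 0.5 0.25 0.25\n")

# Minimal dataset YAML to pass via --data:
(root / "custom.yaml").write_text(
    "path: custom-dataset\n"
    "train: images/train\n"
    "val: images/val\n"
    "names:\n"
    "  0: my_class\n"
)
```

Label filenames must mirror the image filenames (e.g. `example.jpg` pairs with `example.txt`) so the loader can match them.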
The `--cfg` and `--hyp` arguments are configuration files for the model architecture and the training hyperparameters. You can check out the examples on GitHub.
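For orientation, a hyperparameter file like `hyp.finetune.yaml` is a flat YAML of training and augmentation settings. The subset below is an illustrative sketch; treat the values as examples rather than the tuned defaults:

```yaml
# Illustrative subset of a YOLOv5 hyperparameter file (assumed values).
lr0: 0.0032           # initial learning rate
lrf: 0.12             # final learning rate fraction for the scheduler
momentum: 0.843       # SGD momentum
weight_decay: 0.00036 # optimizer weight decay
warmup_epochs: 2.0    # epochs of learning-rate warmup
hsv_h: 0.0138         # HSV hue augmentation strength
fliplr: 0.5           # horizontal flip probability
```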
There are many additional command line arguments that can be passed to tweak your fine-tuning process. Run the following to see the full list of options:
$ sparseml.yolov5.train -h
With the sparsified model successfully trained, it is time to export it for inference.
The `sparseml.yolov5.export_onnx` command is used to export the training graph to a performant inference graph.
After the command completes, a `model.onnx` file is created in the weights directory (here, `yolov5l/sparsified/weights`). It is now ready for deployment with DeepSparse using Pipelines.
```bash
$ sparseml.yolov5.export_onnx \
    --weights yolov5l/sparsified/weights/best.pt \
    --dynamic
```