Sparsifying Object Detection Models with Ultralytics YOLOv5 and SparseML

This page explains how to create a sparse object detection model.

SparseML is integrated with the ultralytics/yolov5 repository to enable simple creation of sparse YOLOv5 and YOLOv5-P6 models. After training, the model can be deployed with Neural Magic's DeepSparse Engine. The engine enables inference with GPU-class performance directly on your CPU.

This integration enables you to create a sparse model in two ways:

  • Sparsification of YOLOv5 Models - easily sparsify any of the YOLOv5 and YOLOv5-P6 models, from YOLOv5n to YOLOv5x.
  • Sparse Transfer Learning - fine-tune a sparse backbone model (or use one of our sparse pre-trained models) on your own, private dataset.

Each option is useful in different situations:

  • Sparsification from Scratch enables you to create a sparse version of any model (even those not in the SparseZoo), but requires hand-tuning the hyperparameters of the sparsification algorithm.
  • Sparse Transfer Learning is the easiest path to creating a sparse model trained on your data. Simply pull a pre-sparsified model and transfer learning recipe from the SparseZoo and fine-tune on your data with a single command.

Installation Requirements

This section requires the SparseML Torchvision installation.

Note: YOLOv5 will not immediately install with this command. Instead, a sparsification-compatible version of YOLOv5 will install on the first invocation of the YOLOv5 code in SparseML.
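If SparseML is not yet installed, the Torchvision extra can be installed via pip. This is a minimal sketch; check the SparseML installation docs for the exact extras your environment needs:

pip install sparseml[torchvision]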

Getting Started

Sparsifying YOLOv5

In the example below, a dense YOLOv5s model pre-trained on COCO is sparsified and fine-tuned further on COCO.

sparseml.yolov5.train \
  --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none \
  --data coco.yaml \
  --hyp data/hyps/hyp.scratch.yaml \
  --recipe zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned_quant-aggressive_94
  • The --weights argument indicates the model from which to start the pruning process. It can be a SparseZoo stub or a local path to a model.
  • The --data argument specifies the dataset to be used. You may sparsify your model while training on your own, private (downstream) dataset or while continuing training with the original (upstream) dataset. The configuration file for COCO is included in the yolov5 integration and can be used as an example for a custom dataset.
  • The --recipe argument encodes the hyperparameters of the pruning process. It can be a SparseZoo stub or a local YAML file. See here for a detailed discussion of recipes; a minimal sketch of what a local recipe might look like follows this list.
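For illustration only, a local pruning recipe might pair an epoch-range modifier with gradual magnitude pruning. The modifier names follow SparseML's recipe syntax, but every value below is a hypothetical placeholder, not the tuned hyperparameters behind the SparseZoo recipe above:

# Sketch of a SparseML pruning recipe; all values are placeholders to tune.
modifiers:
    # Run training for 240 epochs total.
    - !EpochRangeModifier
        start_epoch: 0.0
        end_epoch: 240.0

    # Gradually prune all prunable layers from 5% to 85% sparsity.
    - !GMPruningModifier
        start_epoch: 2.0
        end_epoch: 100.0
        init_sparsity: 0.05
        final_sparsity: 0.85
        params: __ALL_PRUNABLE__
        update_frequency: 1.0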

Sparse Transfer Learning

SparseML also enables you to fine-tune a pre-sparsified model onto your own dataset. While you are free to use your own backbone, we encourage you to leverage one of our sparse pre-trained models to boost your productivity!

The command below fetches a pre-sparsified YOLOv5s model trained on the COCO dataset. It then fine-tunes the model on the VOC dataset while maintaining sparsity.

sparseml.yolov5.train \
  --data VOC.yaml \
  --cfg models_v5.0/yolov5s.yaml \
  --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned_quant-aggressive_94?recipe_type=transfer \
  --hyp data/hyps/hyp.finetune.yaml \
  --recipe zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned-aggressive_96
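To transfer onto your own data instead, point --data at your dataset configuration file. This is a sketch under the assumption that your dataset YAML follows the standard YOLOv5 format; the path is a placeholder:

sparseml.yolov5.train \
  --data path/to/custom_dataset.yaml \
  --cfg models_v5.0/yolov5s.yaml \
  --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned_quant-aggressive_94?recipe_type=transfer \
  --hyp data/hyps/hyp.finetune.yaml \
  --recipe zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned-aggressive_96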

SparseML CLI

The SparseML installation provides a CLI for running YOLOv5 scripts with SparseML capability. The full set of commands is included below:

sparseml.yolov5.train
sparseml.yolov5.validation
sparseml.yolov5.export_onnx
sparseml.yolov5.val_onnx

Appending the --help argument displays a full list of options for the command:

sparseml.yolov5.train --help

output:

usage: sparseml.yolov5.train [-h] [--weights WEIGHTS] [--cfg CFG] [--data DATA] [--hyp HYP] [--epochs EPOCHS] [--batch-size BATCH_SIZE] [--imgsz IMGSZ] [--rect]
                             [--resume [RESUME]] [--nosave] [--noval] [--noautoanchor] [--evolve [EVOLVE]] [--bucket BUCKET] [--cache [CACHE]] [--image-weights]
                             [--device DEVICE] [--multi-scale] [--single-cls] [--optimizer {SGD,Adam,AdamW}] [--sync-bn] [--workers WORKERS] [--project PROJECT]
                             [--name NAME] [--exist-ok] [--quad] [--cos-lr] [--label-smoothing LABEL_SMOOTHING] [--patience PATIENCE] [--freeze FREEZE [FREEZE ...]]
                             [--save-period SAVE_PERIOD] [--local_rank LOCAL_RANK] [--entity ENTITY] [--upload_dataset [UPLOAD_DATASET]]
                             [--bbox_interval BBOX_INTERVAL] [--artifact_alias ARTIFACT_ALIAS] [--recipe RECIPE] [--disable-ema] [--max-train-steps MAX_TRAIN_STEPS]
                             [--max-eval-steps MAX_EVAL_STEPS] [--one-shot] [--num-export-samples NUM_EXPORT_SAMPLES]

optional arguments:
  -h, --help            show this help message and exit
  --weights WEIGHTS     initial weights path
  --cfg CFG             model.yaml path
  --data DATA           dataset.yaml path
  --hyp HYP             hyperparameters path
  --epochs EPOCHS
  --batch-size BATCH_SIZE
                        total batch size for all GPUs, -1 for autobatch
...
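As an example of the other commands, a trained checkpoint could be evaluated with sparseml.yolov5.validation. This sketch assumes the command accepts the same core arguments as YOLOv5's val.py; the checkpoint path is a placeholder:

sparseml.yolov5.validation \
  --weights path/to/weights.pt \
  --data coco.yaml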

Exporting to ONNX

Exporting the Sparse Model to ONNX

The DeepSparse Engine accepts models in the ONNX format and is engineered to significantly speed up CPU inference for the sparsified models produced by this integration.

The SparseML installation provides a sparseml.yolov5.export_onnx command that loads a checkpoint from the training output folder and creates a new model.onnx file within it. The export process is modified so that quantized and pruned graphs are corrected and folded properly. Be sure the --weights argument points to your trained model.

sparseml.yolov5.export_onnx \
  --weights path/to/weights.pt \
  --dynamic
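Once exported, the model can be benchmarked on CPU with the DeepSparse Engine. A minimal sketch, assuming the deepsparse package is installed and the ONNX path matches your export output:

deepsparse.benchmark path/to/model.onnx --batch_size 1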