YOLOv3: Sparsifying to Improve Object Detection Performance
Neural Magic creates models and recipes that allow anyone to plug in their data and leverage SparseML’s recipe-driven approach on top of Ultralytics’ robust training pipelines for the popular YOLOv3 object detection network. Sparsifying removes redundant information from a neural network using algorithms such as pruning and quantization, among others, resulting in faster inference and smaller file sizes for deployment.
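As a rough illustration of the recipe-driven approach, the sketch below applies a SparseML recipe to an ordinary PyTorch training loop. The recipe path, stand-in model, and random data are placeholders rather than artifacts referenced on this page; for YOLOv3 you would substitute the Ultralytics model and dataloader.

```python
# Sketch: applying a SparseML recipe to a plain PyTorch training loop.
# "recipe.yaml", the tiny model, and the random data are placeholders.
import torch
from sparseml.pytorch.optim import ScheduledModifierManager

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 10),
)  # stand-in for the real YOLOv3 module
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
data = [(torch.randn(4, 3, 64, 64), torch.randint(0, 10, (4,))) for _ in range(8)]

# Parse the recipe and wrap the optimizer so the pruning/quantization
# modifiers fire automatically at their scheduled epochs during training.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")  # placeholder path
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(data))

loss_fn = torch.nn.CrossEntropyLoss()
for epoch in range(int(manager.max_epochs)):
    for images, labels in data:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

manager.finalize(model)  # remove sparsification hooks once training is done
```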
This page walks through the following use cases for trying out the sparsified YOLOv3 models:
- Compare the models' accuracy and inference performance
- Run the models for inference in deployment or applications (see the inference sketch after this list)
- Train the models on new datasets
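As a minimal sketch of the inference use case, the snippet below compiles a sparsified ONNX export with the DeepSparse engine and runs a random input through it. The ONNX filename and the 640x640 input shape are assumptions, not files or settings referenced on this page; adjust them to match your own export.

```python
# Sketch: running a sparsified YOLOv3 ONNX export with the DeepSparse engine.
# "yolov3-pruned.onnx" and the input shape are illustrative placeholders.
import numpy
from deepsparse import compile_model

batch_size = 1
engine = compile_model("yolov3-pruned.onnx", batch_size=batch_size)

# Build a dummy RGB input; match the resolution your model was exported with.
dummy_input = [numpy.random.rand(batch_size, 3, 640, 640).astype(numpy.float32)]
outputs = engine.run(dummy_input)  # list of raw detection output arrays
print([o.shape for o in outputs])
```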