ResNet-50: Sparsifying to Improve Image Classification Performance

Neural Magic creates models and recipes that let anyone plug in their own data and leverage SparseML's recipe-driven approach on top of robust training pipelines for the popular ResNet-50 image classification network. Sparsifying removes redundant information from a neural network using techniques such as pruning and quantization. The result is faster inference and smaller files for deployment.
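As a rough sketch of how a recipe drives sparsification during training, the snippet below applies a SparseML recipe to a standard PyTorch loop with ScheduledModifierManager. The recipe path ("recipe.yaml") and the random placeholder data are assumptions for illustration, not values from this page:

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision.models import resnet50
    from sparseml.pytorch.optim import ScheduledModifierManager

    model = resnet50()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Placeholder data loader; substitute your own dataset here.
    train_loader = DataLoader(
        TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 1000, (8,))),
        batch_size=4,
    )

    # "recipe.yaml" is a placeholder for a SparseML recipe, e.g. one that
    # accompanies a pruned ResNet-50 in the SparseZoo.
    manager = ScheduledModifierManager.from_yaml("recipe.yaml")
    optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))

    # Train as usual; the manager schedules pruning per the recipe.
    for epoch in range(manager.max_epochs):
        for inputs, labels in train_loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(inputs), labels)
            loss.backward()
            optimizer.step()

    manager.finalize(model)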

This page walks through the following use cases for trying out the sparsified ResNet-50 models:

  • Compare the models' accuracy and inference performance

  • Run the models for inference in deployment or applications (a minimal sketch follows this list)

  • Train the models on new datasets
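For the inference use case, the sketch below uses the DeepSparse Pipeline API. The SparseZoo stub and image path are illustrative placeholders; check the SparseZoo for the current ResNet-50 stubs:

    from deepsparse import Pipeline

    # Example SparseZoo stub for a pruned-quantized ResNet-50; treat this as
    # a placeholder and look up the exact stub you want in the SparseZoo.
    stub = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate"

    pipeline = Pipeline.create(task="image_classification", model_path=stub)

    # "sample.jpg" is a placeholder path to a local image.
    prediction = pipeline(images=["sample.jpg"])
    print(prediction.labels, prediction.scores)

For training on a new dataset, the recipe-driven loop sketched above applies unchanged: swap in your own data loader and a transfer-learning recipe.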

ResNet-50 Batch Size 64 Performance Comparisons
[Benchmark chart: ResNet-50 v1 | Batch = 64 | AWS c5.12xlarge CPU]
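To run a comparison like this on your own hardware, you can time the DeepSparse Engine directly, as in the hedged sketch below (the SparseZoo stub is a placeholder, and throughput will vary by machine); the deepsparse.benchmark CLI offers a more complete harness:

    import time
    import numpy as np
    from deepsparse import compile_model

    # Placeholder SparseZoo stub for a pruned-quantized ResNet-50.
    stub = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate"
    batch_size = 64

    engine = compile_model(stub, batch_size=batch_size)
    inputs = [np.random.rand(batch_size, 3, 224, 224).astype(np.float32)]

    # Warm up, then time repeated forward passes to estimate throughput.
    for _ in range(5):
        engine(inputs)
    start = time.perf_counter()
    iterations = 20
    for _ in range(iterations):
        engine(inputs)
    elapsed = time.perf_counter() - start
    print(f"{iterations * batch_size / elapsed:.1f} images/sec")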

Sparse Models
