Sparse transfer learning is the easiest pathway for creating a sparse model fine-tuned on your datasets.
Sparse transfer learning works by taking a sparse model pre-trained on a large dataset and fine-tuning it on a smaller downstream dataset. SparseZoo and SparseML work together to accomplish this goal: SparseZoo hosts pre-sparsified checkpoints and transfer recipes, and SparseML provides the training pipelines that apply them to your data.
By fine-tuning pre-sparsified models on your dataset, you avoid the time, cost, and hyperparameter tuning involved in sparsifying a dense model from scratch. Once trained, deploy your model with DeepSparse for GPU-class performance on CPUs.
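To build intuition for what sparse transfer learning preserves, here is a minimal, framework-free sketch (using NumPy, not SparseML's actual API): a fixed sparsity mask taken from the pre-sparsified model is re-applied after every fine-tuning update, so weights that were pruned to zero stay zero while the remaining weights adapt to the downstream task.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-sparsified" weight matrix: roughly 90% of entries pruned to zero.
w = rng.normal(size=(8, 8))
mask = rng.random((8, 8)) > 0.9   # keep ~10% of the weights
w *= mask

# Fine-tune on a synthetic downstream task with plain SGD,
# re-applying the mask after each step to preserve sparsity.
x = rng.normal(size=(32, 8))
y = x @ rng.normal(size=(8, 8))   # synthetic downstream targets

lr = 0.01
for _ in range(100):
    pred = x @ w
    grad = x.T @ (pred - y) / len(x)  # gradient of MSE loss w.r.t. w
    w -= lr * grad
    w *= mask                         # pruned weights stay zero

# The sparsity pattern from pre-sparsification survives fine-tuning.
assert np.all(w[~mask] == 0)
```

In the real workflow, SparseML's transfer-learning recipes handle this bookkeeping (and quantization, distillation, and more) automatically; this sketch only illustrates why the pre-sparsified structure carries over to the fine-tuned model.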
The examples below walk through use cases leveraging SparseML for sparse transfer learning.
More documentation, models, use cases, and examples are continually being added. If you don't see one you're interested in, search the DeepSparse GitHub repo, the SparseML GitHub repo, the SparseZoo website, or ask in the Neural Magic Slack.