Transfer a Sparsified Model

Sparse transfer learning is the easiest pathway to a sparse model fine-tuned on your own dataset.

Sparse transfer learning works by taking a sparse model pre-trained on a large dataset and fine-tuning it onto a smaller downstream dataset. SparseZoo and SparseML work together to accomplish this goal:

  • SparseZoo is a growing repository of sparse models pre-trained on large datasets ready for fine-tuning
  • SparseML provides convenient training CLIs that run transfer learning while preserving the starting model's level of sparsity

By fine-tuning pre-sparsified models onto your dataset, you can avoid the time, money, and hyperparameter tuning involved with sparsifying a dense model from scratch. Once trained, deploy your model with DeepSparse for GPU-level performance on CPUs.
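The core idea behind sparsity-preserving fine-tuning can be sketched in a few lines. This is a conceptual NumPy illustration only, not SparseML's actual implementation: the pruning mask of the pre-sparsified model is frozen, and after each ordinary gradient update the mask is re-applied so pruned weights stay at zero.

```python
import numpy as np

# Conceptual sketch (not SparseML's API): freeze the pruning mask of a
# pre-sparsified layer and fine-tune only the surviving weights.
rng = np.random.default_rng(0)

# Pre-sparsified weight matrix: roughly 75% of entries pruned to zero.
weights = rng.standard_normal((8, 16))
mask = rng.random((8, 16)) > 0.75          # True = weight kept (~25%)
weights *= mask

sparsity_before = np.mean(weights == 0)

# One fine-tuning step on the downstream task: an ordinary gradient
# update, followed by re-applying the frozen mask so pruned weights
# remain exactly zero.
grad = rng.standard_normal(weights.shape)  # stand-in for a real gradient
weights -= 0.1 * grad
weights *= mask

sparsity_after = np.mean(weights == 0)
print(sparsity_before, sparsity_after)     # sparsity level is unchanged
```

Because the mask is re-applied after every optimizer step, the model's sparsity structure (and therefore its DeepSparse speedup) carries over unchanged from the pre-trained checkpoint to the fine-tuned model.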

Use Case Examples

The examples below walk through use cases leveraging SparseML for sparse transfer learning.

Other Use Cases

More documentation, models, use cases, and examples are continually being added. If you don't see one you're interested in, search the DeepSparse GitHub repo, the SparseML GitHub repo, the SparseZoo website, or ask in the Neural Magic Slack.
