🚨 Note: The current Docs site is outdated. Neural Magic's 1.7 release slated for January 2024 will include a Docs refresh. Meanwhile, please consult our GitHub repositories for the content: DeepSparse, SparseML, SparseZoo.

What are Sparsification Recipes?

Sparsification recipes are YAML or Markdown files that encode the instructions for how to sparsify a model or sparse transfer learn from one. These instructions specify the sparsification algorithms to apply along with their hyperparameters. Recipes plug into the SparseML library, making it easy to apply sparsification or sparse transfer learning to any neural network and training pipeline.
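To give a sense of the format, here is a minimal sketch of what such a recipe can look like. The modifier class names follow SparseML's conventions, but the hyperparameter values below are illustrative placeholders, not a tuned configuration:

```yaml
# Illustrative recipe sketch; values are placeholders, not a tuned config.
modifiers:
    - !EpochRangeModifier
        start_epoch: 0.0
        end_epoch: 10.0

    - !SetLearningRateModifier
        learning_rate: 0.01

    - !GMPruningModifier
        start_epoch: 1.0
        end_epoch: 8.0
        init_sparsity: 0.05
        final_sparsity: 0.85
        update_frequency: 0.5
        params: __ALL_PRUNABLE__
```

Each entry in the `modifiers` list is one instruction: here, a training-length modifier, a learning-rate modifier, and a gradual magnitude pruning modifier that ramps sparsity from 5% to 85% between epochs 1 and 8.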

All SparseML sparsification APIs are designed to work with recipes. The files encode the instructions needed for modifying the model and training process as a list of modifiers. Example modifiers range from setting the optimizer's learning rate to applying gradual magnitude pruning. The rest of the SparseML system parses the recipe files into a native format for the desired framework and applies the modifications to the model and training pipeline.
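To make the modifier idea concrete, here is a small, stdlib-only sketch (plain Python, not SparseML's actual implementation) of the two pieces a gradual magnitude pruning modifier combines: a sparsity schedule that ramps up over training, and magnitude-based weight zeroing. The function names and the cubic schedule form are illustrative assumptions:

```python
# Illustrative sketch of gradual magnitude pruning; names and the exact
# schedule are assumptions for explanation, not SparseML source code.

def gmp_sparsity(epoch, start_epoch, end_epoch, init_sparsity, final_sparsity):
    """Cubic sparsity schedule: ramps from init_sparsity to final_sparsity."""
    if epoch <= start_epoch:
        return init_sparsity
    if epoch >= end_epoch:
        return final_sparsity
    progress = (epoch - start_epoch) / (end_epoch - start_epoch)
    # Cubic interpolation prunes aggressively early, then tapers off.
    return final_sparsity + (init_sparsity - final_sparsity) * (1.0 - progress) ** 3

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    n_prune = int(round(len(weights) * sparsity))
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

# At each update step, the modifier recomputes the target sparsity and reprunes:
weights = [0.9, -0.01, 0.5, 0.02]
target = gmp_sparsity(epoch=4.5, start_epoch=1.0, end_epoch=8.0,
                      init_sparsity=0.05, final_sparsity=0.85)
pruned = magnitude_prune(weights, target)  # pruned == [0.9, 0.0, 0.0, 0.0]
```

In SparseML itself, this logic lives inside the modifier objects parsed from the recipe, so the training loop only needs to hand the model and optimizer to the recipe machinery rather than implement pruning directly.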

The easiest ways to obtain recipes are to use the pre-configured recipes in SparseZoo or to generate them with Sparsify's automatic creation. In particular, for users performing sparse transfer learning from our pre-sparsified models, we highly recommend the pre-made transfer learning recipes found in SparseZoo. However, power users may prefer to create their own recipes for more fine-grained control, or to add custom modifiers when sparsifying a new model from scratch.

Follow the links below for more detail on how to create and use recipes.


- What is Sparsification?
- Creating Sparsification Recipes