Enabling Pipelines to Work with SparseML Recipes

You can use recipes with common training pipelines to sparsify your custom model.

We currently support PyTorch, Keras, and TensorFlow V1. The pseudocode below works for both sparse transfer learning and sparsifying from scratch; simply pass the appropriate recipe.
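For example, switching between the two is just a matter of which recipe you pass to the manager. A minimal sketch using the PyTorch manager shown below (the recipe paths are hypothetical):

from sparseml.pytorch.optim import ScheduledModifierManager

# sparsifying from scratch: a recipe that introduces pruning over training
manager = ScheduledModifierManager.from_yaml("recipes/sparsify_from_scratch.yaml")

# sparse transfer learning: a recipe that maintains the existing sparsity structure
manager = ScheduledModifierManager.from_yaml("recipes/sparse_transfer.yaml")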

See the SparseML installation page for installation requirements of each integration.

PyTorch Pipelines

The PyTorch sparsification libraries are located under the sparseml.pytorch.optim package. Inside are APIs designed to make model sparsification as easy as possible by integrating seamlessly into PyTorch training pipelines.

First, a ScheduledModifierManager is created from a recipe file; it parses the recipe's hyperparameters at initialization. The manager's modify() function wraps an optimizer or optimizer-like object (anything with a step function) to override the step invocation. With this setup, the training process can be modified to sparsify the model.

To enable all of this, the integration code you need to write is only a handful of lines:

from sparseml.pytorch.optim import ScheduledModifierManager

## fill in definitions below
model = Model()  # model definition
optimizer = Optimizer()  # optimizer definition
train_data = TrainData()  # train data definition
batch_size = BATCH_SIZE  # training batch size
steps_per_epoch = len(train_data) // batch_size

manager = ScheduledModifierManager.from_yaml(PATH_TO_RECIPE)
optimizer = manager.modify(model, optimizer, steps_per_epoch)

# PyTorch training code...

manager.finalize(model)
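The elided training code is a standard PyTorch loop; because modify wrapped the optimizer, each step invocation also runs the recipe's modifiers. A minimal sketch, assuming a classification model with cross-entropy loss and hypothetical train_loader and num_epochs definitions:

import torch.nn.functional as F

for epoch in range(num_epochs):
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = F.cross_entropy(outputs, labels)
        loss.backward()
        optimizer.step()  # the wrapped step also applies the scheduled modifiers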

Keras Pipelines

The Keras sparsification libraries are located under the sparseml.keras.optim package. Inside are APIs designed to make model sparsification as easy as possible by integrating seamlessly into Keras training pipelines.

The integration is done using the ScheduledModifierManager class, which can be created from a recipe file. Its modify method alters the Keras objects for the desired algorithms and returns the modified model, the modified optimizer, and any callbacks needed to adjust the training process. The model and optimizer can then be used as usual, while the callbacks must be passed into the fit or fit_generator function. If you are using train_on_batch, the callbacks must instead be invoked after each call. After training completes, call the manager's finalize method to clean up the graph for exporting.

To enable all of this, the integration code you'll need to write is only a handful of lines:

from sparseml.keras.optim import ScheduledModifierManager

## fill in definitions below
model = None  # your model definition
optimizer = None  # your optimizer definition
num_train_batches = len(train_data) // batch_size  # your number of batches per training epoch

manager = ScheduledModifierManager.from_yaml("/PATH/TO/recipe.yaml")
model, optimizer, callbacks = manager.modify(
    model, optimizer, steps_per_epoch=num_train_batches
)

# Keras compilation and training code...
# Be sure to compile the model after calling modify and to pass the callbacks
# into the fit or fit_generator function.
# Note: if you are using train_on_batch, you will need to invoke the callbacks
# after every step instead.
model.compile(...)
model.fit(..., callbacks=callbacks)

# finalize cleans up the graph for export
save_model = manager.finalize(model)
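If you train with train_on_batch instead of fit, Keras will not drive the returned callbacks for you. A hedged sketch of invoking them manually, assuming the callbacks follow the standard Keras Callback interface and hypothetical train_batches and num_epochs definitions:

for epoch in range(num_epochs):
    for batch_index, (x_batch, y_batch) in enumerate(train_batches):
        model.train_on_batch(x_batch, y_batch)
        # invoke the sparsification callbacks by hand after each step
        for callback in callbacks:
            callback.on_train_batch_end(batch_index)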

TensorFlow V1 Pipelines

The TensorFlow sparsification libraries for TensorFlow version 1.X are located under the sparseml.tensorflow_v1.optim package. Inside are APIs designed to make model sparsification as easy as possible by integrating seamlessly into TensorFlow V1 training pipelines.

The integration is done using the ScheduledModifierManager class, which can be created from a recipe file. This class handles modifying the TensorFlow graph for the desired algorithms. With this setup, the training process can be modified to sparsify the model.

Estimator-Based Pipelines

Estimator-based pipelines are simpler to integrate with than session-based pipelines: the ScheduledModifierManager can override the necessary callbacks in the estimator to modify the graph through its modify_estimator function.

from sparseml.tensorflow_v1.optim import ScheduledModifierManager

## fill in definitions below
estimator = None  # your estimator definition
num_train_batches = len(train_data) // batch_size  # your number of batches per training epoch

manager = ScheduledModifierManager.from_yaml("/PATH/TO/config.yaml")
manager.modify_estimator(estimator, steps_per_epoch=num_train_batches)

# Normal estimator training code...
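From there, training proceeds as it normally would. A minimal sketch of the elided training call, assuming a standard tf.estimator.Estimator and hypothetical train_input_fn and num_epochs definitions:

estimator.train(input_fn=train_input_fn, steps=num_train_batches * num_epochs)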

Session-Based Pipelines

Session-based pipelines require a little more work than estimator-based pipelines, but the integration is still only a few lines of code. After the graph is created, call the manager's create_ops method. This modifies the graph as needed for the algorithms and returns the modifying ops and extras. After creating the session and while training, call session.run with the modifying ops after each step. The extras contain objects such as TensorBoard summaries for the modifiers, to be used if desired. Finally, once training completes, complete_graph must be called to remove the modifying ops so the graph can be saved and exported.

from sparseml.tensorflow_v1.utils import tf_compat
from sparseml.tensorflow_v1.optim import ScheduledModifierManager


## fill in definitions below
with tf_compat.Graph().as_default() as graph:
    # Normal graph setup...
    num_train_batches = len(train_data) // batch_size  # your number of batches per training epoch

    # Modify the graph; be sure this is called after the graph is created
    # and before the session is created
    manager = ScheduledModifierManager.from_yaml("/PATH/TO/config.yaml")
    mod_ops, mod_extras = manager.create_ops(steps_per_epoch=num_train_batches)

    with tf_compat.Session() as sess:
        # Normal training code...
        # Call sess.run with the mod_ops after every batch update
        sess.run(mod_ops)

        # Call complete_graph after training is done
        manager.complete_graph()
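Once complete_graph has removed the modifying ops, the graph can be saved with the usual TensorFlow V1 machinery from inside the session scope. A minimal sketch, assuming the standard Saver API (the checkpoint path is hypothetical):

saver = tf_compat.train.Saver()
saver.save(sess, "/PATH/TO/model.ckpt")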