Creating a Custom Integration for Sparsifying Models

This page explains how to apply a recipe to a custom model. For more details on the concepts of pruning/quantization as well as how to create recipes, see Sparsifying a Model for SparseML Integrations.

In addition to the supported integrations described on the prior page, SparseML is designed to plug into custom training pipelines. This flexibility enables sparsification of nearly any neural network architecture for custom models and use cases. Once SparseML is installed, the necessary code can be added to most PyTorch/Keras training pipelines with only a few lines of code.

Install Requirements

This page requires the SparseML Torchvision install to run the examples and the Sparsify a Model section below.

Integrate SparseML

To enable sparsification of models with recipes, a few edits must be made to the training pipeline code. Specifically, a ScheduledModifierManager instance takes over the training process and injects the desired sparsification algorithms. To do this properly in PyTorch, the ScheduledModifierManager requires the model instance to modify, the optimizer used for training, and the number of steps_per_epoch so the algorithms are applied at the right time.

For the integration, the following code illustrates all that is needed:

from sparseml.pytorch.optim import ScheduledModifierManager
manager = ScheduledModifierManager.from_yaml(recipe_path)
optimizer = manager.modify(model, optimizer, steps_per_epoch)

# your typical training loop, using model/optimizer as usual

manager.finalize(model)

Walking through this code:

  1. The ScheduledModifierManager is imported from the SparseML Python package.
  2. An instance of the ScheduledModifierManager is created from a recipe stored as a local file or on the SparseZoo (both forms are shown in the sketch after this list).
  3. The optimizer and model are modified by ScheduledModifierManager so that the recipe will be applied while training. A wrapped instance of the training optimizer is returned.
  4. After training has been completed, a finalize call is invoked on the ScheduledModifierManager to release all resources.
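
For illustration, the argument to from_yaml can take either form; the SparseZoo stub below is the one used later on this page, while the local path is a hypothetical placeholder:

from sparseml.pytorch.optim import ScheduledModifierManager

# a recipe stored as a local YAML file (hypothetical path)
manager = ScheduledModifierManager.from_yaml("recipes/resnet50_pruned_quant.yaml")

# or a recipe stored on the SparseZoo (the same stub used in the example below)
manager = ScheduledModifierManager.from_yaml(
    "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_quant-none?recipe_type=original"
)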

A simple training example utilizing PyTorch and Torchvision with this SparseML integration is provided below:

import torch
from torch.nn import Linear
from torch.utils.data import DataLoader
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from sparseml.pytorch.models import resnet50
from sparseml.pytorch.datasets import ImagenetteDataset, ImagenetteSize
from sparseml.pytorch.optim import ScheduledModifierManager

# Model creation
NUM_CLASSES = 10  # number of Imagenette classes
model = resnet50(pretrained=True, num_classes=NUM_CLASSES)

# Dataset creation
batch_size = 64
train_dataset = ImagenetteDataset(train=True, dataset_size=ImagenetteSize.s320, image_size=224)
train_loader = DataLoader(train_dataset, batch_size, shuffle=True, pin_memory=True, num_workers=8)

# Device setup
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Loss and optimizer setup
criterion = CrossEntropyLoss()
optimizer = SGD(model.parameters(), lr=10e-6, momentum=0.9)

# Recipe - in this case, we pull down a recipe from the SparseZoo for ResNet-50
# This can also be a path to a local file
recipe_path = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_quant-none?recipe_type=original"

# SparseML Integration
manager = ScheduledModifierManager.from_yaml(recipe_path)
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))

# Training Loop
for epoch in range(manager.max_epochs):
    running_loss = 0.0
    running_corrects = 0.0
    for inputs, labels in train_loader:
        inputs = inputs.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        with torch.set_grad_enabled(True):
            outputs, _ = model(inputs)
            loss = criterion(outputs, labels)
            _, preds = torch.max(outputs, 1)
            loss.backward()
            optimizer.step()
        running_loss += loss.item() * inputs.size(0)
        running_corrects += torch.sum(preds == labels.data)

    epoch_loss = running_loss / len(train_loader.dataset)
    epoch_acc = running_corrects.double() / len(train_loader.dataset)
    print("Training Loss: {:.4f} Acc: {:.4f}".format(epoch_loss, epoch_acc))

manager.finalize(model)

Create a Recipe

To dive into the details of this recipe and how to edit it, visit Supported Integrations. The resulting recipe is included here for easy integration and testing.

modifiers:
    - !GlobalMagnitudePruningModifier
        init_sparsity: 0.05
        final_sparsity: 0.8
        start_epoch: 0.0
        end_epoch: 30.0
        update_frequency: 1.0
        params: __ALL_PRUNABLE__

    - !SetLearningRateModifier
        start_epoch: 0.0
        learning_rate: 0.05

    - !LearningRateFunctionModifier
        start_epoch: 30.0
        end_epoch: 50.0
        lr_func: cosine
        init_lr: 0.05
        final_lr: 0.001

    - !QuantizationModifier
        start_epoch: 50.0
        freeze_bn_stats_epoch: 53.0

    - !SetLearningRateModifier
        start_epoch: 50.0
        learning_rate: 10e-6

    - !EpochRangeModifier
        start_epoch: 0.0
        end_epoch: 55.0

Sparsify a Model

With the integration and recipe in place, the pipeline is ready to sparsify a model. To begin sparsifying, save the recipe as a local file called recipe.yaml. Next, pass the path of that file to the training script as the recipe_path argument used in the ScheduledModifierManager.from_yaml(recipe_path) line, as sketched below. With that completed, start the training pipeline, and the result will be a sparsified model.
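
As a minimal sketch, assuming the recipe above has been saved next to the training script as recipe.yaml, the only change from the earlier example is the recipe path; model, optimizer, and train_loader are the same objects, and the training loop is unchanged:

from sparseml.pytorch.optim import ScheduledModifierManager

# assumes the recipe above was saved locally as recipe.yaml
recipe_path = "recipe.yaml"

manager = ScheduledModifierManager.from_yaml(recipe_path)
# sanity check: should report 55 epochs, matching the EpochRangeModifier end_epoch above
print(manager.max_epochs)

# wrap the existing optimizer so the recipe is applied during training
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))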
