DeepSparse is a CPU inference runtime that takes advantage of sparsity within neural networks to execute inference quickly. Coupled with SparseML, an open-source optimization library, DeepSparse enables you to achieve GPU-class performance on commodity hardware.
DeepSparse is available in two editions:
DeepSparse Community: licensed under the Neural Magic DeepSparse Community License (see License below).
DeepSparse Enterprise: requires a Trial License or can be fully licensed for production, commercial applications.
DeepSparse Community is available as a container image hosted on the GitHub Container Registry.
docker pull ghcr.io/neuralmagic/deepsparse:1.4.2
docker tag ghcr.io/neuralmagic/deepsparse:1.4.2 deepsparse-docker
docker run -it deepsparse-docker
DeepSparse Community is also available via PyPI. We recommend using a virtual environment.
pip install deepsparse
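Once installed, a quick sanity check is to import the package and print its version. This is a minimal sketch; it assumes the package exposes __version__, as most Python packages do.

# Verify the installation by importing DeepSparse and printing its version
# (assumption: deepsparse exposes __version__; otherwise the import alone
# confirms the package is available).
import deepsparse

print(deepsparse.__version__)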
DeepSparse includes three deployment APIs:
Engine: the lowest-level API. Compile an ONNX model, pass raw tensors, and receive the raw output scores.
Pipeline: wraps Engine with pre- and post-processing. Pass raw data and receive the post-processed prediction.
Server: wraps Pipelines with REST APIs. Send raw data over HTTP and receive the post-processed prediction.
The example below downloads a 90% pruned-quantized BERT model for sentiment analysis in ONNX format from SparseZoo, compiles the model, and runs inference on randomly generated input.
from deepsparse import Engine
from deepsparse.utils import generate_random_inputs, model_to_path

# download onnx, compile
zoo_stub = "zoo:nlp/sentiment_analysis/obert-base/pytorch/huggingface/sst2/pruned90_quant-none"
batch_size = 1
compiled_model = Engine(model=zoo_stub, batch_size=batch_size)

# run inference (input is raw numpy tensors, output is raw scores)
inputs = generate_random_inputs(model_to_path(zoo_stub), batch_size)
output = compiled_model(inputs)
print(output)

# > [array([[-0.3380675 , 0.09602544]], dtype=float32)] << raw scores
Pipeline is the default API for interacting with DeepSparse. Similar to Hugging Face Pipelines, DeepSparse Pipelines wrap Engine with pre- and post-processing (as well as other utilities), enabling you to send raw data to DeepSparse and receive the post-processed prediction.
The example below downloads a 90% pruned-quantized BERT model for sentiment analysis in ONNX format from SparseZoo, sets up a pipeline, and runs inference on sample data.
from deepsparse import Pipeline

# download onnx, set up pipeline
zoo_stub = "zoo:nlp/sentiment_analysis/obert-base/pytorch/huggingface/sst2/pruned90_quant-none"
sentiment_analysis_pipeline = Pipeline.create(
    task="sentiment-analysis",   # name of the task
    model_path=zoo_stub,         # zoo stub or path to local onnx file
)

# run inference (input is a sentence, output is the prediction)
prediction = sentiment_analysis_pipeline("I love using DeepSparse Pipelines")
print(prediction)
# > labels=['positive'] scores=[0.9954759478569031]
Server wraps Pipelines with REST APIs, enabling you to stand up a model serving endpoint running DeepSparse. This lets you send raw data to DeepSparse over HTTP and receive the post-processed predictions.
DeepSparse Server is launched from the command line, configured via arguments or a server configuration file. The following downloads a 90% pruned-quantized BERT model for sentiment analysis in ONNX format from SparseZoo and launches a sentiment analysis endpoint:
deepsparse.server \
  --task sentiment-analysis \
  --model_path zoo:nlp/sentiment_analysis/obert-base/pytorch/huggingface/sst2/pruned90_quant-none
Sending a request:
import requests

url = "http://localhost:5543/predict"  # Server's port defaults to 5543
obj = {"sequences": "Snorlax loves my Tesla!"}

response = requests.post(url, json=obj)
print(response.text)
# {"labels":["positive"],"scores":[0.9965094327926636]}
DeepSparse accepts models in the ONNX format. ONNX models can be passed in one of two ways:
SparseZoo Stub: SparseZoo is an open-source repository of sparse models. The examples on this page use SparseZoo stubs to identify models and download them for deployment in DeepSparse.
Local ONNX File: Users can provide their own ONNX models, whether dense or sparse. For example:
wget https://github.com/onnx/models/raw/main/vision/classification/mobilenet/model/mobilenetv2-7.onnx
from deepsparse import Engine
from deepsparse.utils import generate_random_inputs

onnx_filepath = "mobilenetv2-7.onnx"
batch_size = 16

# Generate random sample input
inputs = generate_random_inputs(onnx_filepath, batch_size)

# Compile and run
compiled_model = Engine(model=onnx_filepath, batch_size=batch_size)
outputs = compiled_model(inputs)
print(outputs[0].shape)
# (16, 1000) << batch, num_classes
DeepSparse offers different inference scenarios based on your use case.
Single-stream scheduling (default): the latency/synchronous scenario; requests execute serially. This scheduler is highly optimized for minimum per-request latency and uses all of the resources provided to the engine on every request.
Multi-stream scheduling: the throughput/asynchronous scenario; requests execute in parallel. The most common use cases for the multi-stream scheduler are when parallelism within a request is low relative to core count and when requests arrive asynchronously without time to batch them.
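As a sketch of how a scheduler can be selected, the Engine constructor in recent DeepSparse releases accepts a scheduler argument; the exact keywords used below (scheduler, num_streams) are assumptions worth verifying against your installed version.

from deepsparse import Engine
from deepsparse.utils import generate_random_inputs, model_to_path

zoo_stub = "zoo:nlp/sentiment_analysis/obert-base/pytorch/huggingface/sst2/pruned90_quant-none"

# Assumed keywords: scheduler="multi_stream" selects the throughput/asynchronous
# scenario and num_streams caps concurrent requests; omit both to keep the
# default single-stream (latency) scheduler.
engine = Engine(
    model=zoo_stub,
    batch_size=1,
    scheduler="multi_stream",
    num_streams=4,
)

inputs = generate_random_inputs(model_to_path(zoo_stub), 1)
print(engine(inputs))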
DeepSparse Community Edition gathers basic usage telemetry including, but not limited to, Invocations, Package, Version, and IP Address for Product Usage Analytics purposes. Review Neural Magic's Products Privacy Policy for further details on how we process this data.
To disable Product Usage Analytics, run the command:
export NM_DISABLE_ANALYTICS=True
Confirm that telemetry is disabled by checking the info logs streamed during engine invocation for the phrase "Skipping Neural Magic's latest package version check." For additional assistance, reach out through the DeepSparse GitHub Issue queue.
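If you prefer to opt out from Python (for example, in a notebook), the environment variable can also be set before deepsparse is imported. This is a sketch; the shell export above remains the documented method, and it is an assumption that the variable is read when the package initializes.

import os

# Set the opt-out flag before importing deepsparse so it is visible when the
# engine initializes (assumption: the variable is read at import/engine-creation
# time, matching the shell export above).
os.environ["NM_DISABLE_ANALYTICS"] = "True"

import deepsparse  # imported after setting the environment variable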
Contribute with code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.
For user help or questions about DeepSparse, sign up or log in to our Neural Magic Community Slack. We are growing the community member by member and are happy to see you there. Bugs, feature requests, or additional questions can also be posted to our GitHub Issue Queue. You can get the latest news, webinar and event invites, research papers, and other ML Performance tidbits by subscribing to the Neural Magic community.
For more general questions about Neural Magic, complete this form.
DeepSparse Community is licensed under the Neural Magic DeepSparse Community License. Some source code, example files, and scripts included in the deepsparse GitHub repository or directory are licensed under the Apache License Version 2.0 as noted.
DeepSparse Enterprise requires a Trial License or can be fully licensed for production, commercial applications.
Find this project useful in your research or other communications? Please consider citing:
@InProceedings{pmlr-v119-kurtz20a,
  title     = {Inducing and Exploiting Activation Sparsity for Fast Inference on Deep Neural Networks},
  author    = {Kurtz, Mark and Kopinsky, Justin and Gelashvili, Rati and Matveev, Alexander and Carr, John and Goin, Michael and Leiserson, William and Moore, Sage and Nell, Bill and Shavit, Nir and Alistarh, Dan},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5533--5543},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  address   = {Virtual},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/kurtz20a/kurtz20a.pdf},
  url       = {http://proceedings.mlr.press/v119/kurtz20a.html}
}

@article{DBLP:journals/corr/abs-2111-13445,
  author     = {Eugenia Iofinova and Alexandra Peste and Mark Kurtz and Dan Alistarh},
  title      = {How Well Do Sparse Imagenet Models Transfer?},
  journal    = {CoRR},
  volume     = {abs/2111.13445},
  year       = {2021},
  url        = {https://arxiv.org/abs/2111.13445},
  eprinttype = {arXiv},
  eprint     = {2111.13445},
  timestamp  = {Wed, 01 Dec 2021 15:16:43 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2111-13445.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}