deepsparse package

Submodules

deepsparse.benchmark module

Code related to benchmarking batched inference runs

class deepsparse.benchmark.BatchBenchmarkResult(time_start: float, time_end: float, batch_size: int, inputs: Union[None, List[numpy.ndarray]] = None, outputs: Union[None, List[numpy.ndarray], Dict[str, numpy.ndarray]] = None, extras: Optional[Any] = None)[source]

Bases: object

A benchmark result for a batched inference run

Parameters
  • time_start – The system time when the run for the batch was started

  • time_end – The system time when the run for the batch ended

  • batch_size – The size of the batch that was benchmarked

  • inputs – Optional batch inputs that were given for the run

  • outputs – Optional batch outputs that were given for the run

  • extras – Optional batch extras to store any other data for the run

property batch_size

The size of the batch that was benchmarked

property batches_per_second

The number of batches that could be run in one second based on this result

property extras

Batch extras to store any other data for the run

property inputs

Batch inputs that were given for the run, if any

property items_per_second

The number of items that could be run in one second based on this result

property ms_per_batch

The number of milliseconds it took to run the batch

property ms_per_item

The averaged number of milliseconds it took to run each item in the batch

property outputs

Batch outputs that were given for the run, if any

property time_elapsed

The time elapsed for the entire run (end - start)

property time_end

The system time when the run for the batch ended

property time_start

The system time when the run for the batch was started
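The class is a simple container; the timing properties above are all derived from time_start, time_end, and batch_size. A minimal illustrative sketch of constructing a result by hand (normally the engine's benchmark helpers create these for you):

import time

from deepsparse.benchmark import BatchBenchmarkResult

start = time.time()
# ... run a single batch of size 8 through an engine here ...
time.sleep(0.05)  # stand-in for real work
end = time.time()

result = BatchBenchmarkResult(time_start=start, time_end=end, batch_size=8)
print(result.time_elapsed)      # end - start, in seconds
print(result.ms_per_batch)      # the elapsed time in milliseconds
print(result.items_per_second)  # batch_size divided by the elapsed time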

class deepsparse.benchmark.BenchmarkResults[source]

Bases: Iterable

The benchmark results for a list of batched inference runs

append_batch(time_start: float, time_end: float, batch_size: int, inputs: Union[None, List[numpy.ndarray]] = None, outputs: Union[None, List[numpy.ndarray], Dict[str, numpy.ndarray]] = None, extras: Optional[Any] = None)[source]

Add a recorded batch to the current results

Parameters
  • time_start – The system time when the run for the batch was started

  • time_end – The system time when the run for the batch ended

  • batch_size – The size of the batch that was benchmarked

  • inputs – Optional batch inputs that were given for the run

  • outputs – Optional batch outputs that were given for the run

  • extras – Optional batch extras to store any other data for the run

property batch_sizes

the list of all batch run sizes that have been added

property batch_times

the list of all batch run times that have been added

property batch_times_mean

the mean of all the batch run times that have been added

property batch_times_median

the median of all the batch run times that have been added

property batch_times_std

the standard deviation of all the batch run times that have been added

property batches_per_second

The number of batches that could be run in one second based on this result

property inputs

Batch inputs that were given for the run, if any

property items_per_second

The number of items that could be run in one second based on this result

property ms_per_batch

The number of milliseconds it took to run the batch

property ms_per_item

The averaged number of milliseconds it took to run each item in the batch

property num_batches

the number of batches that have been added

property num_items

the number of items across all batches that have been added

property outputs

Batch outputs that were given for the run, if any

property results

the list of recorded batch results
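A minimal sketch of recording batches by hand and reading the aggregate statistics (the engine's benchmark methods do this internally):

import time

from deepsparse.benchmark import BenchmarkResults

results = BenchmarkResults()
for _ in range(3):
    start = time.time()
    # ... run one batch of size 4 through an engine here ...
    time.sleep(0.01)  # stand-in for real work
    results.append_batch(time_start=start, time_end=time.time(), batch_size=4)

print(results.num_batches)       # 3
print(results.num_items)         # 12 items across all batches
print(results.batch_times_mean)  # mean seconds per recorded batch
print(results.items_per_second)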

deepsparse.cpu module

Functionality for detecting the details of the currently available CPU

class deepsparse.cpu.architecture(*args, **kwargs)[source]

Bases: dict

A class containing all the architecture details for the current CPU.

Members include (but are not limited to):

  • vendor - a string name of the CPU vendor

  • isa - a string containing avx2, avx512, or unknown

  • vnni - a boolean indicating VNNI support

  • num_sockets - integer number of physical sockets

  • available_sockets - integer number of sockets available for use

  • cores_per_socket - integer number of physical cores per socket

  • available_cores_per_socket - integer number of available cores per socket

  • threads_per_core - integer physical number of threads per core

  • available_threads_per_core - integer available number of threads per core

  • L1_instruction_cache_size - L1 instruction cache size in bytes

  • L1_data_cache_size - L1 data cache size in bytes

  • L2_cache_size - L2 cache size in bytes

  • L3_cache_size - L3 cache size in bytes

property num_physical_cores

the total number of cores available on the current machine

property num_threads

the total number of hyperthreads available on the current machine

override_isa(value: str)[source]

Set the isa to the desired value.

Parameters

value – the value to update the isa to

property threads_per_socket

the number of hyperthreads available per socket on the current machine

deepsparse.cpu.cpu_architecture() → deepsparse.cpu.architecture[source]

Detect the CPU details on Linux systems. If any other OS is used, an exception will be raised.

Specifically:
  • the number of physical cores available per socket on the system

  • detects the vector instruction set available (avx2, avx512)

  • if vnni is available

NM_ARCH environment variable can be used to override the avx instruction set detection

Returns

an instance of the architecture class
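An illustrative sketch of reading the detection results; since architecture subclasses dict, its members are assumed here to be accessible as keys, while num_physical_cores and the other properties above are attributes:

from deepsparse.cpu import cpu_architecture

arch = cpu_architecture()
print(arch["vendor"])           # CPU vendor string
print(arch["isa"])              # "avx2", "avx512", or "unknown"
print(arch["vnni"])             # True if VNNI is supported
print(arch.num_physical_cores)  # total physical cores on the machine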

deepsparse.cpu.cpu_avx2_compatible() → bool[source]

Returns

True if the current CPU has the AVX2 or AVX512 instruction set, used for running neural networks performantly (AVX2 alone is less performant than AVX512)

deepsparse.cpu.cpu_avx512_compatible() → bool[source]

Returns

True if the current CPU has the AVX512 instruction set, used for running neural networks performantly

deepsparse.cpu.cpu_details() → Tuple[int, str, bool][source]

Detect the CPU details on Linux systems. If any other OS is used, an exception will be raised.

Specifically:
  • the number of physical cores available per socket on the system

  • detects the vector instruction set available (avx2, avx512)

  • if vnni is available

NM_ARCH environment variable can be used to override the avx instruction set detection

Returns

a tuple containing the detected cpu information (number of physical cores per socket, avx instruction set, vnni support)

deepsparse.cpu.cpu_vnni_compatible() → bool[source]

Returns

True if the current CPU has the VNNI instruction set, used for running int8 quantized networks performantly.
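A short sketch combining the detection helpers to decide what the current machine supports; the printed messages are illustrative:

from deepsparse.cpu import (
    cpu_avx512_compatible,
    cpu_details,
    cpu_vnni_compatible,
)

cores_per_socket, avx_type, vnni = cpu_details()
print(f"{cores_per_socket} cores/socket, isa={avx_type}, vnni={vnni}")

if not cpu_avx512_compatible():
    print("AVX2 only: networks will run, but less performantly than with AVX512")
if cpu_vnni_compatible():
    print("VNNI available: int8 quantized networks will run performantly")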

deepsparse.cpu.print_hardware_capability()[source]

Print out the detected CPU’s hardware capability and general support for model performance within the DeepSparse Engine.

deepsparse.engine module

Code related to interfacing with a Neural Network in the DeepSparse Engine using python

class deepsparse.engine.Engine(model: Union[str, sparsezoo.objects.model.Model, sparsezoo.objects.file.File], batch_size: int, num_cores: int, num_sockets: Optional[int] = None, scheduler: Optional[deepsparse.engine.Scheduler] = None)[source]

Bases: object

Create a new DeepSparse Engine that compiles the given onnx file for GPU-class performance on commodity CPUs.

Note 1: Engines are compiled for a specific batch size and for a specific number of CPU cores.

Example:
# create an engine for batch size 1 on all available cores
engine = Engine("path/to/onnx", batch_size=1, num_cores=None)
Parameters
  • model – Either a path to the model’s onnx file, a SparseZoo model stub prefixed by ‘zoo:’, a SparseZoo Model object, or a SparseZoo ONNX File object that defines the neural network

  • batch_size – The batch size of the inputs to be used with the engine

  • num_cores – The number of physical cores to run the model on. If more cores are requested than are available on a single socket, the engine will try to distribute them evenly across as few sockets as possible.

  • num_sockets – This parameter no longer functions and will be removed in a future version.

  • scheduler – The kind of scheduler to execute with. Pass None for the default.

property batch_size

The batch size of the inputs to be used with the model

benchmark(inp: List[numpy.ndarray], num_iterations: int = 20, num_warmup_iterations: int = 5, include_inputs: bool = False, include_outputs: bool = False, show_progress: bool = False) → deepsparse.benchmark.BenchmarkResults[source]

A convenience function for quickly benchmarking the instantiated model on a given input in the DeepSparse Engine. After executing, will return the summary statistics for benchmarking.

Parameters
  • inp – The list of inputs to pass to the engine for benchmarking. The expected order is the inputs order as defined in the ONNX graph.

  • num_iterations – The number of iterations to run benchmarking for. Default is 20

  • num_warmup_iterations – The number of iterations to warm up the engine before benchmarking. These executions will not be counted in the benchmark results that are returned. Useful and recommended to bring the system to a steady state. Default is 5

  • include_inputs – If True, inputs from forward passes during benchmarking will be added to the results. Default is False

  • include_outputs – If True, outputs from forward passes during benchmarking will be added to the results. Default is False

  • show_progress – If True, will display a progress bar. Default is False

Returns

the results of benchmarking
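A minimal sketch of benchmarking a compiled engine, assuming a model with a single [batch, 3, 224, 224] float32 input as in the examples in this section:

import numpy

from deepsparse.engine import Engine

engine = Engine("path/to/onnx", batch_size=16, num_cores=None)
inp = [numpy.random.rand(16, 3, 224, 224).astype(numpy.float32)]

results = engine.benchmark(inp, num_iterations=20, num_warmup_iterations=5)
print(results.ms_per_batch)
print(results.items_per_second)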

benchmark_loader(loader: Iterable[List[numpy.ndarray]], num_iterations: int = 20, num_warmup_iterations: int = 5, include_inputs: bool = False, include_outputs: bool = False, show_progress: bool = False) → deepsparse.benchmark.BenchmarkResults[source]

A convenience function for quickly benchmarking the instantiated model on a given DataLoader in the DeepSparse Engine. After executing, will return the summary statistics for benchmarking.

Parameters
  • loader – An iterator of inputs to pass to the engine for benchmarking. The expected order of each input is as defined in the ONNX graph.

  • num_iterations – The number of iterations to run benchmarking for. Default is 20

  • num_warmup_iterations – The number of iterations to warm up the engine before benchmarking. These executions will not be counted in the benchmark results that are returned. Useful and recommended to bring the system to a steady state. Default is 5

  • include_inputs – If True, inputs from forward passes during benchmarking will be added to the results. Default is False

  • include_outputs – If True, outputs from forward passes during benchmarking will be added to the results. Default is False

  • show_progress – If True, will display a progress bar. Default is False

Returns

the results of benchmarking
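The loader can be any iterable that yields input lists. A minimal sketch using a generator of random data, under the same input-shape assumption as above:

import numpy

from deepsparse.engine import Engine

engine = Engine("path/to/onnx", batch_size=1, num_cores=None)

# any iterable of input lists works; 100 random samples here
loader = (
    [numpy.random.rand(1, 3, 224, 224).astype(numpy.float32)] for _ in range(100)
)

results = engine.benchmark_loader(loader, num_iterations=20, num_warmup_iterations=5)
print(results.batch_times_mean)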

property cpu_avx_type

The detected CPU AVX type that Neural Magic is running with. One of {avx2, avx512}. AVX instructions give significant execution speedup, with avx512 > avx2.

property cpu_vnni

True if VNNI support was detected on the CPU, False otherwise. VNNI gives performance benefits for quantized networks.

mapped_run(inp: List[numpy.ndarray], val_inp: bool = True) → Dict[str, numpy.ndarray][source]

Run given inputs through the model for inference. Returns the result as a dictionary of numpy arrays corresponding to the output names of the model as defined in the ONNX graph.

Note 1: this function can add a performance hit in certain cases. If using it, validate that you do not incur a performance hit by comparing with the regular run function.

See @run for more details on specific setup for the inputs.

Example:
engine = Engine("path/to/onnx", batch_size=1)
inp = [numpy.random.rand(1, 3, 224, 224).astype(numpy.float32)]
out = engine.mapped_run(inp)
assert isinstance(out, Dict)
Parameters
  • inp – The list of inputs to pass to the engine for inference. The expected order is the inputs order as defined in the ONNX graph.

  • val_inp – Validate the input to the model to ensure numpy array inputs are setup correctly for the DeepSparse Engine

Returns

The dictionary of outputs from the model after executing over the inputs

property model_path

The local path to the model file the current instance was compiled from

property num_cores

The number of physical cores the current instance is running on

property num_sockets

The number of sockets the engine is compiled to run on

run(inp: List[numpy.ndarray], val_inp: bool = True) → List[numpy.ndarray][source]

Run given inputs through the model for inference. Returns the result as a list of numpy arrays corresponding to the outputs of the model as defined in the ONNX graph.

Note 1: the input dimensions must match what is defined in the ONNX graph. To avoid extra time in memory shuffles, the best use case is to format both the ONNX graph and the input in channels-first order; e.g., convert [batch, height, width, channels] to [batch, channels, height, width]

Note 2: the input type for the numpy arrays must match what is defined in the ONNX graph. Generally float32 is most common, but int8 and int16 are used for certain layer and input types such as with quantized models.

Note 3: the numpy arrays must be contiguous in memory, use numpy.ascontiguousarray(array) to fix if not.

Example:
engine = Engine("path/to/onnx", batch_size=1, num_cores=None)
inp = [numpy.random.rand(1, 3, 224, 224).astype(numpy.float32)]
out = engine.run(inp)
assert isinstance(out, List)
Parameters
  • inp – The list of inputs to pass to the engine for inference. The expected order is the inputs order as defined in the ONNX graph.

  • val_inp – Validate the input to the model to ensure numpy array inputs are setup correctly for the DeepSparse Engine

Returns

The list of outputs from the model after executing over the inputs

timed_run(inp: List[numpy.ndarray], val_inp: bool = True) → Tuple[List[numpy.ndarray], float][source]

Convenience method for timing a model inference run. Returns the result as a tuple containing (the outputs from @run, time taken)

See @run for more details.

Example:
engine = Engine("path/to/onnx", batch_size=1, num_cores=None)
inp = [numpy.random.rand(1, 3, 224, 224).astype(numpy.float32)]
out, time = engine.timed_run(inp)
assert isinstance(out, List)
assert isinstance(time, float)
Parameters
  • inp – The list of inputs to pass to the engine for inference. The expected order is the inputs order as defined in the ONNX graph.

  • val_inp – Validate the input to the model to ensure numpy array inputs are setup correctly for the DeepSparse Engine

Returns

A tuple of the list of outputs from the model after executing over the inputs and the time taken to run

class deepsparse.engine.Scheduler(value)[source]

Bases: enum.Enum

Scheduler kinds to determine Engine execution strategy. For most synchronous cases, the default single_stream is recommended. For running a model server or parallel inferences, try multi_stream for maximum utilization of hardware.

  • default: maps to single_stream

  • single_stream: requests from separate threads execute serially

  • multi_stream: requests from separate threads execute in parallel

default = 'default'

static from_str(key: str)[source]

multi_stream = 'multi_stream'

single_stream = 'single_stream'
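A brief sketch of selecting a scheduler; multi_stream suits server-style workloads where several threads submit inferences concurrently:

from deepsparse.engine import Scheduler, compile_model

engine = compile_model(
    "path/to/onnx", batch_size=1, scheduler=Scheduler.multi_stream
)

# from_str is convenient when the kind comes from a config string;
# it is assumed here to map the member's own name back to the member
assert Scheduler.from_str("multi_stream") == Scheduler.multi_stream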
deepsparse.engine.analyze_model(model: Union[str, sparsezoo.objects.model.Model, sparsezoo.objects.file.File], inp: List[numpy.ndarray], batch_size: int = 1, num_cores: Optional[int] = None, num_iterations: int = 20, num_warmup_iterations: int = 5, optimization_level: int = 1, imposed_as: Optional[float] = None, imposed_ks: Optional[float] = None, num_sockets: Optional[int] = None, scheduler: Optional[deepsparse.engine.Scheduler] = None) → dict[source]

Function to analyze a model’s performance in the DeepSparse Engine. The model must be defined in an ONNX graph and stored in a local file. Gives defaults of batch_size == 1 and num_cores == None (will use all physical cores available on a single socket).

Note 1: Analysis is currently only supported on a single socket.

Parameters
  • model – Either a path to the model’s onnx file, a SparseZoo model stub prefixed by ‘zoo:’, a SparseZoo Model object, or a SparseZoo ONNX File object that defines the neural network graph definition to analyze

  • inp – The list of inputs to pass to the engine for analyzing inference. The expected order is the inputs order as defined in the ONNX graph.

  • batch_size – The batch size of the inputs to be used with the model

  • num_cores – The number of physical cores to run the model on. Pass None or 0 to run on the max number of cores for the current machine; default None

  • num_iterations – The number of times to repeat execution of the model while analyzing, default is 20

  • num_warmup_iterations – The number of times to repeat execution of the model before analyzing, default is 5

  • optimization_level – The amount of graph optimizations to perform. The current choices are either 0 (minimal) or 1 (all), default is 1

  • imposed_as – Imposed activation sparsity, defaults to None. Will force the activation sparsity from all ReLu layers in the graph to match this desired sparsity level (percentage of 0’s in the tensor). Beneficial for seeing how AS affects the performance of the model.

  • imposed_ks – Imposed kernel sparsity, defaults to None. Will force all prunable layers in the graph to have weights with this desired sparsity level (percentage of 0’s in the tensor). Beneficial for seeing how pruning affects the performance of the model.

  • num_sockets – This parameter no longer functions and will be removed in a future version.

  • scheduler – The kind of scheduler to execute with. Pass None for the default.

Returns

the analysis structure containing the performance details of each layer
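A minimal sketch of running an analysis; the exact keys of the returned dict are not documented here, so they are inspected rather than assumed:

import numpy

from deepsparse.engine import analyze_model

inp = [numpy.random.rand(1, 3, 224, 224).astype(numpy.float32)]
analysis = analyze_model("path/to/onnx", inp, batch_size=1)

# inspect what per-layer performance details were recorded
print(analysis.keys())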

deepsparse.engine.benchmark_model(model: Union[str, sparsezoo.objects.model.Model, sparsezoo.objects.file.File], inp: List[numpy.ndarray], batch_size: int = 1, num_cores: Optional[int] = None, num_iterations: int = 20, num_warmup_iterations: int = 5, include_inputs: bool = False, include_outputs: bool = False, show_progress: bool = False, num_sockets: Optional[int] = None, scheduler: Optional[deepsparse.engine.Scheduler] = None) → deepsparse.benchmark.BenchmarkResults[source]

Convenience function to benchmark a model in the DeepSparse Engine from an ONNX file for inference. Gives defaults of batch_size == 1 and num_cores == None (will use all physical cores available on a single socket).

Note 1: Benchmarking is currently only supported on a single socket.

Parameters
  • model – Either a path to the model’s onnx file, a SparseZoo model stub prefixed by ‘zoo:’, a SparseZoo Model object, or a SparseZoo ONNX File object that defines the neural network

  • batch_size – The batch size of the inputs to be used with the model

  • num_cores – The number of physical cores to run the model on. Pass None or 0 to run on the max number of cores for the current machine; default None

  • inp – The list of inputs to pass to the engine for benchmarking. The expected order is the inputs order as defined in the ONNX graph.

  • num_iterations – The number of iterations to run benchmarking for. Default is 20

  • num_warmup_iterations – The number of iterations to warm up the engine before benchmarking. These executions will not be counted in the benchmark results that are returned. Useful and recommended to bring the system to a steady state. Default is 5

  • include_inputs – If True, inputs from forward passes during benchmarking will be added to the results. Default is False

  • include_outputs – If True, outputs from forward passes during benchmarking will be added to the results. Default is False

  • show_progress – If True, will display a progress bar. Default is False

  • num_sockets – This parameter no longer functions and will be removed in a future version.

  • scheduler – The kind of scheduler to execute with. Pass None for the default.

Returns

the results of benchmarking
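A one-shot benchmark without keeping the engine around, under the same input-shape assumption as the other examples in this section:

import numpy

from deepsparse.engine import benchmark_model

inp = [numpy.random.rand(1, 3, 224, 224).astype(numpy.float32)]
results = benchmark_model("path/to/onnx", inp, batch_size=1, num_iterations=20)
print(results.items_per_second)
print(results.ms_per_batch)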

deepsparse.engine.compile_model(model: Union[str, sparsezoo.objects.model.Model, sparsezoo.objects.file.File], batch_size: int = 1, num_cores: Optional[int] = None, num_sockets=None, scheduler: Optional[deepsparse.engine.Scheduler] = None) → deepsparse.engine.Engine[source]

Convenience function to compile a model in the DeepSparse Engine from an ONNX file for inference. Gives defaults of batch_size == 1, num_cores == None (will use all physical cores available on the system).

Parameters
  • model – Either a path to the model’s onnx file, a SparseZoo model stub prefixed by ‘zoo:’, a SparseZoo Model object, or a SparseZoo ONNX File object that defines the neural network

  • batch_size – The batch size of the inputs to be used with the model

  • num_cores – The number of physical cores to run the model on. Pass None or 0 to run on the max number of cores for the current machine; default None

  • num_sockets – This parameter no longer functions and will be removed in a future version.

  • scheduler – The kind of scheduler to execute with. Pass None for the default.

Returns

The created Engine after compiling the model
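A minimal sketch of compiling and running; per the model parameter above, a SparseZoo stub prefixed with 'zoo:' may be passed in place of the local path:

import numpy

from deepsparse.engine import compile_model

engine = compile_model("path/to/onnx", batch_size=1)
inp = [numpy.random.rand(1, 3, 224, 224).astype(numpy.float32)]
out = engine.run(inp)
print(len(out))  # one numpy array per model output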

deepsparse.generated-version module

deepsparse.generated_version module

deepsparse.lib module

deepsparse.lib.init_deepsparse_lib()[source]

deepsparse.version module

Functionality for storing and setting the version info for DeepSparse. If a file named ‘generated_version.py’ exists, read version info from there, otherwise fall back to defaults.

Module contents

The DeepSparse package, used to achieve GPU-class performance for neural networks on commodity CPUs.