Benchmarking
You can benchmark the model to obtain measured (rather than estimated) values. After running a benchmark, you might want to change the optimization values and then run a new benchmark.
You can also use benchmarks to compare the results from two engines, which enables you to determine which engine best meets your business criteria.
Note: If you have not yet run a benchmark, you can continue by doing so as described below. If you have already run one or more benchmarks, they are displayed.
Running the Benchmark of a Single Inference Engine
Click ADD BENCHMARK.
Create a benchmark by specifying the inference engine to use, an existing benchmark to compare against (optional; see Running a Comparative Benchmark), the optimization version (optional), the core counts, and the batch sizes.
Note: Selecting ONNX Runtime CPU as the inference engine disables the core count comparison.
Note: You can select multiple core counts and/or batch sizes by checking the Enable Multiple option. For example:
Click RUN.
Using the specified engine, a benchmark is run for the batch size(s) and core count(s) that you selected.
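Conceptually, the run sweeps every combination of the selected batch sizes and core counts and records timing metrics for each scenario. The tool's internals are not documented here; as a rough, engine-agnostic illustration (with a hypothetical `run_inference` placeholder standing in for the real engine call), such a sweep might look like:

```python
import itertools
import statistics
import time

def run_inference(batch_size, core_count):
    """Hypothetical stand-in for one timed inference call.

    A real benchmark would invoke the selected engine here, pinned to
    core_count cores; this placeholder just simulates work proportional
    to the batch size (core_count is unused by the simulation).
    """
    start = time.perf_counter()
    _ = sum(i * i for i in range(batch_size * 1000))  # simulated workload
    return time.perf_counter() - start

def benchmark(batch_sizes, core_counts, repeats=5):
    """Measure average latency and throughput for every combination."""
    results = {}
    for bs, cores in itertools.product(batch_sizes, core_counts):
        latencies = [run_inference(bs, cores) for _ in range(repeats)]
        avg_latency = statistics.mean(latencies)
        results[(bs, cores)] = {
            "avg_latency_s": avg_latency,
            "throughput_items_per_s": bs / avg_latency,
        }
    return results

# Three batch sizes x two core counts = six scenarios.
results = benchmark(batch_sizes=[1, 8, 32], core_counts=[1, 4])
print(len(results))
```

Each scenario produces its own averaged metrics, which is what the displayed Metric information summarizes.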
Then, the Metric information is displayed. Here is an example of a benchmark with three batch sizes and three core counts specified:
If you run a benchmark with multiple batch sizes and/or core counts, all of the scenarios are examined. You can display the resulting information based on the baseline, core scaling, or batch size scaling. Here is an example of baseline information:
You can select the cores and/or batch size on the right to see different scenarios.
If you display based on core scaling, the information shows the average values at different core counts with a fixed batch size.
You can change the batch size with the drop-down on the right to see the different scenarios. For example:
If you display based on batch size scaling, the information shows the average values at different batch sizes with a fixed core count.
You can change the core count with the drop-down on the right to see the different scenarios. For example:
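The two scaling views described above are just different slices of the same sweep results: core scaling fixes the batch size and varies the core count, while batch size scaling fixes the core count and varies the batch size. A minimal sketch (with illustrative, not measured, metric values):

```python
# Example sweep results keyed by (batch_size, core_count); the latency
# values are illustrative placeholders, not real measurements.
results = {
    (1, 1): {"avg_latency_s": 0.010},
    (1, 4): {"avg_latency_s": 0.004},
    (8, 1): {"avg_latency_s": 0.060},
    (8, 4): {"avg_latency_s": 0.020},
}

def core_scaling(results, batch_size):
    """Core-scaling view: average latency per core count at a fixed batch size."""
    return {c: m["avg_latency_s"]
            for (b, c), m in sorted(results.items()) if b == batch_size}

def batch_scaling(results, core_count):
    """Batch-size-scaling view: average latency per batch size at a fixed core count."""
    return {b: m["avg_latency_s"]
            for (b, c), m in sorted(results.items()) if c == core_count}

print(core_scaling(results, batch_size=8))   # latency vs. core count
print(batch_scaling(results, core_count=4))  # latency vs. batch size
```

Switching the drop-down in the UI corresponds to changing the fixed value passed to one of these views.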
As you add benchmarks, they are displayed in reverse chronological order, starting with the most recent. For example:
Running a Comparative Benchmark
You can run a benchmark and indicate that you want to compare it to another, existing benchmark.
The benchmark shows a comparative graph.
You can display based on either engine:
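A comparative benchmark pairs up the matching scenarios from the two runs. How the product computes its comparison is not specified here, but one common way to summarize such a pairing is a per-scenario throughput ratio, sketched below with hypothetical numbers:

```python
# Hypothetical per-scenario throughput (items/s) for two engines;
# scenario keys are (batch_size, core_count).
engine_a = {(1, 4): 250.0, (8, 4): 400.0}
engine_b = {(1, 4): 200.0, (8, 4): 480.0}

def compare(a, b):
    """Throughput ratio b/a for each shared scenario; > 1 means b is faster."""
    return {k: b[k] / a[k] for k in a.keys() & b.keys()}

speedups = compare(engine_a, engine_b)
for scenario, ratio in sorted(speedups.items()):
    print(scenario, f"{ratio:.2f}x")
```

Here engine B would be slower at batch size 1 but faster at batch size 8, which is exactly the kind of trade-off the comparative graph makes visible.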
Re-Running a Benchmark
You can re-run a specific benchmark using the same conditions used for the original run. This will create a new benchmark with the same settings.
Click the menu button and then Re-run Benchmark.