
Commit 7e709a0

benchmark: remove deprecation notice (openvinotoolkit#20175)
The Python version of benchmark_app didn't mark -api as deprecated.
1 parent affceaa commit 7e709a0

2 files changed (+2, -2 lines)


samples/cpp/benchmark_app/README.md (+1, -1)

```diff
@@ -190,7 +190,7 @@ Running the application with the ``-h`` or ``--help`` option yields the followin
     -c <absolute_path>        Required for GPU custom kernels. Absolute path to an .xml file with the kernels description.
     -cache_dir <path>         Optional. Enables caching of loaded models to specified directory. List of devices which support caching is shown at the end of this message.
     -load_from_file           Optional. Loads model from file directly without read_model. All CNNNetwork options (like re-shape) will be ignored
-    -api <sync/async>         Optional (deprecated). Enable Sync/Async API. Default value is "async".
+    -api <sync/async>         Optional. Enable Sync/Async API. Default value is "async".
     -nireq <integer>          Optional. Number of infer requests. Default value is determined automatically for device.
     -nstreams <integer>       Optional. Number of streams to use for inference on the CPU or GPU devices (for HETERO and MULTI device cases use format <dev1>:<nstreams1>, <dev2>:<nstreams2> or just <nstreams>). Default value is determined automatically for a device. Please note that although the automatic selection usually provides a reasonable performance, it still may be non-optimal for some cases, especially for very small models. See sample's README for more details. Also, using nstreams>1 is inherently throughput-oriented option, while for the best-latency estimations the number of streams should be set to 1.
     -inference_only           Optional. Measure only inference stage. Default option for static models. Dynamic models are measured in full mode which includes inputs setup stage, inference only mode available for them with single input data shape only. To enable full mode for static models pass "false" value to this argument: ex. "-inference_only=false".
```
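For context, the `-api` flag this commit touches selects between the two execution modes in the help text above. A minimal usage sketch, assuming benchmark_app has been built and is on the current path, and with `model.xml` standing in as a placeholder model file:

```sh
# Throughput-oriented run using the default Async API (-api async),
# then a latency-oriented run with the Sync API.
# model.xml is a hypothetical OpenVINO IR path; -d selects the device,
# -t sets the benchmark duration in seconds.
./benchmark_app -m model.xml -d CPU -api async -t 20
./benchmark_app -m model.xml -d CPU -api sync -t 20
```

Since the commit removes only the "(deprecated)" wording, both invocations behave the same before and after this change; the flag itself is untouched.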

samples/cpp/benchmark_app/benchmark_app.hpp (+1, -1)

```diff
@@ -96,7 +96,7 @@ static const char layout_message[] =
     "For example, \"input1[NCHW],input2[NC]\" or \"[NCHW]\" in case of one input size.";

 /// @brief message for execution mode
-static const char api_message[] = "Optional (deprecated). Enable Sync/Async API. Default value is \"async\".";
+static const char api_message[] = "Optional. Enable Sync/Async API. Default value is \"async\".";

 /// @brief message for #streams for CPU inference
 static const char infer_num_streams_message[] =
```
