Yuan transition guide restructure (openvinotoolkit#11778)
* Add Overview page

* Revert "Add Overview page"

* restructure

* update

* updates

* update

* update

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Karol Blaszczak <[email protected]>

* Update docs/OV_Runtime_UG/migration_ov_2_0/preprocessing.md

Co-authored-by: Karol Blaszczak <[email protected]>

* fix formatting

* fix formatting

Co-authored-by: Karol Blaszczak <[email protected]>
xu-yuan1 and kblaszczak-intel authored Jun 2, 2022
1 parent fb09555 commit fc61b00
Showing 6 changed files with 134 additions and 117 deletions.
92 changes: 50 additions & 42 deletions docs/OV_Runtime_UG/migration_ov_2_0/common_inference_pipeline.md
@@ -1,21 +1,21 @@
# Inference Pipeline {#openvino_2_0_inference_pipeline}

To infer models with OpenVINO™ Runtime, you usually need to perform the following steps in the application pipeline:
- 1. Create a Core object.
- 1.1. (Optional) Load extensions.
- 2. Read a model from a drive.
- 2.1. (Optional) Perform model preprocessing.
- 3. Load the model to the device.
- 4. Create an inference request.
- 5. Fill input tensors with data.
- 6. Start inference.
- 7. Process the inference results.
1. <a href="#create-core">Create a Core object.</a>
- 1.1. <a href="#load-extensions">(Optional) Load extensions.</a>
2. <a href="#read-model">Read a model from a drive.</a>
- 2.1. <a href="#perform-preprocessing">(Optional) Perform model preprocessing.</a>
3. <a href="#load-model-to-device">Load the model to the device.</a>
4. <a href="#create-inference-request">Create an inference request.</a>
5. <a href="#fill-tensor">Fill input tensors with data.</a>
6. <a href="#start-inference">Start inference.</a>
7. <a href="#process-results">Process the inference results.</a>

The following code explains how to change the application code for migration to OpenVINO™ Runtime 2.0.
Based on these steps, the following code demonstrates how to change the application code to migrate to API 2.0.

## 1. Create Core
## <a name="create-core"></a>1. Create a Core Object

Inference Engine API:
**Inference Engine API**

@sphinxtabset

@@ -29,7 +29,7 @@ Inference Engine API:

@endsphinxtabset

OpenVINO™ Runtime API 2.0:
**API 2.0**

@sphinxtabset

@@ -43,11 +43,11 @@ OpenVINO™ Runtime API 2.0:

@endsphinxtabset
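
For reference, a minimal C++ sketch of this step (the `openvino.hpp` header is the public API 2.0 entry point; error handling is omitted):

```
#include <openvino/openvino.hpp>

int main() {
    // The Core object discovers available devices and owns the plugins.
    ov::Core core;
    return 0;
}
```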

### 1.1 (Optional) Load Extensions
### <a name="load-extensions"></a>1.1 (Optional) Load Extensions

To load a model with custom operations, you need to add extensions for these operations. It is highly recommended to use [OpenVINO Extensibility API](../../Extensibility_UG/Intro.md) to write extensions. However, you can also load the old extensions to the new OpenVINO™ Runtime:

Inference Engine API:
**Inference Engine API**

@sphinxtabset

@@ -61,7 +61,7 @@ Inference Engine API:

@endsphinxtabset

OpenVINO™ Runtime API 2.0:
**API 2.0**

@sphinxtabset

@@ -75,9 +75,9 @@ OpenVINO™ Runtime API 2.0:

@endsphinxtabset
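
As an illustrative sketch, loading an extension library in API 2.0 may look as follows; the library path is a placeholder:

```
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // The path to the extension library is illustrative only.
    core.add_extension("libcustom_ops_extension.so");
    return 0;
}
```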

## 2. Read a Model from a Drive
## <a name="read-model"></a>2. Read a Model from a Drive

Inference Engine API:
**Inference Engine API**

@sphinxtabset

@@ -91,7 +91,7 @@ Inference Engine API:

@endsphinxtabset

OpenVINO™ Runtime API 2.0:
**API 2.0**

@sphinxtabset

@@ -105,18 +105,17 @@ OpenVINO™ Runtime API 2.0:

@endsphinxtabset

Read model has the same structure as the example from [Model Creation](./graph_construction.md) migration guide.
Reading a model has the same structure as the example in the [model creation migration guide](./graph_construction.md).

You can combine read and compile model stages into a single call `ov::Core::compile_model(filename, devicename)`.
You can combine reading and compiling a model into a single call to `ov::Core::compile_model(filename, devicename)`.
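
For example, a minimal sketch of the combined call (the model path and device name below are placeholders):

```
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Read and compile the model in one step.
    ov::CompiledModel compiled_model = core.compile_model("model.xml", "CPU");
    return 0;
}
```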

### 2.1 (Optional) Perform Model Preprocessing
### <a name="perform-preprocessing"></a>2.1 (Optional) Perform Model Preprocessing

When application input data does not perfectly match the model input format, preprocessing may be necessary.
See the detailed guide on [how to migrate preprocessing in OpenVINO Runtime API 2.0](./preprocessing.md)
When the application input data does not perfectly match the model input format, preprocessing may be necessary. See [preprocessing in API 2.0](./preprocessing.md) for more details.
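
As a brief, illustrative sketch of that flow (the tensor layout and element type below are assumptions about the application data, not requirements):

```
#include <openvino/openvino.hpp>
#include <openvino/core/preprocess/pre_post_process.hpp>

int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");  // placeholder path

    // Declare how the real input differs from the model input; OpenVINO
    // inserts the needed conversion steps into the model.
    ov::preprocess::PrePostProcessor ppp(model);
    ppp.input().tensor().set_element_type(ov::element::u8).set_layout("NHWC");
    ppp.input().model().set_layout("NCHW");
    model = ppp.build();
    return 0;
}
```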

## 3. Load the Model to the Device
## <a name="load-model-to-device"></a>3. Load the Model to the Device

Inference Engine API:
**Inference Engine API**

@sphinxtabset

@@ -130,7 +129,7 @@ Inference Engine API:

@endsphinxtabset

OpenVINO™ Runtime API 2.0:
**API 2.0**

@sphinxtabset

@@ -144,11 +143,11 @@ OpenVINO™ Runtime API 2.0:

@endsphinxtabset

If you need to configure OpenVINO Runtime devices with additional parameters, refer to the [Configure devices](./configure_devices.md) guide.
If you need to configure devices with additional parameters for OpenVINO Runtime, refer to [Configuring Devices](./configure_devices.md).
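
As an illustrative sketch, compiling a model with one additional property may look as follows (the device name and the hint are placeholders, not a required configuration):

```
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");  // placeholder path
    // Properties can be passed directly to compile_model; the hint is illustrative.
    ov::CompiledModel compiled_model = core.compile_model(
        model, "CPU", ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));
    return 0;
}
```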

## 4. Create an Inference Request
## <a name="create-inference-request"></a>4. Create an Inference Request

Inference Engine API:
**Inference Engine API**

@sphinxtabset

@@ -162,7 +161,7 @@ Inference Engine API:

@endsphinxtabset

OpenVINO™ Runtime API 2.0:
**API 2.0**

@sphinxtabset

@@ -176,9 +175,11 @@ OpenVINO™ Runtime API 2.0:

@endsphinxtabset
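
For reference, a minimal sketch of this step (the model path and device name are placeholders):

```
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    ov::CompiledModel compiled_model = core.compile_model("model.xml", "CPU");
    // The request holds the input/output tensors for one inference.
    ov::InferRequest infer_request = compiled_model.create_infer_request();
    return 0;
}
```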

## 5. Fill Input Tensors
## <a name="fill-tensor"></a>5. Fill Input Tensors with Data

The Inference Engine API fills inputs as `I32` precision (**not** aligned with the original model):
**Inference Engine API**

The Inference Engine API fills inputs with data of the `I32` precision (**not** aligned with the original model):

@sphinxtabset

@@ -248,7 +249,9 @@ The Inference Engine API fills inputs as `I32` precision (**not** aligned with the original model):

@endsphinxtabset

OpenVINO™ Runtime API 2.0 fills inputs as `I64` precision (aligned with the original model):
**API 2.0**

API 2.0 fills inputs with data of the `I64` precision (aligned with the original model):

@sphinxtabset

@@ -318,9 +321,9 @@ OpenVINO™ Runtime API 2.0 fills inputs as `I64` precision (aligned with the original model):

@endsphinxtabset
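
To illustrate the aligned-precision behavior, a sketch of filling the first input with `I64` data (the model, input index, and values are made up):

```
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    ov::CompiledModel compiled_model = core.compile_model("model.xml", "CPU");  // placeholders
    ov::InferRequest infer_request = compiled_model.create_infer_request();

    // In API 2.0, the tensor keeps the model's own element type, e.g. i64 here.
    ov::Tensor input_tensor = infer_request.get_input_tensor(0);
    int64_t* data = input_tensor.data<int64_t>();
    for (size_t i = 0; i < input_tensor.get_size(); ++i) {
        data[i] = 0;  // dummy value
    }
    return 0;
}
```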

## 6. Start Inference
## <a name="start-inference"></a>6. Start Inference

Inference Engine API:
**Inference Engine API**

@sphinxtabset

@@ -358,7 +361,7 @@ Inference Engine API:

@endsphinxtabset

OpenVINO™ Runtime API 2.0:
**API 2.0**

@sphinxtabset

@@ -396,9 +399,11 @@ OpenVINO™ Runtime API 2.0:

@endsphinxtabset
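
For reference, a sketch of the two execution modes in API 2.0, assuming an `infer_request` created as in step 4:

```
// Synchronous mode: infer() blocks until the results are ready.
infer_request.infer();

// Asynchronous mode: start the request, do other work, then wait.
infer_request.start_async();
// ... other application work here ...
infer_request.wait();
```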

## 7. Process the Inference Results
## <a name="process-results"></a>7. Process the Inference Results

**Inference Engine API**

The Inference Engine API processes outputs as `I32` precision (**not** aligned with the original model):
The Inference Engine API processes outputs as if they were of the `I32` precision (**not** aligned with the original model):

@sphinxtabset

@@ -468,9 +473,12 @@ The Inference Engine API processes outputs as `I32` precision (**not** aligned with the original model):

@endsphinxtabset

OpenVINO™ Runtime API 2.0 processes outputs:
- For IR v10 as `I32` precision (**not** aligned with the original model) to match the **old** behavior.
- For IR v11, ONNX, ov::Model, Paddle as `I64` precision (aligned with the original model) to match the **new** behavior.
**API 2.0**

API 2.0 processes outputs:

- as if they were of the `I32` precision (**not** aligned with the original model) for OpenVINO IR v10 models, to match the <a href="openvino_2_0_transition_guide#differences-api20-ie">old behavior</a>.
- as if they were of the `I64` precision (aligned with the original model) for OpenVINO IR v11, ONNX, ov::Model and PaddlePaddle models, to match the <a href="openvino_2_0_transition_guide#differences-api20-ie">new behavior</a>.

@sphinxtabset

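
As an illustrative sketch of reading an aligned-precision output, assuming the `infer_request` from the previous steps (the output index and element type are examples):

```
// Output tensors keep the model's element type, e.g. i64 for IR v11 models.
ov::Tensor output = infer_request.get_output_tensor(0);
if (output.get_element_type() == ov::element::i64) {
    const int64_t* results = output.data<int64_t>();
    for (size_t i = 0; i < output.get_size(); ++i) {
        // ... process results[i] ...
    }
}
```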
20 changes: 9 additions & 11 deletions docs/OV_Runtime_UG/migration_ov_2_0/configure_devices.md
@@ -1,24 +1,22 @@
# Configuring Devices {#openvino_2_0_configure_devices}

### Introduction

Inference Engine API provides the [ability to configure devices](https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_InferenceEngine_QueryAPI.html) via configuration keys and [get device-specific metrics](https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_InferenceEngine_QueryAPI.html#getmetric). The values taken from `InferenceEngine::Core::GetConfig` are requested by the string name, while the return type is `InferenceEngine::Parameter`, leaving users unsure what type is actually stored in this parameter.

OpenVINO Runtime API 2.0 solves these issues by introducing [properties](../supported_plugins/config_properties.md), which unify metrics and configuration key concepts. Their main advantage is that they have the C++ type:
API 2.0 solves these issues by introducing [properties](../supported_plugins/config_properties.md), which unify metrics and configuration key concepts. The main advantage is that they have the C++ type:

```
static constexpr Property<std::string> full_name{"FULL_DEVICE_NAME"};
```

The property can be requested from an inference device as:
where the property can be requested from an inference device as:

@snippet ov_properties_migration.cpp core_get_ro_property

The snippets below explain how to migrate from an Inference Engine device configuration to OpenVINO Runtime API 2.0.
The snippets in the following sections demonstrate device configuration when migrating from Inference Engine to API 2.0.

### Set configuration values
## Setting Configuration Values

Inference Engine API:
**Inference Engine API**

@sphinxtabset

@@ -76,7 +74,7 @@ Inference Engine API:

@endsphinxtabset

OpenVINO Runtime API 2.0:
**API 2.0**

@sphinxtabset

@@ -134,9 +132,9 @@ OpenVINO Runtime API 2.0:

@endsphinxtabset
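
For reference, a minimal sketch of the typed `set_property` call (the device and the property value are illustrative):

```
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // A typed property replaces the old string configuration key.
    core.set_property("CPU", ov::enable_profiling(true));
    return 0;
}
```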

### Get Information
## Getting Information

Inference Engine API:
**Inference Engine API**

@sphinxtabset

@@ -206,7 +204,7 @@ Inference Engine API:

@endsphinxtabset

OpenVINO Runtime API 2.0:
**API 2.0**

@sphinxtabset

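
As an illustrative sketch of the typed `get_property` call (the device name is a placeholder):

```
#include <openvino/openvino.hpp>
#include <string>

int main() {
    ov::Core core;
    // The value arrives as std::string; no casting from a generic Parameter.
    std::string device_name = core.get_property("CPU", ov::device::full_name);
    return 0;
}
```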
@@ -1,6 +1,6 @@
# Installation & Deployment {#openvino_2_0_deployment}

One of the main concepts for OpenVINO™ API 2.0 is being *"easy to use"*, which includes:
One of the main concepts for OpenVINO™ API 2.0 is being "easy to use", which includes:
* Simplification of migration from different frameworks to OpenVINO.
* Organization of OpenVINO.
* Usage of development tools.
14 changes: 8 additions & 6 deletions docs/OV_Runtime_UG/migration_ov_2_0/graph_construction.md
@@ -1,9 +1,10 @@
# Model Creation in Runtime {#openvino_2_0_model_creation}
# Model Creation in OpenVINO™ Runtime {#openvino_2_0_model_creation}

OpenVINO™ Runtime API 2.0 includes the nGraph engine as a common part. The `ngraph` namespace has been changed to `ov`, but all other parts of the ngraph API have been preserved.
The code snippets below show how to change application code for migration to OpenVINO™ Runtime API 2.0.
OpenVINO™ Runtime with API 2.0 includes the nGraph engine as a common part. The `ngraph` namespace has been changed to `ov`, but all other parts of the ngraph API have been preserved.

### nGraph API
The code snippets below show how to change the application code for migration to API 2.0.

## nGraph API

@sphinxtabset

@@ -17,7 +18,7 @@ The code snippets below show how to change application code for migration to OpenVINO™ Runtime API 2.0.

@endsphinxtabset

### OpenVINO™ Runtime API 2.0:
## API 2.0

@sphinxtabset

@@ -31,6 +32,7 @@ The code snippets below show how to change application code for migration to OpenVINO™ Runtime API 2.0.

@endsphinxtabset
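
For reference, a small hypothetical example of building a model directly in the `ov` namespace (the Parameter -> ReLU -> Result graph is purely illustrative):

```
#include <openvino/openvino.hpp>
#include <openvino/opsets/opset8.hpp>

std::shared_ptr<ov::Model> create_demo_model() {
    // Same graph-building pattern as with ngraph::, but under ov::.
    auto param = std::make_shared<ov::opset8::Parameter>(ov::element::f32,
                                                         ov::Shape{1, 3, 224, 224});
    auto relu = std::make_shared<ov::opset8::Relu>(param);
    auto result = std::make_shared<ov::opset8::Result>(relu);
    return std::make_shared<ov::Model>(ov::ResultVector{result},
                                       ov::ParameterVector{param},
                                       "demo_model");
}
```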

**See also:**
## Additional Resources

- [Hello Model Creation C++ Sample](../../../samples/cpp/model_creation_sample/README.md)
- [Hello Model Creation Python Sample](../../../samples/python/model_creation_sample/README.md)
