Commit c5f65ee

DOCS shift to rst - Tensorflow Frontend Capabilities and Limitations (o…

sgolebiewski-intel authored Mar 20, 2023
1 parent 083596e commit c5f65ee

Showing 5 changed files with 22 additions and 22 deletions.
14 changes: 7 additions & 7 deletions docs/Documentation/inference_modes_overview.md
@@ -10,15 +10,15 @@
    openvino_docs_OV_UG_Running_on_multiple_devices
    openvino_docs_OV_UG_Hetero_execution
    openvino_docs_OV_UG_Automatic_Batching
 
 @endsphinxdirective
 
-OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the [guide on inference devices](../OV_Runtime_UG/supported_plugins/Device_Plugins.md).
+OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the :doc:`guide on inference devices <openvino_docs_OV_UG_Working_with_devices>`.
 
 The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:
 
-* [Automatic Device Selection (AUTO)](../OV_Runtime_UG/auto_device_selection.md)
-* [Multi-Device Execution (MULTI)](../OV_Runtime_UG/multi_device.md)
-* [Heterogeneous Execution (HETERO)](../OV_Runtime_UG/hetero_execution.md)
-* [Automatic Batching Execution (Auto-batching)](../OV_Runtime_UG/automatic_batching.md)
+* :doc:`Automatic Device Selection (AUTO) <openvino_docs_OV_UG_supported_plugins_AUTO>`
+* :doc:`Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
+* :doc:`Heterogeneous Execution (HETERO) <openvino_docs_OV_UG_Hetero_execution>`
+* :doc:`Automatic Batching Execution (Auto-batching) <openvino_docs_OV_UG_Automatic_Batching>`
 
 @endsphinxdirective
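Editor's note: the mode names in the hunk above map directly to the device strings accepted by `compile_model()`. A minimal sketch in Python (OpenVINO API 2.0), assuming a hypothetical `model.xml` IR file; the exact device lists in the MULTI/HETERO strings are illustrative:

```python
from openvino.runtime import Core  # OpenVINO API 2.0

core = Core()
model = core.read_model("model.xml")  # hypothetical IR file

# Single-device mode: one plugin handles the entire workload.
compiled_cpu = core.compile_model(model, "CPU")

# Automated modes are selected through special device strings:
compiled_auto = core.compile_model(model, "AUTO")              # Automatic Device Selection
compiled_multi = core.compile_model(model, "MULTI:CPU,GPU")    # Multi-Device Execution
compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")  # Heterogeneous Execution
compiled_batch = core.compile_model(model, "BATCH:GPU")        # Automatic Batching
```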
3 changes: 0 additions & 3 deletions docs/OV_Runtime_UG/img/BASIC_FLOW_IE_C.svg

This file was deleted.

17 changes: 8 additions & 9 deletions docs/OV_Runtime_UG/openvino_intro.md
@@ -14,22 +14,21 @@
    openvino_docs_OV_UG_ShapeInference
    openvino_docs_OV_UG_DynamicShapes
    openvino_docs_OV_UG_model_state_intro
 
-@endsphinxdirective
-
 
 OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, ONNX, or PaddlePaddle model and execute it on preferred devices.
 
 OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain complete implementation for inference on a particular Intel® hardware device: CPU, GPU, GNA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices or API interoperability between OpenVINO Runtime and underlying plugin backend.
 
-The scheme below illustrates the typical workflow for deploying a trained deep learning model:
-
-<!-- TODO: need to update the picture below with PDPD files -->
-![](img/BASIC_FLOW_IE_C.svg)
+The scheme below illustrates the typical workflow for deploying a trained deep learning model:
 
+.. image:: _static/images/BASIC_FLOW_IE_C.svg
 
-## Video
+Video
+####################
 
-@sphinxdirective
 
 .. list-table::

@@ -39,5 +38,5 @@ The scheme below illustrates the typical workflow for deploying a trained deep learning model:
           src="https://www.youtube.com/embed/e6R13V8nbak">
         </iframe>
    * - **OpenVINO Runtime Concept**. Duration: 3:43
 
 @endsphinxdirective
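Editor's note: the read-then-compile workflow described in the paragraphs above can be sketched as follows (Python, OpenVINO API 2.0); the file name and input shape are assumptions for illustration:

```python
import numpy as np
from openvino.runtime import Core

core = Core()

# read_model() accepts IR (.xml), ONNX, and PaddlePaddle files directly.
model = core.read_model("model.onnx")  # hypothetical file name

# Compiling targets one plugin (here CPU); the plugin applies its own
# hardware-specific optimizations.
compiled = core.compile_model(model, "CPU")

# Run a single synchronous inference; the input shape is hypothetical.
input_tensor = np.zeros((1, 3, 224, 224), dtype=np.float32)
results = compiled.create_infer_request().infer({0: input_tensor})
```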
File renamed without changes
10 changes: 7 additions & 3 deletions docs/resources/tensorflow_frontend.md
@@ -1,13 +1,17 @@
 # OpenVINO TensorFlow Frontend Capabilities and Limitations {#openvino_docs_MO_DG_TensorFlow_Frontend}
 
+@sphinxdirective
+
 TensorFlow Frontend is C++ based Frontend for conversion of TensorFlow models and is available as a preview feature starting from 2022.3.
-That means that you can start experimenting with `--use_new_frontend` option passed to Model Optimizer to enjoy improved conversion time for limited scope of models
-or directly loading TensorFlow models through `read_model()` method.
+That means that you can start experimenting with ``--use_new_frontend`` option passed to Model Optimizer to enjoy improved conversion time for limited scope of models
+or directly loading TensorFlow models through ``read_model()`` method.
 
 The current limitations:
 
 * IRs generated by new TensorFlow Frontend are compatible only with OpenVINO API 2.0
 * There is no full parity yet between legacy Model Optimizer TensorFlow Frontend and new TensorFlow Frontend so primary path for model conversion is still legacy frontend
 * Model coverage and performance is continuously improving so some conversion phase failures, performance and accuracy issues might occur in case model is not yet covered.
   Known unsupported models: object detection models and all models with transformation configs, models with TF1/TF2 control flow, Complex type and training parts
-* `read_model()` method supports only `*.pb` format while Model Optimizer (or `convert_model` call) will accept other formats as well which are accepted by existing legacy frontend
+* ``read_model()`` method supports only ``*.pb`` format while Model Optimizer (or ``convert_model`` call) will accept other formats as well which are accepted by existing legacy frontend
 
+@endsphinxdirective
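Editor's note: the two conversion paths this page names could look roughly like the sketch below (Python; the frozen-graph file name is hypothetical, and the Model Optimizer invocation is shown as a comment since ``--use_new_frontend`` is a CLI option):

```python
from openvino.runtime import Core

# Path 1 (CLI, shown as a comment): convert with Model Optimizer,
# opting into the new TensorFlow Frontend:
#   mo --input_model frozen_model.pb --use_new_frontend

# Path 2: load the TensorFlow model directly; per the limitations above,
# read_model() currently accepts only frozen *.pb graphs.
core = Core()
model = core.read_model("frozen_model.pb")  # hypothetical file name
compiled = core.compile_model(model, "CPU")
```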
