This repository has been archived by the owner on Jul 1, 2024. It is now read-only.

Commit

Disable Myriad Plugin (#415)
* Disable Myriad Plugin

* Changes to MYD documentation

* Disable Myriad plugin for OpenVINO source builds

* Add back required file plugins.xml
sspintel authored Jan 16, 2023
1 parent 8c7c864 commit bdb9afa
Showing 11 changed files with 30 additions and 57 deletions.
17 changes: 7 additions & 10 deletions README.md
@@ -14,9 +14,9 @@ This repository contains the source code of **OpenVINO™ integration with Tenso
This product delivers [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations which enhance inferencing performance with minimal code modifications. **OpenVINO™ integration with TensorFlow accelerates** inference across many AI models on a variety of Intel<sup>®</sup> silicon such as:

- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units - referred to as VPU
- Intel<sup>®</sup> Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL
- Intel<sup>®</sup> integrated and discrete GPUs

Note: Support for Intel Movidius™ MyriadX VPUs is no longer maintained. Use a previous release to run on Myriad VPUs.

[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ APIs and runtime.]

@@ -33,8 +33,7 @@ Check our [Interactive Installation Table](https://openvinotoolkit.github.io/ope

The **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately. This package supports:
- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)
- Intel<sup>®</sup> integrated and discrete GPUs


pip3 install -U pip
@@ -46,8 +45,6 @@ For installation instructions on Windows please refer to [**OpenVINO™ integrat

To use Intel<sup>®</sup> integrated GPUs for inference, make sure to install the [Intel® Graphics Compute Runtime for OpenCL™ drivers](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html#install-gpu)

To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](./docs/INSTALL.md#install-openvino-integration-with-tensorflow-pypi-release-alongside-the-intel-distribution-of-openvino-toolkit-for-vad-m-support).

For more details on installation please refer to [INSTALL.md](docs/INSTALL.md), and for build from source options please refer to [BUILD.md](docs/BUILD.md)

## Configuration
@@ -68,11 +65,11 @@ This should produce an output like:

CXX11_ABI flag used for this build: 1
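The banner above can also be checked programmatically. A minimal sketch: the parser below works on any captured output text, while obtaining the banner from `openvino_tensorflow.__version__` is an assumption (the attribute name may differ), so that part is guarded:

```python
# Sketch: parse the "CXX11_ABI flag used for this build: N" line from
# captured output. The helper is generic; importing openvino_tensorflow
# to obtain the banner is an assumption and is guarded so the snippet
# stays runnable without the package installed.
def cxx11_abi_flag(banner: str) -> int:
    """Return the CXX11_ABI flag reported in a version banner."""
    for line in banner.splitlines():
        if "CXX11_ABI" in line:
            return int(line.rsplit(":", 1)[1])
    raise ValueError("CXX11_ABI flag not found in banner")

try:
    import openvino_tensorflow as ovtf  # hypothetical source of the banner
    print(cxx11_abi_flag(ovtf.__version__))
except ImportError:
    print(cxx11_abi_flag("CXX11_ABI flag used for this build: 1"))  # -> 1
```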

By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to either Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done.
By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to Intel<sup>®</sup> integrated or discrete GPUs (GPU, GPU.0, GPU.1, etc.). Invoke the following function to change the hardware on which inferencing is done.

openvino_tensorflow.set_backend('<backend_name>')

Supported backends include 'CPU', 'GPU', 'GPU_FP16', 'MYRIAD', and 'VAD-M'.
Supported backends include 'CPU', 'GPU', and 'GPU_FP16'.
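A hedged sketch of how an application might guard the backend switch, assuming the `openvino_tensorflow.set_backend` API shown above; the `choose_backend` helper and its fallback-to-CPU policy are illustrative, not part of the library:

```python
# Sketch: validate a requested backend against the supported set before
# calling set_backend. The fallback-to-CPU policy is illustrative; the
# import is guarded so the selection logic runs even without the package.
SUPPORTED_BACKENDS = {"CPU", "GPU", "GPU_FP16"}

def choose_backend(requested: str) -> str:
    """Return the requested backend if supported, otherwise 'CPU'."""
    return requested if requested in SUPPORTED_BACKENDS else "CPU"

try:
    import openvino_tensorflow as ovtf
    ovtf.set_backend(choose_backend("GPU"))
except ImportError:
    pass  # package not installed; the selection logic above still applies
```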

To determine what processing units are available on your system for inference, use the following function:

@@ -85,7 +82,7 @@ For further performance improvements, it is advised to set the environment varia
To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples](./examples) directory.

## Docker Support
Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided which can be used to build runtime Docker* images for **OpenVINO™ integration with TensorFlow** on CPU, GPU, VPU, and VAD-M.
Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided; these can be used to build runtime Docker* images for **OpenVINO™ integration with TensorFlow** on CPU and GPU.
For more details see [docker readme](docker/README.md).

### Prebuilt Images
4 changes: 0 additions & 4 deletions README_cn.md
@@ -15,8 +15,6 @@

- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)
- Intel<sup>®</sup> Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs (referred to as VAD-M or HDDL)

[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ APIs and runtime.]

@@ -34,7 +32,6 @@
The **OpenVINO™ integration with TensorFlow** package ships with pre-built libraries of OpenVINO™ version 2022.3.0, so users do not need to install OpenVINO™ separately. The package supports:
- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)


pip3 install -U pip
@@ -45,7 +42,6 @@

To use Intel<sup>®</sup> integrated GPUs for inference, make sure to install the [Intel® Graphics Compute Runtime for OpenCL™ drivers](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html#install-gpu)

To leverage Intel<sup>®</sup> Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](docs/INSTALL_cn.md#安装-openvino-integration-with-tensorflow-pypi-发布版与独立安装intel-openvino-发布版以支持vad-m)

For more installation details, see [INSTALL.md](docs/INSTALL_cn.md); for build-from-source options, see [BUILD.md](docs/BUILD_cn.md)

8 changes: 4 additions & 4 deletions docs/INSTALL.md
@@ -9,7 +9,7 @@

### Install **OpenVINO™ integration with TensorFlow** PyPi release
* Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately
* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). No VAD-M support
* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated and discrete GPUs

pip3 install -U pip
pip3 install tensorflow==2.9.3
@@ -19,7 +19,7 @@

### Install **OpenVINO™ integration with TensorFlow** PyPi release alongside the Intel® Distribution of OpenVINO™ Toolkit for VAD-M Support
* Compatible with OpenVINO™ version 2022.3.0
* Supports Intel<sup>®</sup> Vision Accelerator Design with Movidius™ (VAD-M), it also supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)
* Supports Intel<sup>®</sup> CPUs and Intel<sup>®</sup> integrated and discrete GPUs
* To use it:
1. Install tensorflow and openvino-tensorflow packages from PyPi as explained in the section above
2. Download & install Intel® Distribution of OpenVINO™ Toolkit 2022.3.0 release along with its dependencies from ([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html)).
@@ -32,7 +32,7 @@

Install **OpenVINO™ integration with TensorFlow** PyPi release
* Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately
* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup>, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). No VAD-M support
* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated and discrete GPUs

pip3 install -U pip
pip3 install tensorflow==2.9.3
@@ -44,7 +44,7 @@
Install **OpenVINO™ integration with TensorFlow** PyPi release alongside TensorFlow released in Github
* The TensorFlow wheel for Windows from PyPi doesn't have all the API symbols enabled which are required for **OpenVINO™ integration with TensorFlow**. Users need to install the TensorFlow wheel from the assets of the Github release page
* Includes pre-built libraries of OpenVINO™ version 2022.3.0. The users do not have to install OpenVINO™ separately
* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs). No VAD-M support
* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated and discrete GPUs

pip3.9 install -U pip
pip3.9 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v2.2.0/tensorflow-2.9.2-cp39-cp39-win_amd64.whl
8 changes: 4 additions & 4 deletions docs/INSTALL_cn.md
@@ -8,7 +8,7 @@

### Install the **OpenVINO™ integration with TensorFlow** PyPi release
* Includes pre-built libraries of OpenVINO™ version 2022.3.0; users do not need to install OpenVINO™ separately.
* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs); no VAD-M support.
* Supports Intel<sup>®</sup> CPUs and Intel<sup>®</sup> integrated and discrete GPUs.

pip3 install -U pip
pip3 install tensorflow==2.9.3
@@ -18,7 +18,7 @@

### Install the **OpenVINO™ integration with TensorFlow** PyPi release alongside the Intel® Distribution of OpenVINO™ Toolkit for VAD-M support
* Compatible with OpenVINO™ version 2022.3.0
* Supports Intel<sup>®</sup> Vision Accelerator Design with Movidius™ (VAD-M); also supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs).
* Supports Intel<sup>®</sup> CPUs and Intel<sup>®</sup> integrated and discrete GPUs.
* To use it:
  1. Install the tensorflow and openvino-tensorflow packages from PyPi as described in the section above.
  2. Download and install the Intel<sup>®</sup> OpenVINO™ 2022.3.0 release along with its dependencies ([https://software.intel.com/en-us/openvino-toolkit/download](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html)).
@@ -31,7 +31,7 @@

Install the **OpenVINO™ integration with TensorFlow** PyPi release
* Includes pre-built libraries of OpenVINO™ version 2022.3.0; users do not need to install Intel<sup>®</sup> OpenVINO™ separately.
* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs); no VAD-M support.
* Supports Intel<sup>®</sup> CPUs and Intel<sup>®</sup> integrated and discrete GPUs.

pip3 install -U pip
pip3 install tensorflow==2.9.3
@@ -43,7 +43,7 @@
Install the **OpenVINO™ integration with TensorFlow** PyPi release alongside the TensorFlow release on Github
* The TensorFlow wheel for Windows from PyPi does not have all the API symbols enabled which are required for **OpenVINO™ integration with TensorFlow**; users need to install the TensorFlow wheel from the assets of the Github release page.
* Includes pre-built libraries of OpenVINO™ version 2022.3.0; users do not need to install Intel<sup>®</sup> OpenVINO™ separately.
* Supports Intel<sup>®</sup> CPUs, Intel<sup>®</sup> integrated GPUs, and Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs); no VAD-M support.
* Supports Intel<sup>®</sup> CPUs and Intel<sup>®</sup> integrated and discrete GPUs.

pip3.9 install -U pip
pip3.9 install https://github.com/openvinotoolkit/openvino_tensorflow/releases/download/v2.2.0/tensorflow-2.9.2-cp39-cp39-win_amd64.whl
@@ -10,6 +10,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "1s7OK7vW3put"
@@ -19,8 +20,7 @@
"\n",
"OpenVINO™ integration with TensorFlow is designed for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. This product effectively delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as: \n",
"* Intel® CPUs\n",
"* Intel® integrated GPUs\n",
"* Intel® Movidius™ Vision Processing Units - referred to as VPU\n",
"* Intel® integrated and discrete GPUs\n",
"\n",
"**Overview**\n",
"\n",
@@ -10,6 +10,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "atwwZdgc3d3_"
@@ -21,9 +22,7 @@
"\n",
"OpenVINO™ integration with TensorFlow is designed for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. This product effectively delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as: \n",
"* Intel® CPUs\n",
"* Intel® integrated GPUs\n",
"* Intel® Movidius™ Vision Processing Units - referred to as VPU\n",
"* Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL\n",
"* Intel® integrated and discrete GPUs\n",
"\n",
"**Overview**\n",
"\n",
@@ -17,15 +17,15 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "898d9206",
"metadata": {},
"source": [
"[OpenVINO™ integration with TensorFlow](https://github.com/openvinotoolkit/openvino_tensorflow) is designed for TensorFlow developers who want to get started with [OpenVINO™](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) in their inferencing applications. This product delivers OpenVINO™ inline optimizations, which enhance inferencing performance of popular deep learning models with minimal code changes and without any accuracy drop. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as:\n",
"\n",
" - Intel® CPUs\n",
" - Intel® integrated GPUs\n",
" - Intel® Movidius™ Vision Processing Units - referred to as VPU"
" - Intel® integrated and discrete GPUs"
]
},
{
9 changes: 1 addition & 8 deletions openvino_tensorflow/CMakeLists.txt
@@ -145,9 +145,6 @@ if (APPLE)
endif()

set(IE_LIBS_PATH ${OPENVINO_ARTIFACTS_DIR}/runtime/lib/intel64/${CMAKE_BUILD_TYPE})
set(IE_LIBS
"${IE_LIBS_PATH}/pcie-ma2x8x.mvcmd"
)
set(TBB_LIBS ${OPENVINO_ARTIFACTS_DIR}/runtime/3rdparty/tbb/lib/)
install(FILES ${CMAKE_INSTALL_PREFIX}/../ocm/OCM/${OCM_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}")
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/${TF_CONVERSION_EXTENSIONS_MODULE_NAME}/${TF_CONVERSION_EXTENSIONS_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}")
@@ -161,7 +158,6 @@ elseif(WIN32)
set (IE_LIBS
"${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_gpu_plugin.${PLUGIN_LIB_EXT}"
"${IE_LIBS_PATH}/cache.json"
"${IE_LIBS_PATH}/pcie-ma2x8x.elf"
)
set(TBB_LIBS ${OPENVINO_ARTIFACTS_DIR}/runtime/3rdparty/tbb/bin/)
install(FILES ${CMAKE_INSTALL_PREFIX}/../ocm/OCM/${CMAKE_BUILD_TYPE}/${OCM_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}")
@@ -183,7 +179,6 @@ else()
set (IE_LIBS
"${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_gpu_plugin.${PLUGIN_LIB_EXT}"
"${IE_LIBS_PATH}/cache.json"
"${IE_LIBS_PATH}/pcie-ma2x8x.mvcmd"
)
set(TBB_LIBS ${OPENVINO_ARTIFACTS_DIR}/runtime/3rdparty/tbb/lib/)
install(FILES ${CMAKE_INSTALL_PREFIX}/../ocm/OCM/${OCM_LIB} DESTINATION "${OVTF_INSTALL_LIB_DIR}")
@@ -200,9 +195,7 @@ set (IE_LIBS
"${IE_LIBS_PATH}/${LIB_PREFIX}openvino_c.${OV_LIB_EXT_DOT}"
"${IE_LIBS_PATH}/${LIB_PREFIX}openvino_tensorflow_frontend.${OV_LIB_EXT_DOT}"
"${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_cpu_plugin.${PLUGIN_LIB_EXT}"
"${IE_LIBS_PATH}/${LIB_PREFIX}openvino_intel_myriad_plugin.${PLUGIN_LIB_EXT}"
"${IE_LIBS_PATH}/usb-ma2x8x.mvcmd"
"${IE_LIBS_PATH}/plugins.xml"
"${IE_LIBS_PATH}/plugins.xml"
)

# Install Openvino and TBB libraries
10 changes: 0 additions & 10 deletions python/CreatePipWhl.cmake
@@ -91,10 +91,8 @@ if (PYTHON)
if (APPLE)
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
set(libMKLDNNPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_cpu_plugind.so")
set(libmyriadPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_myriad_plugind.so")
else()
set(libMKLDNNPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_cpu_plugin.so")
set(libmyriadPluginPath "${CMAKE_CURRENT_BINARY_DIR}/python/openvino_tensorflow/libopenvino_intel_myriad_plugin.so")
endif()

# libMKLDNNPluginPath
@@ -111,14 +109,6 @@ endif()
endif()

# libmyriadPluginPath
execute_process(COMMAND
install_name_tool -add_rpath
@loader_path
${libmyriadPluginPath}
RESULT_VARIABLE result
ERROR_VARIABLE ERR
ERROR_STRIP_TRAILING_WHITESPACE
)
if(${result})
message(FATAL_ERROR "Cannot add rpath")
endif()