Port 2021.1 documentation updates for GNA plugin and speech libs and demos. (openvinotoolkit#2564)

* Update docs for speech libs and demos (openvinotoolkit#2518)

* [GNA] Documentation updates for 2021.1 (openvinotoolkit#2460)

* [GNA] Documentation updates for 2021.1

* Take Mike's comments into account

* More fixes according to review

* Fix processor generation names
dorloff authored Oct 7, 2020
1 parent e9fde8f commit 1cc25fc
Showing 3 changed files with 37 additions and 10 deletions.
37 changes: 32 additions & 5 deletions docs/IE_DG/supported_plugins/GNA.md
@@ -19,18 +19,18 @@ Devices with Intel® GNA support:

* [Amazon Alexa* Premium Far-Field Developer Kit](https://developer.amazon.com/en-US/alexa/alexa-voice-service/dev-kits/amazon-premium-voice)

* [Gemini Lake](https://ark.intel.com/content/www/us/en/ark/products/codename/83915/gemini-lake.html):
* [Intel® Pentium® Silver Processors N5xxx, J5xxx and Intel® Celeron® Processors N4xxx, J4xxx](https://ark.intel.com/content/www/us/en/ark/products/codename/83915/gemini-lake.html):
- Intel® Pentium® Silver J5005 Processor
- Intel® Pentium® Silver N5000 Processor
- Intel® Celeron® J4005 Processor
- Intel® Celeron® J4105 Processor
- Intel® Celeron® Processor N4100
- Intel® Celeron® Processor N4000

* [Cannon Lake](https://ark.intel.com/content/www/us/en/ark/products/136863/intel-core-i3-8121u-processor-4m-cache-up-to-3-20-ghz.html):
* [Intel® Core™ Processors (formerly codenamed Cannon Lake)](https://ark.intel.com/content/www/us/en/ark/products/136863/intel-core-i3-8121u-processor-4m-cache-up-to-3-20-ghz.html):
Intel® Core™ i3-8121U Processor

* [Ice Lake](https://ark.intel.com/content/www/us/en/ark/products/codename/74979/ice-lake.html):
* [10th Generation Intel® Core™ Processors (formerly codenamed Ice Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/74979/ice-lake.html):
- Intel® Core™ i7-1065G7 Processor
- Intel® Core™ i7-1060G7 Processor
- Intel® Core™ i5-1035G4 Processor
@@ -42,6 +42,8 @@ Intel® Core™ i3-8121U Processor
- Intel® Core™ i3-1000G1 Processor
- Intel® Core™ i3-1000G4 Processor

* All [11th Generation Intel® Core™ Processors (formerly codenamed Tiger Lake)](https://ark.intel.com/content/www/us/en/ark/products/codename/88759/tiger-lake.html).

> **NOTE**: On platforms where Intel® GNA is not enabled in the BIOS, the driver cannot be installed, so the GNA plugin uses the software emulation mode only.

## Drivers and Dependencies
@@ -64,10 +66,11 @@ The list of supported layers can be found
[here](Supported_Devices.md) (see the GNA column of Supported Layers section).
Limitations include:

- Only 1D convolutions (in the models converted from [Kaldi](../../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md) framework) are natively supported
- Only 1D convolutions are natively supported in the models converted from:
  - [Kaldi](../../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md) framework;
  - [TensorFlow](../../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md) framework; note that for TensorFlow models, the option `--disable_nhwc_to_nchw` must be used when running the Model Optimizer (see the example command after this list).
- The number of output channels for convolutions must be a multiple of 4
- Permute layer support is limited to cases where no data reordering is needed or where reordering affects only two dimensions, at least one of which is not greater than 8
- Power layer only supports the power parameter equal to 1
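
For illustration, a possible Model Optimizer invocation for a TensorFlow model is sketched below. The `mo_tf.py` entry point and the `model.pb` file name are assumptions for the example; only the `--disable_nhwc_to_nchw` option comes from the limitation above.

```sh
# Hypothetical example: convert a frozen TensorFlow model for use with the GNA plugin.
# --disable_nhwc_to_nchw keeps the original layout, which is required for GNA-supported 1D convolutions.
python3 mo_tf.py --input_model model.pb --disable_nhwc_to_nchw
```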

#### Experimental Support for 2D Convolutions

@@ -159,6 +162,30 @@ Heterogeneous plugin was tested with the Intel® GNA as a primary device and

> **NOTE:** Due to a limitation of the Intel® GNA backend library, heterogeneous support is limited to cases where, in the resulting sliced graph, only one subgraph is scheduled to run on GNA\_HW or GNA\_SW devices.
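
For reference, a minimal sketch of loading a network on the heterogeneous plugin with GNA as the primary device and CPU as the fallback could look like the following. The `model.xml` path and the variable names are illustrative, not taken from the original text.

```cpp
#include <inference_engine.hpp>

// Minimal sketch: layers supported by GNA run on it, the rest fall back to CPU.
InferenceEngine::Core core;
auto network = core.ReadNetwork("model.xml");                      // IR produced by the Model Optimizer
auto executableNet = core.LoadNetwork(network, "HETERO:GNA,CPU");  // GNA is the primary device
```
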
## Recovery from interruption by high-priority Windows audio processes\*

As noted in the introduction, GNA is designed for real-time workloads such as noise reduction.
For such workloads, processing should be time-constrained; otherwise, extra delays may cause undesired effects such as
audio "glitches". To make sure that processing can satisfy real-time requirements, the GNA driver provides a Quality of
Service (QoS) mechanism that interrupts requests which might cause high-priority Windows audio processes to miss their
schedule. As a result, long-running GNA tasks are terminated early.

Applications should be prepared for this situation.
If an inference (in `GNA_HW` mode) cannot be executed because of such an interruption, `InferRequest::Wait()` returns status code
`StatusCode::INFER_NOT_STARTED` (note that this status code will be replaced with a more meaningful one in future releases).
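
A minimal sketch of detecting this condition is shown below; the `inferRequest` variable name is illustrative.

```cpp
// Minimal sketch: detect that a hardware inference was interrupted by the QoS mechanism.
inferRequest.StartAsync();
InferenceEngine::StatusCode status =
    inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
if (status == InferenceEngine::StatusCode::INFER_NOT_STARTED) {
    // The request was preempted by a high-priority audio workload; recover as described below.
}
```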

Any application working with GNA must handle this code properly. Various strategies are possible.
One option is to switch to GNA SW emulation mode immediately:

```cpp
// Reconfigure the already loaded network to run on the software emulation backend instead of GNA hardware.
std::map<std::string, Parameter> newConfig;
newConfig[GNAConfigParams::KEY_GNA_DEVICE_MODE] = Parameter("GNA_SW_EXACT");
executableNet.SetConfig(newConfig);
```

The application can then resubmit the request and, after some time, switch back to `GNA_HW` mode, assuming that the competing audio process has finished. A sketch of this switch-back step follows.
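
The variable names and the retry policy below are illustrative; they assume the `executableNet` object from the snippet above and an `inferRequest` created from it.

```cpp
// After waiting for the competing audio workload to finish, go back to hardware mode
// and resubmit the interrupted request.
std::map<std::string, Parameter> hwConfig;
hwConfig[GNAConfigParams::KEY_GNA_DEVICE_MODE] = Parameter("GNA_HW");
executableNet.SetConfig(hwConfig);
inferRequest.StartAsync();
```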

## See Also

* [Supported Devices](Supported_Devices.md)
2 changes: 1 addition & 1 deletion docs/IE_DG/supported_plugins/Supported_Devices.md
@@ -94,7 +94,7 @@ the supported output precision depends on the actual underlying devices. _Gener
|GPU plugin |Supported |Supported |Supported |Supported |
|FPGA plugin |Not supported |Supported |Supported |Not supported |
|VPU plugins |Not supported |Supported |Supported |Supported |
|GNA plugin |Not supported |Not supported |Not supported |Supported |
|GNA plugin |Not supported |Supported |Supported |Supported |

### Supported Output Layout

@@ -1,7 +1,7 @@
# Speech Library and Speech Recognition Demos {#openvino_inference_engine_samples_speech_libs_and_demos_Speech_libs_and_demos}

Starting with the 2020.1 release, OpenVINO&trade; provides a set of libraries and demos to demonstrate end-to-end
speech recognition, as well as new acoustic and language models that can work with these demos.
Intel® distributions of OpenVINO&trade; toolkit for Linux* OS and Windows* OS provide a set of libraries and demos to
demonstrate end-to-end speech recognition, as well as new acoustic and language models that can work with these demos.
The libraries are designed for preprocessing (feature extraction) to get a feature vector from a speech signal, as well
as postprocessing (decoding) to produce text from scores. Together with OpenVINO&trade;-based neural-network speech recognition,
these libraries provide an end-to-end pipeline converting speech to text. This pipeline is demonstrated by the
@@ -38,7 +38,7 @@ Additionally, [new acoustic and language models](http://download.01.org/opencv/2

To download pretrained models and build all dependencies:

* On Linux* OS or macOS*, use the shell script `<INSTALL_DIR>/deployment_tools/demo/demo_speech_recognition.sh`
* On Linux* OS, use the shell script `<INSTALL_DIR>/deployment_tools/demo/demo_speech_recognition.sh`

* On Windows* OS, use the batch file `<INSTALL_DIR>\deployment_tools\demo\demo_speech_recognition.bat`
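
For example, on Linux* OS the download step might look like the following; the installation path is an assumption, so substitute your actual `<INSTALL_DIR>`.

```sh
# Assuming the default installation directory; adjust the path for your setup.
cd /opt/intel/openvino/deployment_tools/demo
./demo_speech_recognition.sh
```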

@@ -51,7 +51,7 @@ The script follows the steps below:

If you are behind a proxy, set the following environment variables in a console session before running the script:

* On Linux* OS and macOS*:
* On Linux* OS:

```sh
export http_proxy=http://{proxyHost}:{proxyPort}
```
