Update README.md
tonyreina authored Dec 27, 2020
1 parent 2d2f179 commit e802c84
Showing 1 changed file with 4 additions and 2 deletions.
6 changes: 4 additions & 2 deletions 2D/README.md
@@ -54,10 +54,12 @@ python $INTEL_OPENVINO_DIR/deployment_tools/model_optimizer/mo_tf.py \
9. Once you have the OpenVINO™ IR model, you can run the command:

```
-python plot_openvino_inference_examples.py
+python plot_openvino_inference_examples.py --device CPU
```

-It should give you the same output as the `plot_tf_inference_examples.py` but execute faster on the same CPU.
+It should give you the same output as `plot_tf_inference_examples.py` but execute faster on the same CPU. You can try the options `--device GPU` or `--device MYRIAD` if you have the [Intel® integrated GPU](https://ark.intel.com/content/www/us/en/ark/products/graphics/197532/intel-iris-plus-graphics.html) or [Intel® Neural Compute Stick™ 2 (NCS2)](https://ark.intel.com/content/www/us/en/ark/products/140109/intel-neural-compute-stick-2.html) installed on your computer.
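The `--device` string is simply the name of the OpenVINO™ plugin the script hands to the runtime when it loads the network. A minimal sketch of how such a flag might be parsed (the `parse_device` helper here is a hypothetical illustration, not code from the actual script):

```python
import argparse


def parse_device(argv):
    """Parse a --device flag as the inference scripts plausibly do.

    NOTE: hypothetical helper for illustration only. OpenVINO accepts
    plugin names such as CPU, GPU (integrated graphics), and MYRIAD
    (the Neural Compute Stick 2); the chosen string is passed verbatim
    to the runtime when the network is loaded.
    """
    parser = argparse.ArgumentParser(description="Device selection sketch")
    parser.add_argument(
        "--device",
        default="CPU",
        choices=["CPU", "GPU", "MYRIAD"],
        help="OpenVINO plugin name to run inference on",
    )
    args = parser.parse_args(argv)
    return args.device


# Example: selecting the Neural Compute Stick 2
print(parse_device(["--device", "MYRIAD"]))  # prints MYRIAD
```

With this pattern, running on a different accelerator is purely a command-line change; no model or code edits are needed.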

For a complete demo featuring the [Intel® Neural Compute Stick™ 2 (NCS2)](https://ark.intel.com/content/www/us/en/ark/products/140109/intel-neural-compute-stick-2.html), try out the [Intel® DevCloud for the Edge](https://devcloud.intel.com/edge/advanced/sample_applications/). You'll be able to run inference on a wide range of Intel® hardware using the same OpenVINO™ pipeline.

![prediction28](images/pred28.png)

