# DepthAI with Inference Engine

This example shows how to run inference on your DepthAI device using OpenVINO's Inference Engine. It does not use the DepthAI library (so there are no pipelines); instead, it runs the DepthAI device in NCS (Neural Compute Stick) mode.
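As a minimal sketch of what "NCS mode" inference looks like, the snippet below uses OpenVINO's (legacy 2021.x) Inference Engine Python API; a DepthAI device in NCS mode enumerates as a `MYRIAD` device. The IR filenames (`model.xml` / `model.bin`) are placeholders, not the files shipped with this example.

```python
# A minimal sketch, assuming OpenVINO's legacy Inference Engine Python API
# (pip install openvino) and placeholder IR filenames.
import numpy as np


def to_nchw(frame):
    """Convert an HxWxC uint8 frame into the 1xCxHxW float32 layout IE expects."""
    chw = frame.transpose(2, 0, 1).astype(np.float32)
    return chw[np.newaxis, ...]  # add batch dimension


def run_on_device(frame, device_name="MYRIAD"):
    # A DepthAI device in NCS mode shows up as a MYRIAD device to the IE.
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")  # assumed names
    exec_net = ie.load_network(network=net, device_name=device_name)
    input_name = next(iter(net.input_info))
    return exec_net.infer({input_name: to_nchw(frame)})
```

`run_on_device` is only a template: it requires OpenVINO and a connected device, while `to_nchw` is plain NumPy and can be reused for any NCHW model input.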

A common reason to run your model with the IE (Inference Engine) first is to check whether your model's conversion to OpenVINO's IR format (e.g. from TensorFlow/ONNX) was successful. Once it runs correctly with the IE, you can proceed with compiling the IR model into the .blob required by the DepthAI library.
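As a sketch of that last step, the IR-to-.blob compilation can be done with `compile_tool`, which ships with the OpenVINO toolkit; the filenames below are placeholders for your own IR files.

```sh
# Assumed filenames; compile_tool ships with the OpenVINO toolkit.
compile_tool -m model.xml -d MYRIAD -o model.blob
```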

The NN model (facial cartoonization) was taken from PINTO's model-zoo.

## Demo

demo

## Installation

```sh
python3 -m pip install -r requirements.txt
```

## Usage

```sh
python3 main.py
```