Chinese documentation

Pedestrian reidentification

This example demonstrates how to run 2-stage inference on DepthAI: it runs the person-detection-retail-0013 and person-reidentification-retail-0288 models on the device itself.
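The re-identification model outputs a descriptor (embedding) vector per person crop; two crops are typically judged to be the same person when their descriptors are close under cosine similarity (the exact threshold is an application choice, not fixed by the model). A minimal pure-Python sketch of that comparison, with made-up vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical threshold; tune for your own data
SAME_PERSON_THRESHOLD = 0.5

def same_person(desc_a, desc_b):
    return cosine_similarity(desc_a, desc_b) > SAME_PERSON_THRESHOLD
```

In the real demo the descriptors come from the NN results returned to the host; this sketch only shows the matching rule.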

The original OpenVINO demo, on which this example is based, is here.

Demo

Pedestrian Re-Identification

How it works

  1. The color camera produces high-res frames and sends them to the host, the Script node, and the downscale ImageManip node
  2. The downscale ImageManip node downscales the high-res frame to 544x320, the input size required by the 1st NN in this pipeline, the person-detection-retail-0013 object detection model
  3. 544x320 frames are sent from the downscale ImageManip node to the object detection model (MobileNetSpatialDetectionNetwork)
  4. Object detections are sent to the Script node
  5. The Script node first syncs the object detection message with the frame. It then goes through all detections and creates an ImageManipConfig for each detected person. These configs are then sent to ImageManip together with the synced high-res frame
  6. ImageManip crops each person out of the original frame and resizes the crop to (128, 256), the input size required by the person-reidentification NN model
  7. Person crops are sent to the 2nd NN, the person-reidentification model. NN results are sent back to the host
  8. Frames, object detections, and re-identification results are all synced on the host side and then displayed to the user
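Two ideas from the steps above can be sketched in pure Python (helper names here are hypothetical, not from the demo's source): clamping a normalized detection bounding box into the crop rectangle a Script node would place into an ImageManipConfig (steps 5-6), and pairing messages from several streams by their shared sequence number (steps 5 and 8):

```python
def to_crop_rect(det, pad=0.0):
    """Clamp a normalized (xmin, ymin, xmax, ymax) detection to [0, 1],
    optionally padded, as ImageManipConfig.setCropRect() expects."""
    xmin, ymin, xmax, ymax = det
    return (max(0.0, xmin - pad), max(0.0, ymin - pad),
            min(1.0, xmax + pad), min(1.0, ymax + pad))

class SequenceSync:
    """Pair messages from several streams that share a sequence number."""
    def __init__(self, stream_names):
        self.stream_names = stream_names
        self.pending = {}  # seq -> {stream_name: message}

    def add(self, stream_name, seq, msg):
        """Store a message; return the full set once every stream arrived."""
        entry = self.pending.setdefault(seq, {})
        entry[stream_name] = msg
        if len(entry) == len(self.stream_names):
            return self.pending.pop(seq)
        return None  # still waiting for the other streams
```

The device-side Script node and the host loop in the demo implement the same pattern against DepthAI message objects, whose `getSequenceNum()` provides the pairing key.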

2-stage NN pipeline graph

image

DepthAI Pipeline Graph was used to generate this image.

Pre-requisites

Install requirements:

python3 -m pip install -r requirements.txt

Usage

python main.py