If you are running an object detection model, eg. MobileNet or YOLO, it usually requires a smaller frame for inferencing (eg. 300x300 or 416x416). Instead of displaying bounding boxes on such small frames, you can stream higher resolution frames (eg. the video output from ColorCamera) and display the bounding boxes on those. There are several approaches to achieving that, and this demo goes through them.
Just using the small inferencing frame. Here we use the passthrough frame of MobileNetDetectionNetwork's output, so the bounding boxes are in sync with the frame. Another option would be to stream preview frames from ColorCamera and sync them with the detections on the host (or not sync at all). The 300x300 frame is shown for reference.
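A minimal sketch of this approach, assuming the DepthAI v2 Python API and a model blob at a hypothetical path mobilenet.blob. Because nn.passthrough emits exactly one frame per inference result, pairing the two queues keeps boxes and frames in sync without any host-side timestamp matching:

```python
import cv2
import depthai as dai

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(300, 300)  # NN input size
camRgb.setInterleaved(False)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("mobilenet.blob")  # hypothetical path to your model blob
nn.setConfidenceThreshold(0.5)
camRgb.preview.link(nn.input)

# Stream the passthrough frame (the exact frame the NN ran on) and the detections
xoutFrame = pipeline.create(dai.node.XLinkOut)
xoutFrame.setStreamName("frame")
nn.passthrough.link(xoutFrame.input)

xoutDet = pipeline.create(dai.node.XLinkOut)
xoutDet.setStreamName("det")
nn.out.link(xoutDet.input)

with dai.Device(pipeline) as device:
    qFrame = device.getOutputQueue("frame", maxSize=4, blocking=False)
    qDet = device.getOutputQueue("det", maxSize=4, blocking=False)
    while True:
        frame = qFrame.get().getCvFrame()  # 300x300 BGR frame
        for det in qDet.get().detections:
            # Detection coordinates are normalized to [0..1]
            x1, y1 = int(det.xmin * 300), int(det.ymin * 300)
            x2, y2 = int(det.xmax * 300), int(det.ymax * 300)
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imshow("300x300 passthrough", frame)
        if cv2.waitKey(1) == ord('q'):
            break
```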
A simple solution to the low resolution is to stream high resolution frames (eg. the video output from ColorCamera) to the host and draw bounding boxes on them. For the bounding boxes to match the frame, preview and video sizes should have the same aspect ratio, in this case 1:1. In the example, we downscale the 4K resolution to 720P, so the maximum square resolution is 720x720, which is exactly the resolution we used (camRgb.setVideoSize(720,720)). We could also use 1080P resolution and stream 1080x1080 frames back to the host.
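A sketch of the relevant camera configuration, again assuming the DepthAI v2 API; the ISP scale numbers below are one way to get a 720P stream out of the 4K sensor:

```python
import depthai as dai

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
camRgb.setIspScale(1, 3)         # downscale 4K (3840x2160) to 1280x720
camRgb.setVideoSize(720, 720)    # square video output for display
camRgb.setPreviewSize(300, 300)  # square NN input, same 1:1 aspect ratio
camRgb.setInterleaved(False)

# Link camRgb.video to an XLinkOut for display and camRgb.preview to the NN.
# Because both outputs are 1:1, a normalized detection maps onto the video
# frame by simply multiplying its coordinates by 720.
```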
A problem that we often encounter with models is that their aspect ratio is 1:1, not eg. 16:9 like our camera's resolution. This means that some of the FOV will be lost. In our How to maximize FOV tutorial we showcased that changing the aspect ratio will preserve the whole FOV of the camera, but it will "squeeze"/"stretch" the frame, as you can see below.
To avoid stretching the frame (as it can have an effect on NN accuracy), we could also stream the full FOV video from the device and do inferencing on 300x300 frames. This would, however, mean that we have to re-calculate the bounding boxes to match the different aspect ratio of the image. This approach does not preserve the whole FOV for inferencing; it only displays the bounding boxes on the full FOV video frames.
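A sketch of that coordinate re-calculation, assuming the preview is a centered square crop of the full-FOV video frame (the default keep-aspect-ratio behaviour); the helper name is our own:

```python
def crop_to_full_fov(det, frame_w, frame_h):
    """Map normalized detection coords from a centered square crop
    (side == frame_h) back to pixel coords on the full-FOV frame.
    `det` is assumed to have normalized xmin/ymin/xmax/ymax fields."""
    crop = frame_h                    # square crop side, eg. 1080 on a 1920x1080 frame
    x_offset = (frame_w - crop) // 2  # horizontal offset of the centered crop
    x1 = int(det.xmin * crop) + x_offset
    y1 = int(det.ymin * crop)
    x2 = int(det.xmax * crop) + x_offset
    y2 = int(det.ymax * crop)
    return x1, y1, x2, y2
```

Note that with this mapping a detection spanning the whole NN input lands on the centered square region of the video frame; objects outside that crop are never seen by the NN, which is why this approach displays on, but does not inference on, the full FOV.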
Install the requirements before running the demo:

python3 -m pip install -r requirements.txt