Latency measurement at different FPS

Hi!

I recently started working with the OAK-1 AF (USB). After running the latency measurement code (https://docs.luxonis.com/projects/api/en/latest/samples/host_side/latency_measurement/#latency-measurement) successfully, I got the expected average latency of 30-40 ms with the FPS set to both 10 and 60.
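For reference, the measurement in that sample boils down to subtracting the device-side frame timestamp from the host clock at the moment the frame arrives (this is the same line I use in the full script below):

```python
# Latency = host time at receipt minus device-side capture timestamp, in ms
latencyMs = (dai.Clock.now() - imgFrame.getTimestamp()).total_seconds() * 1000
```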

Next, I combined this latency code with the detection parser example (examples/NeuralNetwork/detection_parser.py) and ran it at 10 and 60 FPS, which resulted in very different latencies:

Do you have an idea why the latency is so much higher at higher FPS?

In my understanding, reading out the sensor should take the same amount of time regardless of the FPS, and in particular the network's inference latency should stay the same, since the input image size does not change. Am I missing something completely?

Code:

```python
from pathlib import Path

import cv2
import depthai as dai
import numpy as np
import argparse

nnPath = str((Path(__file__).parent / Path('/Users/lasse/depthai-python/examples/models/mobilenet-ssd_openvino_2021.4_6shave.blob')).resolve().absolute())

labelMap = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow",
            "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

# Set up the pipeline and link the nodes

# Create pipeline
pipeline = dai.Pipeline()

# This might help reduce the latency on some systems
pipeline.setXLinkChunkSize(0)

# Define source and outputs
camRgb = pipeline.create(dai.node.ColorCamera)
nn = pipeline.create(dai.node.NeuralNetwork)
det = pipeline.create(dai.node.DetectionParser)

xoutRgb = pipeline.create(dai.node.XLinkOut)
xoutRgb.setStreamName("rgb")
nnOut = pipeline.create(dai.node.XLinkOut)
nnOut.setStreamName("nn")

nn.passthrough.link(xoutRgb.input)

camRgb.setFps(60)
camRgb.setPreviewSize(300, 300)
camRgb.setInterleaved(False)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.isp.link(xoutRgb.input)

# Define a neural network that will make predictions based on the source frames
nn.setNumInferenceThreads(2)
nn.input.setBlocking(False)

blob = dai.OpenVINO.Blob(nnPath)
nn.setBlob(blob)
det.setBlob(blob)
det.setNNFamily(dai.DetectionNetworkType.MOBILENET)
det.setConfidenceThreshold(0.5)

camRgb.preview.link(nn.input)
nn.out.link(det.input)
det.out.link(nnOut.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:

    print(device.getUsbSpeed(), ' - FPS set to ', camRgb.getFps())

    # Set queue maxSize to 1 and blocking to False -> I am only interested in the most recent entry
    qRgb = device.getOutputQueue(name="rgb", maxSize=1, blocking=False)
    qDet = device.getOutputQueue(name="nn", maxSize=1, blocking=False)

    detections = []
    N = 10
    count = 0

    diffs_img = np.array([])
    diffs_nn = np.array([])

    while count < N:
        # Get image from queue
        imgFrame = qRgb.get()
        latencyMs = (dai.Clock.now() - imgFrame.getTimestamp()).total_seconds() * 1000
        diffs_img = np.append(diffs_img, latencyMs)
        print('IMG: Latency: {:.2f} ms, Average latency: {:.2f} ms, Std: {:.2f}'.format(latencyMs, np.average(diffs_img), np.std(diffs_img)))

        # Get detection from queue
        inDet = qDet.get()
        latencyMs = (dai.Clock.now() - inDet.getTimestamp()).total_seconds() * 1000
        diffs_nn = np.append(diffs_nn, latencyMs)
        print('nn: Latency: {:.2f} ms, Average latency: {:.2f} ms, Std: {:.2f}'.format(latencyMs, np.average(diffs_nn), np.std(diffs_nn)))

        # Print out all detections
        '''if inDet is not None:
            detections = [detection for detection in inDet.detections]
            for detection in detections:
                print(labelMap[detection.label], detection.confidence * 100, ' %')'''

        count += 1
```

Hi @lasse
We have a guide regarding low-latency NN inference here. Basically, what I think happens is that the color camera constantly has to lower its FPS to match the NN's throughput, which introduces overhead and increases the overall latency.
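As a quick experiment (a rough sketch; the 25 FPS cap below is an assumed value, not a measured one, so check what your model actually sustains), you could cap the camera FPS near the NN's real throughput in your script and see whether the latency drops back to the 30-40 ms range:

```python
# Assumed value: cap the camera near the NN's sustainable throughput
# (measure your model's actual FPS first; 25 is only a placeholder)
camRgb.setFps(25)

# Keep the NN input non-blocking with a queue size of 1 so stale frames
# are dropped instead of aging in the queue before inference
nn.input.setBlocking(False)
nn.input.setQueueSize(1)
```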

Thanks,
Jaka