Hello,

I am using an external IMU to read its pitch angle and display it in the same frame as the NN detection results from an OAK-D camera. I am trying to understand the latency between the camera, the NN, the host, and the IMU, and I would like to make sure the latency calculations I've done here are accurate. Is there a more accurate way to implement them? Here is the main() function I am using:

```python
import threading
import time

import cv2
import depthai

# Helper functions (cameraSetupModel, videoWrite, FPSHandler, displayFrame,
# read_pitch_angle_thread) and the shared globals (ser, pitch_lock,
# latest_pitch_angle, imu_timestamp, MODEL, SYNCNN) are defined elsewhere.

def main():
    global imu_thread_running, imu_timestamp
    pitch_angle = 0.0  # just to initialize

    pipeline, labelMap, SHAPE = cameraSetupModel(MODEL, SYNCNN)
    outVideo = videoWrite(shape=SHAPE)

    imu_thread = threading.Thread(target=read_pitch_angle_thread, daemon=True)
    imu_thread.start()

    with depthai.Device(pipeline) as device:
        # The default values (maxSize=30, blocking=True) are better for H265 to avoid artifacts
        q_rgb = device.getOutputQueue("rgb", maxSize=6, blocking=False)
        q_nn = device.getOutputQueue("nn", maxSize=6, blocking=False)
        fps = FPSHandler()
        frame = None
        detections = []
        shape = (3, SHAPE, SHAPE)

        while True:
            firstTime = time.time()
            if SYNCNN:
                inRgb = q_rgb.get()
                inDet = q_nn.get()
            else:
                inRgb = q_rgb.tryGet()  # may return None, so guard before use
                inDet = q_nn.tryGet()

            if inRgb is not None:
                # Time from image capture until the frame was received on the host.
                latencyMs = (depthai.Clock.now() - inRgb.getTimestamp()).total_seconds() * 1000
                print(f"[LATENCY] IMAGE CAPTURING: {latencyMs:.4f} ms")
                camera_timestamp = time.time()  # Timestamp for camera frame capture

                frame = inRgb.getCvFrame()
                latencyGetFrame = (depthai.Clock.now() - inRgb.getTimestamp()).total_seconds() * 1000
                frameTime = time.time()  # Timestamp for frame retrieval
                print(f"[LATENCY] GET CV FRAME LATENCY: {latencyGetFrame:.4f} ms")

            if inDet is not None:
                latencyMs2 = (depthai.Clock.now() - inDet.getTimestamp()).total_seconds() * 1000
                print(f"[LATENCY] DETECTION PROCESS: {latencyMs2:.4f} ms")
                nn_timestamp = time.time()  # Timestamp for NN processing results

                detections = inDet.detections
                fps.next_iter()
                if frame is not None:
                    cv2.putText(frame, "Fps: {:.2f}".format(fps.fps()), (2, SHAPE - 4),
                                cv2.FONT_HERSHEY_TRIPLEX, 1, color=(0, 255, 0))

            if frame is not None and inRgb is not None:
                # Read the pitch angle published by the IMU thread
                with pitch_lock:
                    pitch_angle = latest_pitch_angle
                    imu_time = imu_timestamp  # Timestamp of the latest IMU data
                    latencyIMUFrame = (depthai.Clock.now() - inRgb.getTimestamp()).total_seconds() * 1000
                    print(f"[LATENCY] IMU-FRAME: {latencyIMUFrame:.4f} ms")
                    print(f"[LATENCY CLASSIC] IMU-FRAME: {(time.time() - imu_time) * 1000:.4f} ms")
                host_timestamp = inRgb.getTimestamp()  # Timestamp for host processing
                displayFrame("Frame", frame, detections=detections, pitch_angle=pitch_angle, labelMap=labelMap)

                # Write frame to the output video
                outVideo.write(frame)
                latencyHost = (depthai.Clock.now() - host_timestamp).total_seconds() * 1000
                print(f"[LATENCY] HOST DELAY: {latencyHost:.4f} ms")

            print(f"[LATENCY CLASSIC] ALL LOOP DELAY: {(time.time() - firstTime) * 1000:.4f} ms")
            print("=====================================")

            if cv2.waitKey(1) == ord('q'):
                break

    imu_thread_running = False
    ser.close()
    # Release the VideoWriter object and close all OpenCV windows
    outVideo.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

Thank you in advance for your help!

Best,
Cem

Hi @Uce
frame_TS = inRgb.getTimestamp() will give you the time at which the frame was captured (end of exposure, I think). The same applies to the detection node: since you are passing RGB frames to it, it will inherit the timestamp.

Use dai.Clock.now() for the current time.

You can also set the environment variable DEPTHAI_LEVEL=trace to check how much time each processing step takes.

More examples: https://docs.luxonis.com/software/depthai/examples/latency_measurement/
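
To illustrate, here is a minimal sketch of that pattern, modeled on the linked latency example (the pipeline setup is simplified and the stream name is arbitrary):

```python
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rgb")
cam.video.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    while True:
        inRgb = q.get()
        # getTimestamp() and dai.Clock.now() share the same monotonic clock,
        # so their difference is the capture-to-host latency.
        latency_ms = (dai.Clock.now() - inRgb.getTimestamp()).total_seconds() * 1000
        print(f"Capture-to-host latency: {latency_ms:.2f} ms")
```

Running the script with DEPTHAI_LEVEL=trace set in the environment prints the per-step timings on top of this.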

Thanks,
Jaka

Hi @jakaskerl

Thank you for your response. I have edited the code to use dai.Clock.now() and inRgb.getTimestamp(). My next question is how to integrate external IMU data into the depthai pipeline structure with multithreading. I added multithreading alongside the NN and RGB camera, and it works OK for now, but it would be good to implement the multithreading approach for the external IMU readings within the depthai pipeline, e.g. in a Script node.

Or do you think the external IMU does not need to be processed with multithreading? Any ideas?

Thank you in advance,
Cem

    Hi @Uce
    There would be no use in sending the IMU data to the device. It's best to use the multithreading approach along with dai.Clock.now() and sync them together on the host. Sending the IMU data to the device would introduce additional latency.
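
    For example, a minimal sketch of that approach, assuming a serial IMU that prints one pitch value per line (the port, baud rate, and parsing are placeholders):

    ```python
    import threading

    import depthai as dai
    import serial  # pyserial

    latest_pitch = 0.0
    latest_pitch_ts = None  # dai.Clock.now() value taken when the sample arrived
    pitch_lock = threading.Lock()
    running = True

    def imu_reader():
        global latest_pitch, latest_pitch_ts
        ser = serial.Serial("/dev/ttyUSB0", 115200)  # placeholder port/baud
        while running:
            line = ser.readline().decode().strip()
            with pitch_lock:
                latest_pitch = float(line)
                # Stamp with the same clock DepthAI uses for frame timestamps,
                # so this can be compared directly to inRgb.getTimestamp().
                latest_pitch_ts = dai.Clock.now()

    threading.Thread(target=imu_reader, daemon=True).start()
    ```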

    Thoughts?

      Hi @jakaskerl

      Thank you for your reply. I have some thoughts about synchronization.

      • To sync frames, NN results, and the external IMU together with multithreading, could you elaborate on which approach would be best?

      • As shown here, there are sequence-number sync and timestamp sync. Since the external IMU operates at a much higher frequency than the frame/NN stream, I assume I should go with timestamp sync.

      • Also, would I benefit from the Sync node? I assume it can only help with frame-NN sync, not IMU sync.

      • How do I use dai.Clock.now() together with time.time()? Since the external IMU is not in the depthai pipeline but in a separate thread, I guess I might need to sync using time.time(). Could you elaborate on that?

      Thanks in advance for your opinions.
      Best,
      Cem

      Hi @Uce

      • Run the IMU in a separate thread so that reading from it does not block the main loop. If it doesn't block the pipeline, then that is not needed.
      • Definitely use timestamps.
      • You can use the Sync node, but you would have to send the IMU data to the device (make an IMU message type and manually give it a timestamp). This introduces latency, since it takes some time to send the IMU message to the device. Alternatively, you can sync on the host: luxonis/depthai-experiments/blob/master/gen2-syncing/host-frame-sync.py
      • Don't use time.time, since the two clocks don't match. dai.Clock is a monotonic clock that returns a timedelta since the PC last booted. Use it instead of time.time and compare it to getTimestamp() to sync messages; see the sketch below.
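
      Along those lines, a minimal host-side sync sketch, assuming the IMU thread appends (dai.Clock.now(), pitch) tuples to a shared buffer (imu_buffer and closest_pitch are hypothetical names):

      ```python
      from collections import deque

      # Filled by the IMU thread with (dai.Clock.now(), pitch) tuples.
      imu_buffer = deque(maxlen=200)

      def closest_pitch(frame_ts):
          """Return the IMU sample whose timestamp is nearest to the frame's."""
          # Both timestamps are timedeltas against the same monotonic clock,
          # so their difference is a meaningful time offset.
          return min(imu_buffer, key=lambda s: abs((s[0] - frame_ts).total_seconds()))

      # In the frame loop:
      # ts, pitch = closest_pitch(inRgb.getTimestamp())
      ```

      Because the IMU runs much faster than the camera, the nearest sample is typically within a few milliseconds of the frame timestamp.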

      Thanks,
      Jaka
