Hello,

Does anyone have experience running gaze detection on a pre-recorded video instead of livestreaming from a webcam? Specifically, using the gen2-gaze-detection library.

I have a new use case: use the OAK-D to record raw video (just like a webcam), then in post-processing apply gaze detection and map the gaze to fixed Areas of Interest, e.g. road, left_mirror, right_mirror, etc.
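For the post-processing step, here is a minimal sketch of mapping a gaze point (in frame pixel coordinates) to named Areas of Interest. All rectangle coordinates and the `classify_gaze` helper are hypothetical placeholders; in practice you would calibrate the AOI boxes for your camera mount.

```python
# Hypothetical AOI rectangles as (x_min, y_min, x_max, y_max) in pixels.
# These values are made up for illustration only.
AOIS = {
    "left_mirror":  (0, 100, 150, 300),
    "road":         (200, 0, 600, 400),
    "right_mirror": (650, 100, 800, 300),
}

def classify_gaze(x, y, aois=AOIS):
    """Return the name of the first AOI containing (x, y), or None."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

print(classify_gaze(400, 200))  # inside the "road" rectangle -> "road"
print(classify_gaze(50, 200))   # inside "left_mirror" -> "left_mirror"
```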

    vedoua
    You want to run inference on a pre-recorded video? You can do that by sending the frames to the device one by one.

    import cv2
    import depthai as dai

    # "host_in" must match the stream name of an XLinkIn node in your pipeline
    hostInQ = device.getInputQueue("host_in", maxSize=4, blocking=False)

    frame = cv2.resize(frame, (800, 800))  # resize to the network's input size
    h, w, c = frame.shape
    # convert interleaved HWC BGR to planar CHW, as BGR888p expects
    bgr_planar = frame.transpose(2, 0, 1).flatten()

    imgFrame = dai.ImgFrame()
    imgFrame.setType(dai.ImgFrame.Type.BGR888p)
    imgFrame.setWidth(w)
    imgFrame.setHeight(h)
    imgFrame.setData(bgr_planar)
    hostInQ.send(imgFrame)
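For reference, a minimal NumPy-only sketch of what the `transpose(2, 0, 1).flatten()` step produces: BGR888p is a planar layout, so all blue values come first, then all green, then all red (rather than interleaved B,G,R per pixel as OpenCV stores them).

```python
import numpy as np

# Tiny 2x2 "frame" with distinct values per channel,
# shaped (h, w, c) like an OpenCV image.
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[:, :, 0] = 1  # blue channel
frame[:, :, 1] = 2  # green channel
frame[:, :, 2] = 3  # red channel

# Interleaved HWC -> planar CHW, then flattened to one buffer.
planar = frame.transpose(2, 0, 1).flatten()
print(planar.tolist())  # [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
```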

    Thank you, will try this today!