Hello everyone, I am having a problem and would appreciate it if you could help me.

I have two IMX378 modules connected to an OAK-FFC-4P board, and I want to display only one of the cameras at a time; when I press the space bar, the program switches to the other camera. The problem is that right after a switch, it shows me 9 frames from the last moment I was viewing that camera instead of jumping straight to the current moment, so the transition between the two videos is not fluid. Here is a graphical example to make it clear:

What I find strange is that the output queue is configured with a size of 1 frame, so it should only ever hold the latest frame (the current moment). I don't understand where those additional 9 frames from a past moment come from.

Here is the code I am using:

import cv2
import depthai as dai
import numpy as np

def getFrame(queue):
    # Get frame from queue
    frame = queue.get()
    # Convert frame to OpenCV format and return
    return frame.getCvFrame()


if __name__ == '__main__':

    fps = 3
    current_state = "Framecam1a"

    # Define a pipeline
    pipeline = dai.Pipeline()
    device = dai.Device()


    # DEFINE SOURCES AND OUTPUTS
    cam1a = pipeline.create(dai.node.ColorCamera)
    cam1b = pipeline.create(dai.node.ColorCamera)
    cam1a.setBoardSocket(dai.CameraBoardSocket.CAM_A)  # 4-lane MIPI  IMX378
    cam1b.setBoardSocket(dai.CameraBoardSocket.CAM_D)  # 4-lane MIPI  IMX378
    cam1a.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
    cam1b.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
    cam1a.setFps(fps)
    cam1b.setFps(fps)

    # Set output Xlink 
    outcam1a = pipeline.create(dai.node.XLinkOut)
    outcam1b = pipeline.create(dai.node.XLinkOut)
    outcam1a.setStreamName("cam1a")
    outcam1b.setStreamName("cam1b")
      
    # LINKING
    cam1a.isp.link(outcam1a.input)
    cam1b.isp.link(outcam1b.input)

    # Pipeline is defined, now we can connect to the device
    with device:
        device.startPipeline(pipeline)
        # Get output queues.
        cam1a = device.getOutputQueue(name="cam1a", maxSize=1)
        cam1b = device.getOutputQueue(name="cam1b", maxSize=1)
            
        while True:

            # Acquire frames only for the current state
            if current_state == "Framecam1a":
                Frame = getFrame(cam1a)
            else:
                Frame = getFrame(cam1b)

            # Display the current state in the window
            cv2.imshow("Frame", Frame)

            # Check for keyboard input
            key = cv2.waitKey(1) & 0xFF

            if key == ord(' '):
                # Change the state
                if current_state == "Framecam1a":
                    current_state = "Framecam1b"
                else:
                    current_state = "Framecam1a"

            if key == ord('q'):
                break

    Hi MartnTous
    I suspect that with 2x 12MP streams the device struggles to process that many pixels, which is why you see the delay.
    You can check this by running the script with DEPTHAI_LEVEL=trace python3 script.py

    Thanks,
    Jaka

    Hi @jakaskerl, thanks for your help.

    It seems it is not due to the high resolution: even if I lower it to 1080_P, where the effect is harder to notice because the program runs faster, I still see some past frames after switching cameras before the view jumps to the current ones.

    I also tried the command you mentioned, DEPTHAI_LEVEL=trace python3 script.py, but the behaviour is the same.

      Hi MartnTous
      Try setting the queues to blocking=False. This fixes the issue for me (visually; I didn't check the timestamps). This will discard any frame that is not read in time.
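
      On the host side that would look something like this (a minimal sketch based on the script above; only the getOutputQueue calls change):

      cam1a = device.getOutputQueue(name="cam1a", maxSize=1, blocking=False)
      cam1b = device.getOutputQueue(name="cam1b", maxSize=1, blocking=False)

      With maxSize=1 and blocking=False the queue always holds at most the newest frame; get() still blocks until a frame arrives, but anything not read in time is dropped rather than held back.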

      Thanks,
      Jaka


      Hi @jakaskerl, thank you very much!

      It works with blocking=False; I no longer see those old frames when I switch cameras.

      However, I was trying to avoid this because it roughly doubles the delay between capture and display, and at the same time the frame acquisition rate becomes quite irregular.

      I think this is because in the first case (blocking=True) there is always a frame already waiting in the buffer when I request one, while in the second case a frame may or may not be available, depending on whether it was read in time, right?
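
      One way to see this directly is to probe the queue first (a small sketch; tryGet() is the non-blocking counterpart of get() and returns None when the queue is empty):

      msg = cam1a.tryGet()          # non-blocking read: None if nothing is buffered yet
      if msg is None:
          msg = cam1a.get()         # nothing was waiting, so block until the next frame arrives
      Frame = msg.getCvFrame()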

      Hi @MartnTous
      There is a buildup of frames on the device side when the host queue is blocking.

      If you set the XLinkOut inputs to non-blocking and their queue size to 1, the frames should go through directly.

      outcam1a.input.setBlocking(False)
      outcam1b.input.setBlocking(False)
      outcam1a.input.setQueueSize(1)
      outcam1b.input.setQueueSize(1)
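
      To confirm that no stale frames slip through after a switch, you can also print each frame's sequence number and device timestamp (a small sketch; getSequenceNum() and getTimestamp() are standard ImgFrame getters):

      frame_msg = cam1a.get()                     # or cam1b, depending on current_state
      print(frame_msg.getSequenceNum(), frame_msg.getTimestamp())
      cv2.imshow("Frame", frame_msg.getCvFrame())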

      Thanks,
      Jaka