Hi there,

I have two IMX577 cameras on an OAK-FFC 4P board, and I'm running the new DepthAI v3 SDK.

I want to run both cameras at max resolution (12MP), but I get:

Camera(1) - Out of memory while creating pool for source frames. Number of frames: 3 each with size: 15412800B

This does not happen when I run a single camera. (For reference, 15412800 B matches a 4056 × 3040 RAW10 frame at 1.25 bytes per pixel, so a pool of 3 source frames per camera is roughly 46 MB.)

My application requires both cameras to run at 12MP because, while streaming at low resolution, I also need to acquire high-resolution snapshots (I do that with a Script node; see the sketch after the pipeline code below).
So I guess a possible solution would be to reduce the camera frame pool size, but I could not find out how to do that. Any suggestions?

For reference, here is what the pipeline looks like:

import depthai as dai

# x, y: downscaled output size for the live stream (example values; not specified in the original post)
x, y = 1280, 720

oak_pipeline_v3 = dai.Pipeline()

# Two 12MP cameras (IMX577) providing full-resolution outputs
cam1 = oak_pipeline_v3.create(dai.node.Camera).build(
    dai.CameraBoardSocket.CAM_A,
    sensorFps=30
)
cam2 = oak_pipeline_v3.create(dai.node.Camera).build(
    dai.CameraBoardSocket.CAM_B,
    sensorFps=30
)
stream_highest_res1 = cam1.requestFullResolutionOutput(useHighestResolution=True)
stream_highest_res2 = cam2.requestFullResolutionOutput(useHighestResolution=True)

# Downscale camera 1 to (x, y) NV12 for the low-res stream
imgManip1 = oak_pipeline_v3.create(dai.node.ImageManip)
stream_highest_res1.link(imgManip1.inputImage)
imgManip1.setNumFramesPool(2)
imgManip1.initialConfig.setOutputSize(x, y)
imgManip1.initialConfig.setFrameType(dai.ImgFrame.Type.NV12)
imgManip1.setMaxOutputFrameSize(x * y * 3)
downscaled_res_q1 = imgManip1.out.createOutputQueue(1)

# Same downscale path for camera 2
imgManip2 = oak_pipeline_v3.create(dai.node.ImageManip)
stream_highest_res2.link(imgManip2.inputImage)
imgManip2.setNumFramesPool(2)
imgManip2.initialConfig.setOutputSize(x, y)
imgManip2.initialConfig.setFrameType(dai.ImgFrame.Type.NV12)
imgManip2.setMaxOutputFrameSize(x * y * 3)
downscaled_res_q2 = imgManip2.out.createOutputQueue(1)

oak_pipeline_v3.start()
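
For completeness, the Script-based snapshot path mentioned above could look roughly like this. This is only a sketch: the io names (frames, trigger, snapshot) are illustrative, and it assumes the v3 createInputQueue/createOutputQueue API.

# Sketch: forward a full-resolution frame downstream only when a trigger message arrives
script = oak_pipeline_v3.create(dai.node.Script)
script.setScript("""
while True:
    frame = node.io['frames'].get()        # newest 12MP frame
    trig = node.io['trigger'].tryGet()     # non-blocking check for a snapshot request
    if trig is not None:
        node.io['snapshot'].send(frame)    # emit the high-res frame only on demand
""")
stream_highest_res1.link(script.inputs['frames'])
script.inputs['frames'].setBlocking(False)   # keep only the latest frame queued
script.inputs['frames'].setMaxSize(1)
trigger_q = script.inputs['trigger'].createInputQueue()
snapshot_q = script.outputs['snapshot'].createOutputQueue(1)

# At runtime: trigger_q.send(dai.Buffer()) requests a snapshot, snapshot_q.get() returns it.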

    You can still use the ColorCamera node in V3. It allows you to use the setNumFramesPool option.
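
    For reference, the five pool arguments map one-to-one to the node's outputs; a minimal sketch:

    cam = pipeline.create(dai.node.ColorCamera)
    # setNumFramesPool(raw, isp, preview, video, still)
    cam.setNumFramesPool(2, 2, 2, 2, 2)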

    Thanks
    Jaka

    Hi there, thank you for the suggestion - indeed I could get it to work with ColorCamera.

    Now I'm hitting another bottleneck. With two 12MP cameras running, I see a noticeable drop in FPS and an increase in latency. I'm wondering where this comes from and whether it can be fixed.

    Some benchmarks here:

    1 camera -> 30fps measured, 57ms latency
    2 cameras -> 22fps measured, 69ms latency

    And here is the pipeline:

    import depthai as dai

    FPS = 30
    device = dai.Device()
    with dai.Pipeline(device) as pipeline:
        # Camera 1: 12MP ISP output, reduced frame pools
        cam1 = pipeline.create(dai.node.ColorCamera)
        cam1.setBoardSocket(dai.CameraBoardSocket.CAM_A)
        cam1.setFps(FPS)
        cam1.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
        cam1.setNumFramesPool(2, 2, 2, 2, 2)  # raw, isp, preview, video, still
        stream_highest_res1 = cam1.isp

        # Downscale camera 1 for the low-res stream
        imgManip1 = pipeline.create(dai.node.ImageManip)
        stream_highest_res1.link(imgManip1.inputImage)
        imgManip1.initialConfig.setOutputSize(4048//4, 3040//4)
        imgManip1.setMaxOutputFrameSize(4048//4 * 3040//4 * 3)
        imgManip1.setNumFramesPool(2)

        # Camera 2: same configuration on the second socket
        cam2 = pipeline.create(dai.node.ColorCamera)
        cam2.setBoardSocket(dai.CameraBoardSocket.CAM_D)
        cam2.setFps(FPS)
        cam2.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
        cam2.setNumFramesPool(2, 2, 2, 2, 2)  # raw, isp, preview, video, still
        stream_highest_res2 = cam2.isp

        imgManip2 = pipeline.create(dai.node.ImageManip)
        stream_highest_res2.link(imgManip2.inputImage)
        imgManip2.initialConfig.setOutputSize(4048//4, 3040//4)
        imgManip2.setMaxOutputFrameSize(4048//4 * 3040//4 * 3)
        imgManip2.setNumFramesPool(2)

        # BenchmarkIn nodes measure the rate of each downscaled stream
        benchmarkIn1 = pipeline.create(dai.node.BenchmarkIn)
        benchmarkIn1.sendReportEveryNMessages(100)
        imgManip1.out.link(benchmarkIn1.input)
        outputQueue1 = benchmarkIn1.report.createOutputQueue()

        benchmarkIn2 = pipeline.create(dai.node.BenchmarkIn)
        benchmarkIn2.sendReportEveryNMessages(100)
        imgManip2.out.link(benchmarkIn2.input)
        outputQueue2 = benchmarkIn2.report.createOutputQueue()

        # Start the pipeline and keep pulling benchmark reports
        pipeline.start()
        while pipeline.isRunning():
            benchmarkReport1 = outputQueue1.get()
            benchmarkReport2 = outputQueue2.get()
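
    The loop above only pulls the reports; a sketch of printing them, assuming the report message exposes an fps field as in the v3 benchmark examples:

        # inside the while loop (sketch)
        print("cam1:", benchmarkReport1.fps, "FPS")
        print("cam2:", benchmarkReport2.fps, "FPS")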

    Maybe it's also useful if I describe what the use case actually is, in case there is another way to do it. I need to stream one camera at a time at 30 fps at roughly HD quality. Then I need to be able to quickly switch the streaming source to the other camera. On top of that, I need to be able to retrieve a 12MP snapshot from the currently active camera.

    Given these requirements, I would not necessarily need to stream from both cameras at the same time, nor stream 12MP constantly, but I could find no other way to do it. Any suggestions?

      If you need 12MP images on demand, then you have to set up the sensors at 12MP resolution.
      ColorCamera has a still output, which you can trigger by sending a CameraControl with ctrl.setCaptureStill(True) to the node's inputControl at runtime. Otherwise, it's best to output the stream via .video and set its size to HD.
      Also, you likely don't need the frame pool for preview, so you can use (2, 2, 0, 2, 1).
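
      A minimal sketch of that setup, assuming the v3 input/output queue API (untested):

      # One ColorCamera: HD stream from .video, 12MP still on demand
      cam = pipeline.create(dai.node.ColorCamera)
      cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
      cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
      cam.setVideoSize(1920, 1080)          # note: .video crops from the ISP output; use setIspScale to downscale instead
      cam.setNumFramesPool(2, 2, 0, 2, 1)   # raw, isp, preview, video, still

      video_q = cam.video.createOutputQueue(1)
      still_q = cam.still.createOutputQueue(1)
      control_q = cam.inputControl.createInputQueue()

      pipeline.start()
      # ... at runtime, to grab a 12MP snapshot from the active camera:
      ctrl = dai.CameraControl()
      ctrl.setCaptureStill(True)
      control_q.send(ctrl)
      snapshot = still_q.get()              # full-resolution ImgFrame

      Switching the streamed source could then be handled on the host by simply reading from the other camera's video queue.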

      Thanks,
      Jaka