chrisc

  • May 25, 2022
  • Joined May 24, 2022
  • erik Thanks for pointing me in the right direction! I tried the branch and was able to create an ImageManip node to resize the depth frame down to something that will fit within USB2 bandwidth. Now I can have all three of the things in the thread title!

    I also just read that StereoDepth nodes have a setOutputSize method that resizes the depth frame, but only when doing RGB-depth alignment. It almost seems to have been made specifically for my kind of use case, though I missed it at first. You can even use it with the main branch.

  • Hi,

    I'm using DepthAI cameras in an application that requires a high-resolution (>= 1080p) RGB video stream but can make do with low-resolution depth frames (640x400 is plenty). Essentially, the objects we're detecting are large enough that they're guaranteed to appear clearly in low-resolution depth frames. Due to cabling shenanigans, the system must work with USB2.

    In principle, this should be possible. USB2 has enough bandwidth for MJPEG-compressed 1080p RGB frames and uncompressed 640x400 depth frames at 30 fps, with some bandwidth to spare. But I haven't been able to get this to work with RGB-depth alignment.

    The problem I'm having is that calling setDepthAlign(dai.CameraBoardSocket.RGB) seems to force the depth frames to the same resolution as the RGB frames. So either I'm forced to use low-resolution RGB frames, or I run out of USB2 bandwidth trying to stream high-resolution depth frames. And the video encoder can't compress the depth frame, because its RAW16 format isn't supported; since I'm using subpixel mode, the same goes for the disparity frame.

    Am I missing something? Is there a way to enable RGB-depth alignment, but downscale the depth frame?
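
    For reference, the bandwidth arithmetic above can be sanity-checked with a quick back-of-envelope sketch. The specific figures here are my assumptions, not from the thread: RAW16 depth at 2 bytes per pixel, roughly 35 MiB/s of usable USB2 throughput, and an MJPEG 1080p budget of about 12 MiB/s:

```python
# Back-of-envelope check of the USB2 bandwidth claim. Assumed figures:
# RAW16 depth is 2 bytes/pixel, usable USB2 throughput is ~35 MiB/s,
# and the MJPEG-compressed 1080p stream is budgeted at ~12 MiB/s.
USB2_USABLE = 35 * 1024 * 1024  # bytes/s, rough practical figure

def raw_stream_rate(width, height, bytes_per_pixel, fps):
    """Bandwidth of an uncompressed video stream in bytes/second."""
    return width * height * bytes_per_pixel * fps

depth_640 = raw_stream_rate(640, 400, 2, 30)     # ~14.6 MiB/s
mjpeg_1080p = 12 * 1024 * 1024                   # assumed MJPEG budget
fits = depth_640 + mjpeg_1080p < USB2_USABLE     # True: fits with headroom

# Full-resolution RAW16 depth alone (~118.7 MiB/s) is roughly 3x over
# the USB2 budget, which is why aligned-but-unscaled depth can't stream.
depth_1080 = raw_stream_rate(1920, 1080, 2, 30)
```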

    • erik replied to this.