I'm using DepthAI cameras in an application that requires a high-resolution (>= 1080p) RGB video stream but can make do with low-resolution depth frames (640x400 is plenty). Essentially, the objects we're detecting are large enough that they're guaranteed to appear clearly in low-resolution depth frames. Due to cabling shenanigans, the system must work over USB2.
In principle, this should be possible. USB2 has enough bandwidth for MJPEG-compressed 1080p RGB frames and uncompressed 640x400 depth frames at 30 fps, with some bandwidth to spare. But I haven't been able to get this to work with RGB-depth alignment.
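For reference, here's the back-of-envelope arithmetic I'm relying on. The ~30 MB/s figure for practical USB2 throughput and the ~4 MB/s MJPEG estimate are my assumptions, not measured numbers:

```python
# Rough bandwidth estimate for the streams described above.
USB2_PRACTICAL_BYTES_PER_S = 30e6  # assumed practical USB2 throughput
FPS = 30
BYTES_PER_DEPTH_PIXEL = 2  # RAW16 depth

depth_low = 640 * 400 * BYTES_PER_DEPTH_PIXEL * FPS     # uncompressed 640x400 depth
depth_high = 1920 * 1080 * BYTES_PER_DEPTH_PIXEL * FPS  # uncompressed 1080p depth
mjpeg_rgb = 4e6  # rough guess for MJPEG-compressed 1080p30

print(f"640x400 depth: {depth_low / 1e6:.1f} MB/s")  # ~15.4 MB/s
print(f"1080p depth:   {depth_high / 1e6:.1f} MB/s")  # ~124.4 MB/s

assert mjpeg_rgb + depth_low < USB2_PRACTICAL_BYTES_PER_S   # fits with room to spare
assert mjpeg_rgb + depth_high > USB2_PRACTICAL_BYTES_PER_S  # blows the budget
```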
The problem I am having is that calling setDepthAlign(dai.CameraBoardSocket.RGB) seems to force the RGB and depth frames to have the same resolution. So either I am forced to use low-resolution RGB frames, or I run out of USB2 bandwidth trying to stream high-resolution depth frames. Furthermore, the video encoder can't compress the depth frames because their RAW16 format isn't supported, and since I'm using subpixel mode, the encoder can't compress the disparity frames either.
Am I missing something? Is there a way to enable RGB-depth alignment but downscale the depth frames?