• DepthAI
  • Using `setRectifyMirrorFrame` and `setDepthAlign` to get left-aligned depth

Hi,

Our downstream processing expects that depth images (e.g., coming from a StereoDepth node) are aligned with the left camera stream, but it looks like DepthAI nodes (at least StereoDepth) default to the right.

The two functions which seem to determine left vs. right alignment are setRectifyMirrorFrame and setDepthAlign. Is it sufficient to simply call setDepthAlign(dai::CameraBoardSocket::LEFT), or does aligning with the left camera stream also require changing the mirror settings?
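For concreteness, here's the minimal pipeline I have in mind, assuming the standard depthai-core C++ API and touching only setDepthAlign (no mirror-related calls):

```cpp
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto monoLeft  = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto stereo    = pipeline.create<dai::node::StereoDepth>();
    auto xoutDepth = pipeline.create<dai::node::XLinkOut>();

    monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
    monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
    monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);
    monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);

    // Align depth to the LEFT camera instead of the default (right)
    stereo->setDepthAlign(dai::CameraBoardSocket::LEFT);

    xoutDepth->setStreamName("depth");
    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);
    stereo->depth.link(xoutDepth->input);

    dai::Device device(pipeline);
    auto depthQueue = device.getOutputQueue("depth", 4, false);
    auto depthFrame = depthQueue->get<dai::ImgFrame>();  // expected: depth in the left camera's frame
    return 0;
}
```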

As an aside, I'm using rosBridge, and it seems clear that I should then use a left-aligned image converter and camera info for the BridgePublisher.
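On the ROS side, this is my rough plan, loosely adapted from the depthai-ros (ROS1) stereo examples; the exact converter/publisher constructor arguments are my assumption and may vary between depthai-ros versions:

```cpp
#include <functional>
#include <memory>
#include <string>

#include <depthai/depthai.hpp>
#include <depthai_bridge/BridgePublisher.hpp>
#include <depthai_bridge/ImageConverter.hpp>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

// Assumes ros::init has already been called and the "depth" queue comes from the
// pipeline above (left-aligned depth).
void publishLeftAlignedDepth(dai::Device& device,
                             std::shared_ptr<dai::DataOutputQueue> depthQueue,
                             ros::NodeHandle& pnh,
                             const std::string& tfPrefix) {
    auto calibHandler = device.readCalibration();

    // Converter and CameraInfo tied to the LEFT socket, since depth is left-aligned
    dai::rosBridge::ImageConverter leftConverter(tfPrefix + "_left_camera_optical_frame", true);
    auto leftCameraInfo = leftConverter.calibrationToCameraInfo(
        calibHandler, dai::CameraBoardSocket::LEFT, 1280, 720);

    dai::rosBridge::BridgePublisher<sensor_msgs::Image, dai::ImgFrame> depthPublish(
        depthQueue, pnh, std::string("stereo/depth"),
        std::bind(&dai::rosBridge::ImageConverter::toRosMsg,
                  &leftConverter, std::placeholders::_1, std::placeholders::_2),
        30, leftCameraInfo, "stereo");

    depthPublish.addPublisherCallback();
    ros::spin();  // keep the converter/publisher alive while publishing
}
```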

Cheers,
Stewart Jamieson

Somewhat related, I've also observed that setDepthAlign seems to have the unexpected side effect of setting the output depth image size to match the target socket's resolution. If this call is followed by a call to setOutputSize, does it end up internally scaling the image twice?

E.g. suppose that with a 4K RGB config and 720p stereo, I call setDepthAlign(dai::CameraBoardSocket::RGB), resulting in 4K stereo depth output, but then call setOutputSize(1280, 720) to get back the "original" depth computed from the 720p stereo pair. Will it internally upscale the disparity to 4K and then downscale back to 720p, or just skip scaling altogether?
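In code, the configuration I'm describing would look roughly like this (continuing the pipeline sketch above; the resolution enum names are my assumption from depthai-core):

```cpp
auto camRgb = pipeline.create<dai::node::ColorCamera>();
camRgb->setBoardSocket(dai::CameraBoardSocket::RGB);
camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_4_K);

// Aligning to the 4K RGB target makes the depth output 4K...
stereo->setDepthAlign(dai::CameraBoardSocket::RGB);
// ...unless the output size is explicitly set back to the stereo resolution
stereo->setOutputSize(1280, 720);
```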

    Hi WHOI-Stewart-Jamieson,
    Setting only setDepthAlign will work; I think setRectifyMirrorFrame has been deprecated for a while now.
    Regarding scaling - asking firmware team now, as I'm not sure.
    Thanks, Erik

    WHOI-Stewart-Jamieson

    E.g. suppose that with a 4K RGB config and 720p stereo, I call setDepthAlign(dai::CameraBoardSocket::RGB), resulting in 4K stereo depth output, but then call setOutputSize(1280, 720) to get back the "original" depth computed from the 720p stereo pair. Will it internally upscale the disparity to 4K and then downscale back to 720p, or just skip scaling altogether?

    It won't scale it twice. First the depth image is center aligned, with no scaling, then warped (e.g. cv2.remap) to match the RGB camera FOV, which also includes scaling to the target output resolution.
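    To illustrate, here's a rough host-side analogue with OpenCV (not the actual firmware code, and the initial center-align step is omitted): the warp is a single remap into the target geometry and size, so the rescale happens only once.

    ```cpp
    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // depth: CV_16UC1 depth frame (e.g. 1280x720); mapX/mapY: CV_32FC1 warp maps at the
    // target (RGB or setOutputSize) resolution, derived from calibration (not shown here).
    cv::Mat alignDepthToTarget(const cv::Mat& depth, const cv::Mat& mapX, const cv::Mat& mapY) {
        cv::Mat aligned;
        // A single remap performs both the FOV warp and the rescale; nearest-neighbour
        // interpolation avoids blending depth values across object boundaries.
        cv::remap(depth, aligned, mapX, mapY, cv::INTER_NEAREST);
        return aligned;
    }
    ```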

      WHOI-Stewart-Jamieson

      It won't scale it twice. First the depth image is center aligned, with no scaling, then warped (e.g. cv2.remap) to match the RGB camera FOV, which also includes scaling to the target output resolution.
      E.g. from 1280x720 to width x height, where width and height are by default the RGB camera resolution, unless explicitly specified by setOutputSize.