• DepthAI
  • When and where are camera calibrations and undistortion applied

Hi,

I noticed the setCameraCalibration method available for Pipeline objects, but I am hoping for clarification on when this calibration data is used. I saw in another post that it is used by StereoDepth nodes, but was wondering if it is used anywhere else.

  1. Is rectification/undistortion applied to the isp/still/video outputs of ColorCamera nodes, and/or to the out stream of MonoCamera nodes?
  2. If I wanted to apply a different (un)distortion model, e.g. the double sphere model (https://arxiv.org/pdf/1807.08957.pdf) is there a node that would let me do that on the processing board? Or do I have to apply the relevant model to the isp/still/video stream on the host?

Thanks for your assistance!

Cheers,
Stewart Jamieson


    Hi WHOI-Stewart-Jamieson ,

    1. No, it's not applied to those outputs, but we have an example of undistortion here. Note that the StereoDepth node does perform undistortion on the mono frames automatically.
    2. I think you could do this with the warp support added to the ImageManip node.

    Thoughts?
    Thanks, Erik

    Hi @erik,

    Thank you, the warp support seems perfectly suited for rectifying the RGB stream with a new camera model -- it looks like I just need to compute a mesh based on the double sphere model, and the ImageManip node will handle the rest.
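For reference, here is a rough host-side sketch of how I might compute such a mesh from the double sphere model's forward projection (equations from the paper linked above). All parameter values and function names below are placeholders of my own, not DepthAI API:

```python
import numpy as np

def project_double_sphere(points, fx, fy, cx, cy, xi, alpha):
    """Project 3D points (N, 3) to pixels with the double sphere model."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    d1 = np.sqrt(x**2 + y**2 + z**2)
    d2 = np.sqrt(x**2 + y**2 + (xi * d1 + z)**2)
    denom = alpha * d2 + (1.0 - alpha) * (xi * d1 + z)
    u = fx * x / denom + cx
    v = fy * y / denom + cy
    return np.stack([u, v], axis=1)

def build_warp_mesh(width, height, step, fx, fy, cx, cy, xi, alpha):
    """For each mesh node of the undistorted output image, find the
    corresponding source pixel in the distorted input: back-project the
    output pixel through an ideal pinhole, then project the ray with the
    double sphere model."""
    us, vs = np.meshgrid(np.arange(0, width + 1, step),
                         np.arange(0, height + 1, step))
    # Unit-depth rays through the ideal (undistorted) pinhole camera.
    rays = np.stack([(us.ravel() - cx) / fx,
                     (vs.ravel() - cy) / fy,
                     np.ones(us.size)], axis=1)
    return project_double_sphere(rays, fx, fy, cx, cy, xi, alpha)
```

If I understand the warp support correctly, these (x, y) source coordinates would then just need to be resampled into whatever mesh layout the ImageManip warp expects. (Sanity check: with xi = 0 and alpha = 0 the model reduces to a plain pinhole, and the mesh becomes the identity mapping.)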

To apply the same type of calibration to the stereo images before disparity computation, it looks like I can similarly use the loadMeshData/loadMeshFiles methods of the StereoDepth node, so that the depth image is properly aligned with the rectified RGB stream?

    Cheers,
    Stewart

      Hi WHOI-Stewart-Jamieson ,
      Yep, that should be the case! Also note that besides undistortion, these mesh files also need to take the rectification of the mono cameras into account, as the StereoDepth node does both (undistortion + rectification) in the same step.
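      As a host-side sketch of what "both in the same step" means, the mesh for each mono camera can map each rectified output pixel back to a distorted input pixel by composing the rectifying rotation with the camera's distortion model. The toy radial distortion and all names/values here are illustrative, not the actual DepthAI implementation:

```python
import numpy as np

def radial_distort_project(rays, K, k1, k2):
    """Pinhole projection with a simple two-coefficient radial distortion
    (a stand-in for the real camera model)."""
    x = rays[:, 0] / rays[:, 2]
    y = rays[:, 1] / rays[:, 2]
    r2 = x**2 + y**2
    scale = 1.0 + k1 * r2 + k2 * r2**2
    u = K[0, 0] * x * scale + K[0, 2]
    v = K[1, 1] * y * scale + K[1, 2]
    return np.stack([u, v], axis=1)

def rect_undistort_mesh(width, height, step, K_rect, K, R, k1, k2):
    """Combined mesh: rectified output pixel -> distorted input pixel.
    R is the rectifying rotation for this camera (e.g. R1/R2 from a
    stereo rectification)."""
    us, vs = np.meshgrid(np.arange(0, width + 1, step),
                         np.arange(0, height + 1, step))
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(us.size)], axis=1)
    rays = pix @ np.linalg.inv(K_rect).T   # back-project rectified pixels
    rays = rays @ R                        # row-vector form of R^T @ ray: undo rectification
    return radial_distort_project(rays, K, k1, k2)
```

      The resulting per-node source coordinates would then need to be serialized into the binary layout that loadMeshData/loadMeshFiles expect (not shown here), one mesh per mono camera. With an identity rotation and zero distortion the mesh is the identity mapping, which is a handy sanity check.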
      Thanks, Erik