Hi Devs,

I am testing disparity estimation with another framework, and for that I receive rectified images from my OAK-D Pro W camera. My own disparity calculation looks fine, but when I use the camera intrinsics to calculate a point cloud from it, the point cloud looks distorted. I assume this is because the rectified images are zoomed and shifted.

Stereo Depth Video (luxonis.com)

Is it possible to get the values of that zoom and shift so I can add them to my point cloud calculation, or do I need to continue with uncropped images with alpha = 1.0? That would make it even harder for me, because I would need to do the crop myself while taking care of the alignment of the stereo pair. Is there an easy way to solve this?
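For context, this is roughly the back-projection I do on the host (just a minimal sketch; the disparity format and variable names are simplified, and fx/fy/cx/cy are the intrinsics of the rectified images):

    #include <opencv2/opencv.hpp>
    #include <vector>

    struct Point3 { float x, y, z; };

    // Back-project a rectified disparity map into a point cloud.
    // fx, fy, cx, cy are the intrinsics of the rectified image,
    // baselineMm is the stereo baseline in millimetres.
    std::vector<Point3> disparityToPointCloud(const cv::Mat& disparity,  // CV_32F, pixels
                                              float fx, float fy, float cx, float cy,
                                              float baselineMm)
    {
        std::vector<Point3> cloud;
        for (int v = 0; v < disparity.rows; ++v) {
            for (int u = 0; u < disparity.cols; ++u) {
                float d = disparity.at<float>(v, u);
                if (d <= 0.0f) continue;            // skip invalid disparities
                float z = fx * baselineMm / d;      // depth in mm
                cloud.push_back({(u - cx) * z / fx, (v - cy) * z / fy, z});
            }
        }
        return cloud;
    }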

Thank you for your help

Hi @rbn23 ,
Have you tried using depthai's PointCloud node instead of calculating the point cloud yourself (on the host)? I think depthai tracks intrinsics/transformations, so I'd hope the point cloud you receive from that node would be correct.
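Something along these lines should work (just a sketch from memory, not tested; the node and stream names follow the depthai pointcloud example):

    #include <depthai/depthai.hpp>

    dai::Pipeline pipeline;
    auto monoLeft   = pipeline.create<dai::node::MonoCamera>();
    auto monoRight  = pipeline.create<dai::node::MonoCamera>();
    auto stereo     = pipeline.create<dai::node::StereoDepth>();
    auto pointcloud = pipeline.create<dai::node::PointCloud>();
    auto xout       = pipeline.create<dai::node::XLinkOut>();
    xout->setStreamName("pcl");

    monoLeft->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    monoRight->setBoardSocket(dai::CameraBoardSocket::CAM_C);

    // Depth and the point cloud are both computed on-device, so the
    // rectification zoom/shift is handled internally.
    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);
    stereo->depth.link(pointcloud->inputDepth);
    pointcloud->outputPointCloud.link(xout->input);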
BR, Erik

Yes of course, but what if I estimate the disparity image myself? Can I feed that into the point cloud node?

I also saw that, when I crop the image myself, I can pass that information to getCameraIntrinsics and receive the intrinsics for the cropped image. Is that correct?

-> topLeftPixelId and bottomRightPixelId are what I need

std::vector<std::vector<float>> getCameraIntrinsics(CameraBoardSocket cameraId, int resizeWidth = -1, int resizeHeight = -1, Point2f topLeftPixelId = Point2f(), Point2f bottomRightPixelId = Point2f(), bool keepAspectRatio = true) const;
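If that is correct, I would call it roughly like this for a crop (the pixel coordinates below are just made-up example values, not my real crop):

    // Intrinsics for a 640x400 window cut out of the full image,
    // assuming the crop runs from (320, 160) to (960, 560).
    auto croppedIntrinsics = calibrationData.getCameraIntrinsics(
        dai::CameraBoardSocket::CAM_B,
        -1, -1,                     // no resize
        dai::Point2f(320, 160),     // topLeftPixelId
        dai::Point2f(960, 560));    // bottomRightPixelId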

Hi @rbn23 , if you are going outside the depthai ecosystem (estimating the disparity image on the host), then you wouldn't be able to leverage depthai-related features such as point cloud calculation, and you'd have to do everything yourself.

I believe that would work, yes.

OK, thanks. I will try it that way.

I need to ask again:

As far as I know: depth = baseline * focalLength / disparity

My camera intrinsics say fx = 566.391, and the baseline is 75 mm.

When I apply that to a point with disparity = 16, I get depth = 2654.95 mm.

But when I read the depth image from the depthai pipeline, the depth for exactly that point is 2681 mm.

That means the depthai pipeline uses fx = 2681 * 16 / 75 = 571.947.

When I try another point I get e.g. fx = 933 * 47 / 75 = 584.68.

That means some additional steps happen in the pipeline. Is this open source, so I can see what's happening there?

    rbn23

    • The focal length fx is resolution dependent. You need to specify the dimensions of the image.
    • The baseline is read from the calibration and is not necessarily 7.5 cm -> in my case it was calibrated to 7.45 cm.

    Thanks,
    Jaka

    Thanks, that was the trick. I had to read the calibration with the image size and also disable all the filters. Now I can reproduce the point cloud:

    calibrationData.getCameraIntrinsics(dai::CameraBoardSocket::CAM_B, std::tuple<int, int>(1280, 720));
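
    For completeness, this is roughly how it fits together now (a sketch; I assume getBaselineDistance returns the calibrated baseline in centimetres, so I convert to millimetres):

    // Resolution-matched intrinsics for the 1280x720 rectified stream
    auto intrinsics = calibrationData.getCameraIntrinsics(
        dai::CameraBoardSocket::CAM_B, std::tuple<int, int>(1280, 720));
    float fx = intrinsics[0][0];

    // Calibrated baseline of the stereo pair (assumed to be returned in cm)
    float baselineMm = 10.0f * calibrationData.getBaselineDistance(
        dai::CameraBoardSocket::CAM_C, dai::CameraBoardSocket::CAM_B);

    // depth [mm] = fx [px] * baseline [mm] / disparity [px]
    float disparityPx = 16.0f;   // disparity at the pixel of interest
    float depthMm = fx * baselineMm / disparityPx;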