Hi Devs,
I am testing disparity estimation with another framework, and for that I receive rectified images from my OAK-D Pro W camera. My own disparity calculation looks good, but when I use the camera intrinsics to compute a point cloud from it, the point cloud looks distorted. I assume this is because the rectified images are zoomed and shifted.
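For reference, this is roughly how I build the point cloud at the moment (a simplified sketch, the function and variable names are just my own): depth from disparity via fx * baseline / disparity, then the usual pinhole back-projection with the original intrinsics, which is presumably exactly where the zoom/shift of the rectified images gets ignored.

```python
import numpy as np

def disparity_to_pointcloud(disparity, fx, fy, cx, cy, baseline_m):
    """Back-project a disparity map into 3D points with pinhole intrinsics."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0

    # depth from disparity: Z = fx * B / d
    z = np.zeros_like(disparity, dtype=np.float32)
    z[valid] = fx * baseline_m / disparity[valid]

    # pinhole back-projection with the *original* camera intrinsics -- my
    # suspicion is that this is where the rectification zoom/shift is lost
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)[valid]
```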
Stereo Depth Video (luxonis.com)
Is it possible to get the values of that zoom and shift so I can include them in my point cloud calculation, or do I need to work with uncropped images (alpha = 1.0)? That would make things even harder for me, because I would then have to do the cropping myself while keeping the stereo pair aligned. Is there an easy way to solve this?
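To show what I mean, my hope was to re-derive the rectified projection matrices from the on-device calibration with OpenCV, something like the sketch below. I am assuming here that the firmware rectification is equivalent to cv2.stereoRectify with some alpha (which may not hold, especially if the wide-FOV model uses a fisheye distortion model), and the socket names and resolution are just what I use on my setup, so please correct me if that assumption is wrong.

```python
import cv2
import numpy as np
import depthai as dai

W, H = 1280, 800  # resolution of my rectified streams (my assumption)

with dai.Device() as device:
    calib = device.readCalibration()
    M_l = np.array(calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_B, W, H))
    M_r = np.array(calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_C, W, H))
    d_l = np.array(calib.getDistortionCoefficients(dai.CameraBoardSocket.CAM_B))
    d_r = np.array(calib.getDistortionCoefficients(dai.CameraBoardSocket.CAM_C))
    ext = np.array(calib.getCameraExtrinsics(dai.CameraBoardSocket.CAM_B,
                                             dai.CameraBoardSocket.CAM_C))
    R, T = ext[:3, :3], ext[:3, 3] / 100.0  # translation is in cm -> metres

# P1/P2 would hold the rectified fx, fy, cx, cy (i.e. the "zoom and shift"),
# and -P2[0, 3] / P2[0, 0] the baseline for the back-projection above.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
    M_l, d_l, M_r, d_r, (W, H), R, T, alpha=0)
```

If something like this is the intended way to recover those values, or if the device exposes them directly, that would already answer my question.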
Thank you for your help