Yes, of course, but what if I estimate the disparity image myself? Can I feed that into the point cloud node?
I also saw that, when I crop the image myself, I can pass that information to "getCameraIntrinsics" and receive the intrinsics for the cropped image. Is that correct?
-> topLeftPixelId and bottomRightPixelId are what I need
std::vector<std::vector<float>> getCameraIntrinsics(
    CameraBoardSocket cameraId,
    int resizeWidth = -1,
    int resizeHeight = -1,
    Point2f topLeftPixelId = Point2f(),
    Point2f bottomRightPixelId = Point2f(),
    bool keepAspectRatio = true) const;