I am converting 640x400 depth frames to point clouds, but I have discovered that the camera principal point values (cx, cy) returned by CalibrationHandler are around 640 and 400, near the lower-right corner rather than the center of the frame. I have had to halve them, or pass the center as the topLeft/bottomRight parameters to getCameraIntrinsics(), to get correct Cartesian point cloud coordinates. What if I were to set the frame size to 1080x720? What principal point values should I use then? Can someone point me to documentation that explains how this works?
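For reference, this is roughly the back-projection I am doing. It is a minimal NumPy sketch rather than my exact code; it assumes the depth frame is in millimetres and that fx, fy, cx and cy are expressed at the same resolution as the frame (640x400 here), which is why I expect cx to be near 320 and cy near 200:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a (H, W) depth image into an (N, 3) array of XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx   # cx ~ w/2, i.e. ~320 for a 640x400 frame
    y = (v - cy) * z / fy   # cy ~ h/2, i.e. ~200 for a 640x400 frame
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth
```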

  • erik replied to this.

    Hello edj,
    We have a pointcloud demo as well; see the code here, which retrieves the intrinsics needed for Open3D pointcloud visualization.
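    Roughly, the demo feeds the device intrinsics into Open3D along these lines (a simplified sketch, not the exact demo code; it assumes a uint16 depth frame in millimetres and fx, fy, cx, cy read from the CalibrationHandler at the same resolution as the frame):

    ```python
    import numpy as np
    import open3d as o3d

    def visualize_depth(depth_frame: np.ndarray, fx: float, fy: float, cx: float, cy: float):
        h, w = depth_frame.shape
        intrinsic = o3d.camera.PinholeCameraIntrinsic(w, h, fx, fy, cx, cy)
        depth_o3d = o3d.geometry.Image(depth_frame)
        # depth_scale=1000.0 converts millimetre depth values to metres
        pcd = o3d.geometry.PointCloud.create_from_depth_image(
            depth_o3d, intrinsic, depth_scale=1000.0, depth_trunc=5.0)
        o3d.visualization.draw_geometries([pcd])
    ```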
    Thanks, Erik

    Yes, I tried that, but was unable to obtain correct results. I have instead written my own depth-to-point-cloud conversion, but I only get correct values if I divide cx and cy in half. I am trying to find documentation that would help me understand the reason for this and how to generalize it to the other capture resolutions.
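    For what it's worth, this is the kind of thing I expected to work, passing the frame size so the returned intrinsics are already scaled and no halving is needed (just a sketch; I may be misreading how the resize parameters are meant to be used):

    ```python
    import depthai as dai

    with dai.Device() as device:
        calib = device.readCalibration()
        # Request intrinsics scaled to the depth frame resolution (640x400 here);
        # I expect cx and cy to then land near the frame centre (~320, ~200).
        # Using the RIGHT mono socket here; adjust for your setup.
        M = calib.getCameraIntrinsics(dai.CameraBoardSocket.RIGHT, 640, 400)
        fx, fy = M[0][0], M[1][1]
        cx, cy = M[0][2], M[1][2]
    ```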

    • erik replied to this.

      edj What was incorrect about these results? Did the pointcloud not look as expected?