Hello,

I am getting an image from the RGB central camera and the corresponding depth from the left/right cameras.
Using that, I am calculating some distances on the RGB image (in pixels), and I would like to convert those pixel distances to actual (world) distances using the depth information.
To do that, I am trying to get the focal length of the OAK-D camera.
I have the following questions:

  1. Where can I find the focal length of the camera? I looked at all the documentation and it is not mentioned anywhere.
  2. Does the focal length change when the camera focuses with the auto-focus? If so how can I get the focal length for each frame dynamically when capturing the images?

Any help you can offer will be much appreciated.
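For context, under a simple pinhole model the conversion asked about here looks roughly like the sketch below. This is a hedged approximation, not a DepthAI API: `focal_px` is the focal length in pixels (fx from the intrinsics), and it assumes the measured segment is roughly fronto-parallel to the camera.

```python
def pixel_dist_to_world(pixel_dist, depth, focal_px):
    """Pinhole-model approximation: a length of `pixel_dist` pixels,
    observed at distance `depth` (any unit), spans roughly
    pixel_dist * depth / focal_px in the same unit as `depth`."""
    return pixel_dist * depth / focal_px

# e.g. 100 px at 1000 mm depth with fx = 500 px -> 200 mm
print(pixel_dist_to_world(100, 1000, 500))
```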

  • erik replied to this.

    Hello i2sCV, you can get the focal length from the camera intrinsics, demo here - focaLen = camIntrinsics[0][0].
    For the second question: I am actually not sure; I will ask the team about it internally.
    Thanks, Erik
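A minimal sketch of pulling fx out of the intrinsic matrix. The with-device usage is commented out and hypothetical (socket name and resolution arguments would need to match your setup); the helper itself just indexes the 3x3 matrix.

```python
def focal_length_px(intrinsics):
    # The 3x3 intrinsic matrix stores fx at [0][0] and fy at [1][1];
    # [0][2] and [1][2] are the principal point (cx, cy).
    return intrinsics[0][0]

# Hypothetical usage with a connected device:
# import depthai as dai
# with dai.Device() as device:
#     calib = device.readCalibration()
#     K = calib.getCameraIntrinsics(dai.CameraBoardSocket.RGB, 3840, 2160)
#     print(focal_length_px(K))
```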

    Dear Erik,

    Thank you very much for the reply.
    I found the focal length using the code you provided and converted the focal length given by the intrinsic matrix to a focal length in mm using the formula focal_length_mm = focaLen * (6.17 mm (sensor width) / 4056 (pixel width of the sensor)).
    Please let me know when you have any news regarding the autofocus and focal length issue.
    Thank you very much for your help!
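The conversion described above, written out as a small worked example. The 3000 px value is just an illustrative placeholder, not a calibrated number; the sensor figures are the ones quoted in the message.

```python
focal_px = 3000.0          # hypothetical fx from the intrinsic matrix
sensor_width_mm = 6.17     # sensor width quoted above
sensor_width_px = 4056     # pixel width of the sensor quoted above

focal_mm = focal_px * sensor_width_mm / sensor_width_px
# Equivalently: focal_mm = focal_px * pixel_pitch_mm,
# since pixel_pitch_mm = sensor_width_mm / sensor_width_px
print(focal_mm)
```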

    • erik replied to this.

      Hello i2sCV , here's the response I got from the team:

      There is a slight FOV change (and so a focal length change) depending on the lens position, but I think we haven't characterized it yet.
      It would be useful to have, also for RGB-depth alignment: recalculating the warp transform matrix when the lens position changes.

      a year later

      Hello @erik !! Is there any internal method to align the RGB with the depth image (array) and then crop the RGB so it displays the same elements as the depth one? I've found the method to align them:

      stereo.setLeftRightCheck(True)  
      stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

      but I cannot manage to extract exactly the portion covered by the depth frame, although I've tried different approaches.

      I'm not an expert in photogrammetry, so maybe it is much simpler than I think.

      Thanks for any help you can provide me.

      The purpose of this is to use the relative positions of different objects, detected by YOLO models in RGB images, in the depth array; but as the RGB takes a bigger FOV, the positions do not mirror between the original arrays.
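One server-side way to carry the YOLO detections over is to remap box coordinates, assuming the depth FOV corresponds to a centered fraction of the RGB frame. This is a sketch with made-up names: the per-axis crop fractions would have to be derived from the two cameras' actual FOVs, and lens distortion is ignored.

```python
def remap_bbox(bbox, crop_frac_x, crop_frac_y):
    """Map a bbox normalized to the full RGB frame into coordinates
    normalized to a centered crop covering crop_frac of each axis."""
    x0, y0, x1, y1 = bbox
    ox, oy = (1 - crop_frac_x) / 2, (1 - crop_frac_y) / 2
    clamp = lambda v: min(max(v, 0.0), 1.0)
    return (clamp((x0 - ox) / crop_frac_x), clamp((y0 - oy) / crop_frac_y),
            clamp((x1 - ox) / crop_frac_x), clamp((y1 - oy) / crop_frac_y))
```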

        Hi solysombra
        I think there is currently no easy way to do it other than using an ImageManip node to crop the RGB frame to the mono frame's size.

        Thanks,
        Jaka
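Since the frames in this thread end up being processed off-device anyway, a server-side equivalent of that crop can be done directly on the arrays with NumPy. A sketch; the crop fractions themselves still have to come from the two FOVs.

```python
import numpy as np

def center_crop(img, frac_w, frac_h):
    """Return the centered frac_w x frac_h portion of img (H x W [x C])."""
    h, w = img.shape[:2]
    cw, ch = int(round(w * frac_w)), int(round(h * frac_h))
    x0, y0 = (w - cw) // 2, (h - ch) // 2
    return img[y0:y0 + ch, x0:x0 + cw]
```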

          Hi jakaskerl !!

          Is it possible to center the depth image and the RGB image and then use the field-of-view angles (both horizontal and vertical) to extract the area through trigonometry? (I think it would be possible, but geometric maths are not my strong point.)
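For what it's worth, the trigonometry is fairly mild under a shared-optical-center assumption: along each axis, the matching crop fraction is the ratio of the half-FOV tangents. This is a sketch, not a calibrated mapping; lens distortion and the baseline offset between the cameras are ignored.

```python
import math

def centered_crop_fraction(fov_wide_deg, fov_narrow_deg):
    """Fraction of the wider-FOV image (along one axis) that covers
    the same angular extent as the narrower FOV (pinhole model)."""
    return (math.tan(math.radians(fov_narrow_deg / 2))
            / math.tan(math.radians(fov_wide_deg / 2)))
```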

            Hi solysombra
            Could you go into more detail on what you are trying to achieve?
            My current understanding is that since the FOV of the RGB camera is bigger, some parts (the outer edges) of the RGB frame have no depth information. You are trying to do https://docs.luxonis.com/projects/api/en/latest/samples/SpatialDetection/spatial_tiny_yolo/, but you are bothered by the edges of the RGB frame.
            Is this correct?

            Thanks,
            Jaka

              jakaskerl Yes, that's exactly what I'm trying to do, but I need to do it outside the OAK camera processors because of my project's characteristics: I'm saving both the RGB and the depth arrays on an online server, and that is where I run my whole AI framework (the whole program with database connections, multiple ML and DL frameworks, plots, etc.).

              We are working like this because we want to leave space on the camera for robot-positioning models (the camera is mounted on a mobile platform), and that is where faster inference is needed.

              Thanks for your help!!

              Jorge

              4 months later

              @i2sCV @erik where do you see the sensor width? I'm trying to compute the focal length in mm too. Currently using the OAK-D Wide Pro with the following properties: (height=2160, width=3840, hfov=1.3963297143066067, camera_intrinsics=array([2288.01269531, 2288.01269531, 1890.12475586, 1121.40661621]))

              @ShannonGovekar you can check the specs of the actual sensor - the IMX378 has a pixel size of 1.55µm x 1.55µm, while the OV9782 has 3µm x 3µm. Multiply by the number of pixels to get the sensor size.
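Working that out for the numbers quoted in this thread. A sketch: it assumes the full 3840 px capture width maps onto active sensor pixels of the stated pitch, which may differ slightly from the datasheet's active-area figure.

```python
pixel_pitch_um = 1.55            # IMX378 pixel size quoted above
width_px = 3840                  # capture width quoted above
sensor_width_mm = pixel_pitch_um * width_px / 1000.0   # ~5.95 mm

fx_px = 2288.01269531            # fx from the intrinsics quoted above
focal_mm = fx_px * pixel_pitch_um / 1000.0             # ~3.55 mm
print(sensor_width_mm, focal_mm)
```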