• DepthAI
  • Wrong X,Y scale calculated from depth on host

Hello,

Currently, we are building and testing a program that calculates X and Y from the depth information, with the infrared pattern projector at 1200mA, based on the RGB depth alignment C++ example, using an OAK-D-Pro.
As shown in the figure below, 100mm is measured as 130mm.

How can I calculate the actual correct value?

The settings are:
Width = 1280
Height = 720
fx, fy, cx, cy = 807.091,807.091,652.369,358.778
static std::atomic<bool> extended_disparity{ true };
static std::atomic<bool> subpixel{ true };
static std::atomic<bool> lr_check{ true };

The calculation is:

                float fxb = fx * baseline;
                if (subpixel) {
                    fxb *= 8;   // subpixel disparity is scaled by 8 (3 fractional bits)
                    for (int j = 0; j < height; j++) {
                        cv::Vec3b* rgb = frame[rgbWindowName].ptr<cv::Vec3b>(j);
                        ushort* dpth = frame[depthWindowName].ptr<ushort>(j);
                        for (int i = 0; i < width; i++) {
                            if (dpth[i] == 0) continue;                  // skip invalid pixels
                            cv::Vec3f xyz;
                            xyz[2] = fxb / (float)dpth[i];               // Z = fx * baseline * 8 / disparity
                            xyz[0] = ((float)i - cx) * xyz[2] / fx;      // pinhole back-projection
                            xyz[1] = ((float)j - cy) * xyz[2] / fy;
                            vxyz.push_back(xyz);
                            vrgb.push_back(rgb[i]);
                        }
                    }
                }
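
For reference, fx, fy, cx, cy and the baseline above are hard-coded; I believe they could also be read from the device calibration along these lines (untested sketch; the socket passed to getCameraIntrinsics should be the one the depth frame is aligned to):

    dai::CalibrationHandler calib = device.readCalibration();
    // 3x3 intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] for the given socket and resolution
    auto M = calib.getCameraIntrinsics(dai::CameraBoardSocket::RIGHT, width, height);
    float fxCal = M[0][0], fyCal = M[1][1];
    float cxCal = M[0][2], cyCal = M[1][2];
    // Baseline between the stereo pair (in centimetres, if I remember correctly)
    float baselineCal = calib.getBaselineDistance();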
---
Thanks in advance.

    Hi kfuji3,
    Regarding the calculation: you could look at the spatials-on-host demo to cross-check whether your calculations are correct. What's the distance to the ruler? What StereoDepth modes are you using?
    Thanks, Erik

      Hi erik
      Thank you so much for your quick response.

      The distance to the ruler is about 880mm.

      Stereo Depth modes are ...
      stereo->setDepthAlign(dai::CameraBoardSocket::RGB);

      // Create a node that will produce the depth map (using disparity output as it's easier to visualize depth this way)
      stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_DENSITY);
      // Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
      //stereo->initialConfig.setMedianFilter(dai::MedianFilter::KERNEL_7x7);
      stereo->initialConfig.setMedianFilter(dai::MedianFilter::KERNEL_5x5);
      //stereo->initialConfig.setBilateralFilterSigma(7);
      stereo->setLeftRightCheck(lr_check);
      stereo->setExtendedDisparity(extended_disparity);
      stereo->setSubpixel(subpixel);
      auto config = stereo->initialConfig.get();
      config.postProcessing.speckleFilter.enable = false;
      config.postProcessing.speckleFilter.speckleRange = 50;
      config.postProcessing.temporalFilter.enable = true;
      config.postProcessing.spatialFilter.enable = true;
      config.postProcessing.spatialFilter.holeFillingRadius = 2;
      config.postProcessing.spatialFilter.numIterations = 1;
      config.postProcessing.thresholdFilter.minRange = 400;
      config.postProcessing.thresholdFilter.maxRange = 2000;
      config.postProcessing.decimationFilter.decimationFactor = 1;
      stereo->initialConfig.set(config);

      The spatials-on-host demo uses the angle, so should I use the angle instead of the focal length?
      In calc.py ...

      def _calc_angle(self, frame, offset):
          return math.atan(math.tan(self.monoHFOV / 2.0) * offset / (frame.shape[1] / 2.0))
               :
          angle_x = self._calc_angle(depthFrame, bb_x_pos)
          angle_y = self._calc_angle(depthFrame, bb_y_pos)
          spatials = {
              'z': averageDepth,
              'x': averageDepth * math.tan(angle_x),
              'y': -averageDepth * math.tan(angle_y)
          }
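
      If I understand correctly, the two forms should give the same result when the focal length and the HFOV are consistent, because fx = (width / 2) / tan(HFOV / 2) and tan(atan(u)) = u. A quick standalone check I wrote (the HFOV value here is just a placeholder, not from my calibration):

      #include <cmath>
      #include <cstdio>

      int main() {
          const double PI = 3.14159265358979;
          double width = 1280.0;
          double hfov = 71.9 * PI / 180.0;          // example HFOV in radians (placeholder value)
          double tanHalf = std::tan(hfov / 2.0);
          double fx = (width / 2.0) / tanHalf;      // focal length implied by that HFOV
          double cx = width / 2.0;
          double Z = 880.0, i = 900.0;              // depth [mm] and pixel column, arbitrary

          double xIntrinsics = (i - cx) * Z / fx;                                     // intrinsics form
          double xHfov = Z * std::tan(std::atan(tanHalf * (i - cx) / (width / 2.0))); // calc.py form
          std::printf("%.3f vs %.3f\n", xIntrinsics, xHfov);                          // prints the same value twice
          return 0;
      }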

      I will try to use the angle.
      Thanks in advance.

        kfuji3

        I've tried using the angle, referring to calc.py.

        My program was changed at the following points:

        auto monoHFOVL = calibData.getFov(dai::CameraBoardSocket::LEFT) / 180.0 * 3.14159;
        auto tanFOVh = tan(monoHFOVL * 0.5);
        auto midW = width * 0.5;
        auto midH = height * 0.5;
            :
        //stereo->disparity.link(depthOut->input);
        stereo->depth.link(depthOut->input);
            :
                                xyz[2] = (float)dpth[i];
                                xyz[0] = (float)(dpth[i] * tan(atan(tanFOVh * (double)(i - midW) / midW)));
                                xyz[1] = (float)(dpth[i] * tan(atan(tanFOVh * (double)(j - midH) / midW)));

        The error decreased, but it still remains: 120mm measured against the actual 100mm.

        The distance to the ruler measures about 874mm.

        However, the actual distance from the front of the camera is about 840mm.

        What should I do?

        By the way,
        the sample program gen2-pointcloud/rgbd-pointcloud seems good.
        Should I use Open3D?

        What's the difference between the calculations based on the intrinsics, the HFOV, and Open3D (RGBD)?

        Thanks in advance.

        I would actually suggest using a small ROI (and averaging the depth inside the ROI) instead of a single point for your measurement, as the stereo depth map can be noisy, and this will help alleviate the problem. Regarding Open3D - it maps RGB + D into a pointcloud with the intrinsics, so it's the same approach.
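
        Something along these lines, just as a quick sketch (adapt it to your code):

        #include <algorithm>
        #include <opencv2/core.hpp>

        // Average the valid (non-zero) depth values inside a small ROI around (px, py),
        // instead of reading a single pixel, to reduce stereo noise.
        static float roiAverageDepth(const cv::Mat& depth, int px, int py, int radius = 5) {
            double sum = 0.0;
            int count = 0;
            for (int j = std::max(0, py - radius); j <= std::min(depth.rows - 1, py + radius); j++) {
                const ushort* row = depth.ptr<ushort>(j);
                for (int i = std::max(0, px - radius); i <= std::min(depth.cols - 1, px + radius); i++) {
                    if (row[i] == 0) continue;   // skip invalid depth
                    sum += row[i];
                    count++;
                }
            }
            return count > 0 ? (float)(sum / count) : 0.0f;
        }
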
        Thanks, Erik

          Hi erik
          Thank you so much for your reply.

          I have to apologize for my mistake: I was using the intrinsics of the RIGHT camera instead of the intrinsics of the RGB camera. I found it while carefully comparing my code with the code of gen2-pointcloud/rgbd-pointcloud. So I now get the correct X, Y values.

          dai::CalibrationHandler calibData = device.readCalibration();
          std::vector<std::vector<float>> intrinsics;
          int width = camRgb->getIspWidth(), height = camRgb->getIspHeight();
          intrinsics = calibData.getCameraIntrinsics(dai::CameraBoardSocket::RGB, width, height);
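
          For completeness, the back-projection itself is now essentially the same loop as in my first post, just with these RGB intrinsics and with stereo->depth linked, so dpth[i] is already in millimetres:

          float fx = intrinsics[0][0], fy = intrinsics[1][1];
          float cx = intrinsics[0][2], cy = intrinsics[1][2];

          for (int j = 0; j < height; j++) {
              const ushort* dpth = frame[depthWindowName].ptr<ushort>(j);
              for (int i = 0; i < width; i++) {
                  if (dpth[i] == 0) continue;              // invalid depth
                  cv::Vec3f xyz;
                  xyz[2] = (float)dpth[i];                 // Z in mm (stereo->depth output)
                  xyz[0] = ((float)i - cx) * xyz[2] / fx;
                  xyz[1] = ((float)j - cy) * xyz[2] / fy;
                  vxyz.push_back(xyz);
              }
          }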

          Thank you for your continued support.