kfuji3

  • Apr 13, 2023
  • Hi erik
    Thank you so much for your reply.

    I apologize for my mistake: I was using the intrinsics of the Right camera instead of the intrinsics of the RGB camera. I found it while carefully comparing my code with the code of gen2-pointcloud/rgbd-pointcloud. So I now get the correct X, Y values.

    dai::CalibrationHandler calibData = device.readCalibration();
    std::vector<std::vector<float>> intrinsics;
    int width = camRgb->getIspWidth(), height = camRgb->getIspHeight();
    intrinsics = calibData.getCameraIntrinsics(dai::CameraBoardSocket::RGB, width, height);
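
    For reference, here is a minimal standalone sketch (the helper name and the OpenCV type are just for illustration) of how fx, fy, cx, cy can be read out of the 3x3 matrix returned by getCameraIntrinsics() and used to deproject a depth sample:

    #include <cstdint>
    #include <opencv2/core.hpp>
    #include <vector>

    // Assumed layout of the returned 3x3 intrinsic matrix:
    //   [ fx  0  cx ]
    //   [  0 fy  cy ]
    //   [  0  0   1 ]
    static cv::Vec3f deprojectPixel(const std::vector<std::vector<float>>& K,
                                    int u, int v, uint16_t depthMm) {
        const float fx = K[0][0], fy = K[1][1];
        const float cx = K[0][2], cy = K[1][2];
        const float Z = static_cast<float>(depthMm);   // depth output is already in mm
        return { (u - cx) * Z / fx, (v - cy) * Z / fy, Z };
    }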

    Thank you for your continued support.

  • kfuji3

    I've tried to use the angle-based calculation referred to in calc.py.

    I have changed my program at the following points:

    auto monoHFOVL = calibData.getFov(dai::CameraBoardSocket::LEFT) / 180.0 * 3.14159;
    auto tanFOVh = tan(monoHFOVL * 0.5);
    auto midW = width * 0.5;
    auto midH = height * 0.5;
        :
    //stereo->disparity.link(depthOut->input);
    stereo->depth.link(depthOut->input);
        :
                            xyz[2] = (float)dpth[i];
                            xyz[0] = (float)(dpth[i] * tan(atan(tanFOVh * (double)(i - midW) / midW)));
                            xyz[1] = (float)(dpth[i] * tan(atan(tanFOVh * (double)(j - midH) / midW)));
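
    A side note while porting this: since tan(atan(x)) == x, the HFOV-based formula reduces to the same pinhole model as the intrinsics-based one, with an effective focal length of (width/2)/tan(HFOV/2). A standalone sketch with made-up example numbers, just to show the equivalence:

    #include <cmath>
    #include <cstdio>

    int main() {
        // Assumed values, for illustration only (not my real calibration).
        double hfovDeg = 71.9;                        // horizontal FOV of the mono camera
        double width = 1280.0, i = 900.0, Z = 874.0;  // pixel column i, depth Z in mm
        double tanFOVh = std::tan(hfovDeg / 180.0 * 3.14159 * 0.5);
        double midW = width * 0.5;

        // Angle-based X, as in the snippet above.
        double xAngle = Z * std::tan(std::atan(tanFOVh * (i - midW) / midW));
        // Pinhole X with the effective focal length fEff = midW / tanFOVh.
        double fEff = midW / tanFOVh;
        double xPinhole = (i - midW) * Z / fEff;

        std::printf("angle-based: %.3f  pinhole: %.3f\n", xAngle, xPinhole);  // identical
        return 0;
    }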

    Although the error decreased, it still remains: 120mm measured against an actual 100mm.

    The measured distance to the ruler is about 874mm.

    However, the actual distance was about 840mm from the front of the camera.

    What should I do?

    By the way,
    the sample program gen2-pointcloud/rgbd-pointcloud seems to work well.
    Should I use Open3D?

    What is the difference between the calculations based on the intrinsics, the HFOV, and Open3D (RGBD)?

    Thanks in advance.

  • Hi erik
    Thank you so much for your quick response.

    The distance to the ruler is about 880mm.

    Stereo Depth settings are:
    `
    stereo->setDepthAlign(dai::CameraBoardSocket::RGB);

    // Create a node that will produce the depth map (using disparity output as it's easier to visualize depth this way)
    stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_DENSITY);
    // Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
    //stereo->initialConfig.setMedianFilter(dai::MedianFilter::KERNEL_7x7);
    stereo->initialConfig.setMedianFilter(dai::MedianFilter::KERNEL_5x5);
    //stereo->initialConfig.setBilateralFilterSigma(7);
    stereo->setLeftRightCheck(lr_check);
    stereo->setExtendedDisparity(extended_disparity);
    stereo->setSubpixel(subpixel);
    auto config = stereo->initialConfig.get();
    config.postProcessing.speckleFilter.enable = false;
    config.postProcessing.speckleFilter.speckleRange = 50;
    config.postProcessing.temporalFilter.enable = true;
    config.postProcessing.spatialFilter.enable = true;
    config.postProcessing.spatialFilter.holeFillingRadius = 2;
    config.postProcessing.spatialFilter.numIterations = 1;
    config.postProcessing.thresholdFilter.minRange = 400;
    config.postProcessing.thresholdFilter.maxRange = 2000;
    config.postProcessing.decimationFilter.decimationFactor = 1;
    stereo->initialConfig.set(config);

    `
    The spatials-on-host demo uses the angle, so should I use the angle instead of the focal length?
    In calc.py ...

    def _calc_angle(self, frame, offset):
        return math.atan(math.tan(self.monoHFOV / 2.0) * offset / (frame.shape[1] / 2.0))
             :
        angle_x = self._calc_angle(depthFrame, bb_x_pos)
        angle_y = self._calc_angle(depthFrame, bb_y_pos)
        spatials = {
            'z': averageDepth,
            'x': averageDepth * math.tan(angle_x),
            'y': -averageDepth * math.tan(angle_y)
        }

    I will try to use the angle.
    Thanks in advance.

    • Hello,

      We are currently creating and testing a program that calculates X and Y from the depth information, with the infrared pattern enabled (1200mA), based on the RGB Depth alignment C++ example, using an OAK-D-Pro.
      As shown in the figure below, 100mm is measured as 130mm.

      How can I calculate the correct value?

      Settings are
      Width = 1280
      Height = 720
      fx, fy, cx, cy = 807.091,807.091,652.369,358.778
      static std::atomic<bool> extended_disparity{ true };
      static std::atomic<bool> subpixel{ true };
      static std::atomic<bool> lr_check{ true };

      The calculation is

                      float fxb = fx * baseline;
                      if (subpixel) {
                          fxb *= 8;
                          for (int j = 0; j < height; j++) {
                              cv::Vec3b* rgb = frame[rgbWindowName].ptr<cv::Vec3b>(j);
                              ushort* dpth = frame[depthWindowName].ptr<ushort>(j);
                              for (int i = 0; i < width; i++) {
                                  if (dpth[i] == 0) continue;
                                  cv::Vec3f xyz;
                                  xyz[2] = fxb / (float)dpth[i];
                                  xyz[0] = ((float)i - cx) * xyz[2] / fx;
                                  xyz[1] = ((float)j - cy) * xyz[2] / fy;
                                  vxyz.push_back(xyz);
                                  vrgb.push_back(rgb[i]);
                              }
                          }
                      }
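
      For context on the fxb *= 8 above: this snippet consumes the node's disparity output, and with subpixel enabled the raw disparity has 3 fractional bits by default, i.e. it is in 1/8-pixel steps. A minimal standalone sketch of the conversion with made-up numbers; the baseline must be in the same unit you want for the depth:

      #include <cstdint>
      #include <cstdio>

      int main() {
          // Assumed values, for illustration only.
          float fx = 807.091f;       // focal length in pixels
          float baseline = 75.0f;    // stereo baseline in mm (roughly 75 mm on OAK-D-Pro)
          uint16_t rawDisp = 560;    // raw disparity sample from the device

          // With subpixel enabled (3 fractional bits), rawDisp is true disparity * 8,
          // so depth = fx * baseline / trueDisparity = fx * baseline * 8 / rawDisp.
          float depthMm = fx * baseline * 8.0f / static_cast<float>(rawDisp);
          std::printf("depth: %.1f mm\n", depthMm);
          return 0;
      }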
      ---
      Thanks in advance.
      • themarpe

        Thank you for your quick reply.

        After I removed "libnop-master/include" from the include directories in VS2022,
        the compile errors were gone.

        Thanks again, themarpe !

      • kfuji3

        As additional information,
        I installed
        <package id="nlohmann.json" version="3.11.2" targetFramework="native" />
        from nuget.org.

        And I added the following include directories in VS2022:
        libnop-master\include;
        depthai-core-v2.17.3-win64\include;
        depthai-core-v2.17.3-win64\include\depthai-shared\3rdparty;
        depthai-core-v2.17.3-win64\lib\cmake\depthai\dependencies\include;
        OpenCV\build\include

        That's all.

        • Hello!
          I'm trying to build depthai-core-example-main using depthai-core-v2.17.3-win64 on VS2022.
          However, I get compiler error C2838 ('Empty': illegal qualified name in member declaration) on lines 65 and 79 in depthai-core-v2.17.3-win64\include\depthai-shared\common\optional.hpp.
          Could somebody please tell me what I should do?
          Thanks in advance.