Hi VojtechVolprecht

VojtechVolprecht That is probably due to the fact that it is output in millimeters, right?

UINT16 offers a greater range and precision than UINT8 can give. If disparity is 10 and the depth is 10450 mm, that value cannot be represented in UINT8 (which tops out at 255).

VojtechVolprecht Moreover, I have noticed that the depth to objects often jumps, e.g. by 5 cm or more. I also tried turning on subpixel mode and it was immediately more precise, around 1 cm.

So my question is: what is the cause?

There are 95 disparity values by default, so a single disparity step can translate into a large jump in actual depth - see the chart here.
When subpixel is enabled, the number of disparity levels increases - docs - so this effect is minimized.
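
For illustration, a quick sketch of that math (fx and baseline below are made-up example values, not your calibration; read the real ones from calibData):

# Depth from disparity: depth = fx * baseline / disparity.
fx = 500.0          # focal length in pixels (assumed example value)
baseline_mm = 75.0  # stereo baseline in mm (assumed example value)

def depth_mm(disp):
    return fx * baseline_mm / disp

print(depth_mm(10), depth_mm(9))        # 3750.0 vs ~4166.7 -> ~42 cm jump
print(depth_mm(10) - depth_mm(10.125))  # one 1/8 subpixel step: ~4.6 cm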

VojtechVolprecht However, without subpixel the depth is not so accurate; it jumps by many centimeters.

If your depth of interest is consistently within some range, you can apply a disparity shift to increase the disparity resolution there.
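
For reference, a rough sketch of how the shift is set on the stereo node (assuming a depthai 2.x pipeline that exposes setDisparityShift; the value 30 is only illustrative):

import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)
# Shifting the disparity search window lowers MinZ and packs the available
# disparity levels into a closer range, at the cost of max depth.
stereo.initialConfig.setDisparityShift(30)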

Thanks,
Jaka

Hi @jakaskerl
Thanks a lot.

So if we focus on depth in the range 1.5 m to 3.5 m, can we somehow increase the precision there? And how do we calculate the shift?

    VojtechVolprecht
    You can only increase the precision for values lower than the current MinZ... docs. So for 1.5 to 3.5 m the precision stays the same; it comes from the basic trigonometry of perspective projection. Unfortunately, the only way to increase it there is subpixel.
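
    As a rough back-of-the-envelope illustration (fx and the baseline are again assumed example values, not your calibration), the depth error from one disparity step grows with the square of the distance:

    # dZ ~= Z^2 * dd / (fx * B): depth error for a disparity step dd.
    fx, B = 500.0, 75.0            # pixels, mm (assumed example values)
    for Z in (1500.0, 3500.0):     # the 1.5 m and 3.5 m range endpoints
        for dd in (1.0, 1.0 / 8):  # full step vs. one 1/8 subpixel step
            print(f"Z={Z/1000:.1f} m, dd={dd}: ~{Z*Z*dd/(fx*B):.0f} mm")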

    Thanks,
    Jaka

    Since subpixel is quite a heavy operation which costs a lot of resources, can we somehow run it only in a chosen ROI? I can imagine it would lower the demand if we said that subpixel only interests us in a 50x50 window.

      VojtechVolprecht
      No, that is currently not possible. If you have scarce resources, consider using smaller left and right images (like a 300x300 crop) and turning on subpixel.
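
      A minimal sketch of that idea (the crop region is illustrative; note that cropping the mono streams shifts the principal point, so check the resulting depth against your calibration):

      import depthai as dai

      pipeline = dai.Pipeline()

      monoLeft = pipeline.create(dai.node.MonoCamera)
      monoRight = pipeline.create(dai.node.MonoCamera)
      monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
      monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)

      # Crop the same region from both mono streams to shrink the stereo
      # workload before enabling subpixel (normalized crop coordinates).
      cropLeft = pipeline.create(dai.node.ImageManip)
      cropRight = pipeline.create(dai.node.ImageManip)
      for crop in (cropLeft, cropRight):
          crop.initialConfig.setCropRect(0.3, 0.3, 0.7, 0.7)

      monoLeft.out.link(cropLeft.inputImage)
      monoRight.out.link(cropRight.inputImage)

      stereo = pipeline.create(dai.node.StereoDepth)
      stereo.setSubpixel(True)
      cropLeft.out.link(stereo.left)
      cropRight.out.link(stereo.right)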

      Thanks,
      Jaka

      Thanks for the info :-)

      Hopefully a last question 😃 We use the Pro Wide camera, so in order to align the RGB and depth we first had to use manual undistortion via OpenCV: https://docs.luxonis.com/software/perception/rgb-d. We came across some problematic behaviour: there is a discrepancy between the depth values measured with this setup and those measured when alpha scaling is not set.

      If I understand it correctly, we ought to use alpha scaling, since we want the depth 100% aligned with the undistorted RGB. However, if we just undistort and do not scale via alpha, the measured distances jump by around the 4-5 cm we need, not by 7-10 cm as with the scaling set. Is the scaling set to some default value?

      Below is the code we used; for the no-scaling case we just commented out the setAlphaScaling() call.

      import cv2
      import numpy as np

      alpha = 0
      stereo.setAlphaScaling(alpha)

      # Build an undistortion map for the RGB frames using the same alpha
      rgb_w = camRgb.getResolutionWidth()
      rgb_h = camRgb.getResolutionHeight()
      rgbIntrinsics = np.array(calibData.getCameraIntrinsics(rgbCamSocket, rgb_w, rgb_h))
      rgb_d = np.array(calibData.getDistortionCoefficients(rgbCamSocket))
      rgb_new_cam_matrix, _ = cv2.getOptimalNewCameraMatrix(rgbIntrinsics, rgb_d, (rgb_w, rgb_h), alpha)
      map_x, map_y = cv2.initUndistortRectifyMap(rgbIntrinsics, rgb_d, None, rgb_new_cam_matrix, (rgb_w, rgb_h), cv2.CV_32FC1)

      # Applied per frame
      frameRgb = cv2.remap(frameRgb, map_x, map_y, cv2.INTER_LINEAR)

        VojtechVolprecht
        Alpha for depthai is calculated based on the left frame intrinsics, so it is not a "default" value. In order to align the images, the alpha must be the same in both cases.

        Thanks
        Jaka

        Thanks.
        I understand, but once I set the alpha scaling to zero, the depth values jump on average by more than 10 cm. If I don't use alpha scaling, meaning I comment that line out, the depth values are more precise. However, as you also mentioned, without the alpha scaling the depth is not perfectly aligned to the RGB.

        That is what worries me most, because we must apply undistortion to our RGB, since we use the Pro Wide camera. We are doing that per your tutorial with OpenCV and its manual undistortion, and we set its matrix alpha the same way we set the alpha for the stereo setAlphaScaling(). This is done for the perfect alignment. However, the depth values jump more after that, and we don't know why.

          VojtechVolprecht
          That shouldn't be the case, since the values are not changed; only the image is warped... Can you provide an MRE?

          Thanks,
          Jaka

          Yes, as an experiment we measured a single depth value at the center of the image. The camera was pointed at a rectangular object. We started capturing the object at around 1 m and moved it further step by step (around 1 cm per step).

          Since I cannot upload our measured text files here, you can look at them here. There are two text files, with and without alpha scaling.

          If I take only the unique captured values, then:

          • With alpha scaling: 103, 107, 112, 117, 123, 130, 137, 145, 154, 164, 176 cm
          • No alpha scaling: 113, 116, 119, 123, 126, 130, 134, 139, 143, 148, 154, 159, 165 cm

          The object, environment and lighting were the same.

          As you can see, the steps with alpha scaling are larger than without it. By "without" I mean, as described above, that setAlphaScaling() is commented out, i.e. no alpha is set.

          The camera settings are as follows:

          • Pro Wide
          • 12MP with ISP scaling 4/13
          • Depth res 800p
          • LR check on
          • 30 FPS
          • Without any filters

          Another question, please. We are thinking about manual subpixeling. Is there a way to obtain the disparity (98MB) without turning ON the subpixel? I mean, is there a way to do the subpixel computation manually later on? I imagine calculating it just for specific pixels, because I read here that the disparity output is supposed to be dumped into memory and the software interpolation done afterwards.

            VojtechVolprecht
            OK, this still doesn't make sense to me. I'll test on Monday; might be a FW issue.

            VojtechVolprecht I mean, is there a way to do the subpixel computation manually later on? I imagine calculating it just for specific pixels.

            I guess you could output the disparity map (at 95 values), then run some custom interpolation (e.g. bicubic) on that image to retrieve finer detail. The depth there would be hallucinated, but you would get smoother transitions and more disparity values.
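
            Something along these lines (a sketch only; dispFrame, fx and baseline_mm are placeholders for your own disparity output and calibration):

            import cv2
            import numpy as np

            # Integer disparity levels 0..94 -> float, so interpolation can
            # produce fractional (sub-level) values between neighbours.
            disp = dispFrame.astype(np.float32)
            # Bicubic upsample: interpolated pixels take fractional disparity
            # values, smoothing the depth steps (estimated, not measured).
            disp_fine = cv2.resize(disp, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
            depth_mm = fx * baseline_mm / np.maximum(disp_fine, 1e-6)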

            OR

            https://github.com/luxonis/depthai-experiments/tree/master/gen2-stereo-on-host

            Thanks,
            Jaka

              Hi @VojtechVolprecht
              Synced with a depthai dev; apparently this is expected. Specifically, alpha scaling is taken into account before the stereo matching process. When alpha scaling is enabled, especially with higher alpha values, the effective image area used for stereo matching becomes smaller, resulting in images with fewer pixels.

              A squished image gives fewer pixels to work with, hence lower depth accuracy.

              Thanks,
              Jaka

              Thank you very much for the clarification :-)

              If I understand it correctly, since we are using the Pro Wide camera and have to use alpha scaling for the alignment, it is not possible to get better depth accuracy?

              And from a different point of view: is there any way to get the full depth resolution (the 800p) and align it (as with alpha scaling) manually in post-processing? I mean, if we don't set the alpha scaling, can we then manually adjust the depth to match the RGB?

                VojtechVolprecht

                VojtechVolprecht And from a different point of view: is there any way to get the full depth resolution (the 800p) and align it (as with alpha scaling) manually in post-processing? I mean, if we don't set the alpha scaling, can we then manually adjust the depth to match the RGB?

                It's possible to get the standard (unscaled) depth and color to the host side, then apply alpha scaling afterwards and align the depth to the RGB. This should preserve the depth accuracy.
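
                One way to realize this, roughly (a sketch reusing rgb_w, rgb_h, frameRgb, calibData and rgbCamSocket from your snippet above; frameDepth is a placeholder for the depth frame already aligned to RGB on the device):

                import cv2
                import numpy as np

                alpha = 0  # applied only on the host; setAlphaScaling() stays off
                K = np.array(calibData.getCameraIntrinsics(rgbCamSocket, rgb_w, rgb_h))
                d = np.array(calibData.getDistortionCoefficients(rgbCamSocket))
                newK, _ = cv2.getOptimalNewCameraMatrix(K, d, (rgb_w, rgb_h), alpha)
                map_x, map_y = cv2.initUndistortRectifyMap(K, d, None, newK, (rgb_w, rgb_h), cv2.CV_32FC1)

                # Apply the identical map to both frames so they stay aligned;
                # nearest-neighbour for depth so values aren't blended across edges.
                rgb_undist = cv2.remap(frameRgb, map_x, map_y, cv2.INTER_LINEAR)
                depth_undist = cv2.remap(frameDepth, map_x, map_y, cv2.INTER_NEAREST)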

                Thanks,
                Jaka

                jakaskerl
                I tried the main.py script from the experiments repo you provided, and it seems it does the rectification but not the undistortion. I guess that needs to be done manually? However, if I look into the depthai_demo script, the rectified left/right frames are already undistorted as well.

                  jakaskerl

                  Another question, please, regarding the output of the disparity map. Is there a way to use debugDispCostDump without turning on subpixel? Right now we can only access the 96 values per pixel if subpixel is ON.

                  Thanks a lot for your support.

                    VojtechVolprecht

                    VojtechVolprecht I guess that needs to be done manually?

                    I guess so. The demo uses on-device stereo, so it performs undistortion and rectification at the same time.
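
                    On the host you can fold both into a single remap, along these lines (a sketch; K_l and d_l are the left intrinsics/distortion from calibData, and R1, P1 come from cv2.stereoRectify):

                    import cv2

                    # Passing the distortion coefficients together with the
                    # rectification rotation R1 undistorts and rectifies in one step.
                    map_x, map_y = cv2.initUndistortRectifyMap(K_l, d_l, R1, P1, (w, h), cv2.CV_32FC1)
                    left_rect = cv2.remap(left_frame, map_x, map_y, cv2.INTER_LINEAR)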

                    VojtechVolprecht Is there a way to use debugDispCostDump without turning on subpixel? Right now we can only access the 96 values per pixel if subpixel is ON.

                    Can you elaborate on what you wish to achieve?

                    Thanks,
                    Jaka

                      jakaskerl Can you elaborate on what you wish to achieve?

                      Yes, we are thinking of doing the "subpixeling" in a specific ROI on the host PC, because we need to boost the accuracy and subpixel mode looks good, but it costs a lot of resources. We still need to run at 30 FPS, therefore we are thinking of subpixeling just an important ROI. Something like the sketch below is what we have in mind.
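
                      A sketch of the idea, assuming we could get a per-pixel matching-cost volume costs of shape (H, W, 96) (e.g. from the cost dump) plus the integer winning disparity disp of shape (H, W):

                      import numpy as np

                      def refine_roi(disp, costs, y0, y1, x0, x1):
                          # Parabola fit through the 3 costs around each winning
                          # disparity gives a fractional (subpixel) minimum.
                          out = disp.astype(np.float32).copy()
                          for y in range(y0, y1):
                              for x in range(x0, x1):
                                  d = int(disp[y, x])
                                  if 0 < d < costs.shape[2] - 1:
                                      c0, c1, c2 = costs[y, x, d - 1:d + 2].astype(np.float32)
                                      denom = c0 - 2 * c1 + c2
                                      if denom != 0:
                                          out[y, x] = d + 0.5 * (c0 - c2) / denom
                          return out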