Thanks.
I understand, but once I set the alpha scaling to zero, the depth values jump by more than 10 cm on average. If I don't use alpha scaling, meaning I comment that line out, the depth values are more precise. However, as you also mentioned, without alpha scaling the depth is not perfectly aligned to the RGB.

That is what worries me most, because we must undistort our RGB since we use the Pro Wide camera. We do that following your tutorial with OpenCV and its manual undistortion. We set its matrix alpha the same way we set the alpha for stereo alpha scaling, to get perfect alignment. However, after that the depth values jump more, and we don't know why.
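
For context, here is a minimal sketch of what the manual undistortion computes, assuming a Brown-Conrady distortion model; the actual K, distortion coefficients, and alpha-scaled K_new would come from the device calibration and cv2.getOptimalNewCameraMatrix, so the values below are hypothetical:

```python
import numpy as np

def build_undistort_maps(K, dist, K_new, size):
    """Per-pixel sampling maps for undistortion, mirroring what
    cv2.initUndistortRectifyMap computes internally."""
    w, h = size
    k1, k2, p1, p2, k3 = dist
    u, v = np.meshgrid(np.arange(w, dtype=np.float64),
                       np.arange(h, dtype=np.float64))
    # Back-project each output pixel through the *new* camera matrix
    # (the alpha-scaled one from cv2.getOptimalNewCameraMatrix).
    x = (u - K_new[0, 2]) / K_new[0, 0]
    y = (v - K_new[1, 2]) / K_new[1, 1]
    # Apply the Brown-Conrady distortion model.
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Project with the original camera matrix to find where to sample.
    map_x = K[0, 0] * xd + K[0, 2]
    map_y = K[1, 1] * yd + K[1, 2]
    return map_x.astype(np.float32), map_y.astype(np.float32)
```

The resulting maps would then be applied with cv2.remap; with zero distortion and K_new equal to K, the maps are the identity.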

    VojtechVolprecht
    That shouldn't be the case, since the values are not changed, only the image is warped... Can you provide an MRE?

    Thanks,
    Jaka

    Yes, as an experiment we measured a single depth value at the center of the image. The camera was pointed at a rectangular object. We captured the object starting at around 1 m and moved it further step by step (around 1 cm per step).

    Since I cannot upload our measured text files here, you can look at them here. There are two text files with and without alpha scaling.

    If I take only the unique captured values, then:

    • With alpha scaling: 103, 107, 112, 117, 123, 130, 137, 145, 154, 164, 176 cm
    • No alpha scaling: 113, 116, 119, 123, 126, 130, 134, 139, 143, 148, 154, 159, 165 cm

    The object, environment and lighting were the same.

    As you can see, the steps with alpha scaling are larger than without it. By "without" I mean, as described above, that setAlphaScaling is commented out, i.e. no alpha is set.

    The camera settings are as follows:

    • Pro Wide
    • 12 MP with ISP scaling 4/13
    • Depth resolution 800p
    • LR check on
    • 30 FPS
    • No filters
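
    For reference, those settings roughly correspond to a pipeline like the sketch below. This is a configuration sketch only (untested, no output queues shown); node and enum names follow the depthai v2 API:

```python
import depthai as dai

pipeline = dai.Pipeline()

# RGB camera: 12 MP sensor, downscaled by the ISP.
cam_rgb = pipeline.create(dai.node.ColorCamera)
cam_rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
cam_rgb.setIspScale(4, 13)
cam_rgb.setFps(30)

# Stereo pair at 800p, 30 FPS.
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
for cam, socket in ((mono_left, dai.CameraBoardSocket.CAM_B),
                    (mono_right, dai.CameraBoardSocket.CAM_C)):
    cam.setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)
    cam.setBoardSocket(socket)
    cam.setFps(30)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setLeftRightCheck(True)   # LR check on
stereo.setSubpixel(False)        # subpixel off
# stereo.setAlphaScaling(0.0)    # commented out, as in the experiment
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)
```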

    Another question, please. We are thinking about doing the subpixel interpolation manually. Is there a way to obtain the disparity (98MB) without turning subpixel ON? I mean, is there a way to do the subpixel computation manually later on? I imagine calculating it only for specific pixels, because I read here that the disparity output is dumped into memory and the software interpolation is then done on it.

      VojtechVolprecht
      OK, this still doesn't make sense to me. I'll test on Monday; might be a FW issue.

      VojtechVolprecht I mean, is there a way to do the subpixel computation manually later on? I imagine calculating it only for specific pixels.

      I guess you could output a disparity map (with 95 values), then run some custom interpolation (e.g. bicubic) on that image to retrieve finer detail. The depth there would be hallucinated, but you would get smoother transitions and more disparity values.
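
      To illustrate that idea, here is a 1D sketch (with hypothetical values) of cubic Catmull-Rom interpolation over a row of integer disparities; a full bicubic version would apply the same along both axes:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    # Cubic (Catmull-Rom) interpolation between p1 and p2 at fraction t in [0, 1).
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)

def upsample_row(row, factor=4):
    # Evaluate the cubic at fractional positions between integer disparities,
    # yielding smoother (but hallucinated) sub-integer disparity values.
    padded = np.pad(np.asarray(row, dtype=np.float64), 2, mode="edge")
    out = []
    for i in range(len(row)):
        p0, p1, p2, p3 = padded[i + 1:i + 5]
        out.extend(catmull_rom(p0, p1, p2, p3, k / factor) for k in range(factor))
    return np.array(out)

row = [30, 30, 31, 33, 33, 34]   # integer disparities, e.g. from the 0..94 range
fine = upsample_row(row)         # 4x as many values, fractional in between
```

      The original integer values are preserved at whole-pixel positions; everything in between is interpolated, not measured.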

      OR

      luxonis/depthai-experiments/tree/master/gen2-stereo-on-host

      Thanks,
      Jaka

        Hi @VojtechVolprecht
        Synced with a DepthAI dev; apparently this is expected. Specifically, alpha scaling is taken into account before the stereo matching process. When alpha scaling is enabled, especially with higher alpha values, the effective image area used for stereo matching becomes smaller, resulting in images with fewer pixels.

        A squished image leaves fewer pixels to work with, hence lower depth accuracy.

        Thanks,
        Jaka

        Thank you very much for the clarification :-)

        If I understand correctly, since we are using the Pro Wide camera and have to use alpha scaling for alignment, it is not possible to get better depth accuracy?

        And from a different point of view: is there any way to get the full depth resolution (the 800p) and align it to RGB (like with alpha scaling) manually in post-processing? I mean, if we don't set alpha scaling and then manually adjust the depth to match the RGB?

          VojtechVolprecht

          VojtechVolprecht And from a different point of view: is there any way to get the full depth resolution (the 800p) and align it to RGB (like with alpha scaling) manually in post-processing? I mean, if we don't set alpha scaling and then manually adjust the depth to match the RGB?

          It's possible to get the standard (unscaled) depth and color to the host side, then apply the alpha scaling there and align the depth to RGB afterwards. This should preserve depth accuracy.
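
          A sketch of the host-side warp step, assuming the sampling maps (map_x, map_y) come from cv2.initUndistortRectifyMap built with the alpha-scaled camera matrix; nearest-neighbour sampling is used so depth values are never blended across object edges:

```python
import numpy as np

def remap_nearest(depth, map_x, map_y):
    # Warp a depth map with per-pixel source coordinates (float maps),
    # sampling the nearest source pixel instead of interpolating.
    h, w = depth.shape
    xs = np.clip(np.rint(map_x).astype(int), 0, w - 1)
    ys = np.clip(np.rint(map_y).astype(int), 0, h - 1)
    out = depth[ys, xs]
    # Pixels that sample outside the source image are marked invalid (0).
    invalid = ((map_x < -0.5) | (map_x > w - 0.5)
               | (map_y < -0.5) | (map_y > h - 0.5))
    out[invalid] = 0
    return out
```

          The same effect can be had with cv2.remap and INTER_NEAREST; the point is only that depth, unlike color, should not be interpolated during the warp.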

          Thanks,
          Jaka

          jakaskerl
          I tried the main.py script from the experiments repo you provided, and it seems it does rectification but not undistortion. I guess that needs to be done manually? However, if I look into the depthai_demo script, the rectified left/right cameras are already undistorted as well.

            jakaskerl

            Another question, please, regarding the disparity map output. Is there a way to use debugDispCostDump without turning on subpixel? Right now we can only access the 96 values per pixel if subpixel is ON.

            Thanks a lot for your support.

              VojtechVolprecht

              VojtechVolprecht I guess that needs to be done manually?

              I guess so. The demo uses on-device stereo, so it performs undistortion and rectification at the same time.

              VojtechVolprecht Is there a way to use debugDispCostDump without turning on subpixel? Right now we can only access the 96 values per pixel if subpixel is ON.

              Can you elaborate on what you wish to achieve?

              Thanks,
              Jaka

                jakaskerl Can you elaborate on what you wish to achieve?

                Yes, we are thinking of doing the "subpixeling" for specific ROIs on the host PC. We need to boost the accuracy, and subpixel mode looks good, but it costs a lot of resources. We still need to run at 30 FPS, so we are considering doing the subpixel computation only for important ROIs.
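
                For what it's worth, if you compute matching costs on the host (e.g. along the lines of the gen2-stereo-on-host example), the standard subpixel trick is a parabola fit over the costs around the integer disparity, and it can be run for selected pixels only. Below is a hedged sketch using a simple SAD cost; the window size and function names are illustrative, not from the DepthAI API:

```python
import numpy as np

def subpixel_offset(c_minus, c0, c_plus):
    # Parabola fit over costs at d-1, d, d+1; returns an offset in (-0.5, 0.5).
    denom = c_minus - 2.0 * c0 + c_plus
    return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

def refine_pixel(left, right, y, x, d, win=3):
    # SAD cost between a small left patch and right patches at d-1, d, d+1,
    # then a parabola-refined disparity for this single pixel.
    h = win // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    costs = [np.abs(patch - right[y - h:y + h + 1,
                                  x - dd - h:x - dd + h + 1]).sum()
             for dd in (d - 1, d, d + 1)]
    return d + subpixel_offset(*costs)
```

                Run over an ROI, this touches only three candidate disparities per pixel, which is why it stays cheap compared to full subpixel mode.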

                  VojtechVolprecht
                  AFAIK, the cost debug output was not added to the API, likely due to the immense message size. I doubt it would be of any help anyway.

                  Thanks,
                  Jaka