• DepthAI-v2
  • Disparity Scaling with Subpixel mode, since v2.29

Hi Luxonis Team,

I read the v2.29 release notes mentioning the implementation of scaling to 13 bits.

This change breaks our current approach to disparity-to-depth conversion using subpixel mode.

For example: a good measurement of 3.2 meters in DISPARITY8 (no post-processing) now becomes 0.28 meters in DISPARITY16 (with subpixel, assuming 3 bits for the fractional part).

May I know how I should correct this? I can't really find documentation on the details of the 13-bit scaling, or on how to convert back to the good old disparity values.

Thanks!

    Huimin
    Use depth.initialConfig.getMaxDisparity() to calculate the correct conversion.
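
    For example, a rough host-side sketch of that idea (not an official snippet; it assumes the default 95-pixel disparity search range without extended disparity, and the function name, focal_px, and baseline_m are just placeholder values you would take from your own calibration):

    import numpy as np

    # max_raw would come from the pipeline, e.g.:
    #   max_raw = depth.initialConfig.getMaxDisparity()   # `depth` = dai.node.StereoDepth
    SEARCH_RANGE_PX = 95.0  # default disparity search range in pixels (190 with extended disparity)

    def raw_to_depth_m(raw_frame: np.ndarray, max_raw: float, focal_px: float, baseline_m: float) -> np.ndarray:
        """Map a DISPARITY16 frame back to disparity in pixels, then to depth in meters."""
        disparity_px = raw_frame.astype(np.float32) * (SEARCH_RANGE_PX / max_raw)
        depth_m = np.zeros_like(disparity_px)
        valid = disparity_px > 0
        depth_m[valid] = (focal_px * baseline_m) / disparity_px[valid]
        return depth_m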

    Thanks,
    Jaka

      I am trying to interpret this a bit more.

      8192 divided by maxDisp, which is 95, gives 86.231578…; the math rounds that down to 86 and multiplies by maxDisp again, so it becomes 86 * 95 = 8170.

      Does this mean that, for the values we are now seeing in the 16-bit data in subpixel mode:

      • The values are multiplied by 86 internally in the VPU
      • That means we should first divide the 16-bit data by 86 to obtain the actual disparity
      • After the division, we need to treat the last three bits as fractional

      Is my understanding correct? Or do I need to take out the fractional part first, before scaling it down by 86?
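
      In code, my current guess would be something like this (just my interpretation of the numbers above, not verified):

      import numpy as np

      SCALE = 8192 // 95        # = 86, my guessed internal multiplier
      SUBPIXEL_LEVELS = 2 ** 3  # the 3 fractional bits

      def raw16_to_disparity_px(raw_frame: np.ndarray) -> np.ndarray:
          # Undo the guessed internal scaling, then the subpixel fractional bits.
          return raw_frame.astype(np.float32) / SCALE / SUBPIXEL_LEVELS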

      • Improved StereoDepth filtering and an option to set a custom order of filters
        • Disparity is first scaled to 13 bits before going through filtering, which results in filters being more effective.

      Basically, what is the actual scale factor behind the statement above from the release notes?

      Thanks!

        Huimin
        Set the filtering order:

        import depthai as dai

        # Assuming `depth` is the dai.node.StereoDepth node in your pipeline.
        config = depth.initialConfig.get()  # start from the node's current raw config
        config.postProcessing.filteringOrder = [
          dai.RawStereoDepthConfig.PostProcessing.Filter.TEMPORAL,
          dai.RawStereoDepthConfig.PostProcessing.Filter.SPECKLE,
          dai.RawStereoDepthConfig.PostProcessing.Filter.SPATIAL,
          dai.RawStereoDepthConfig.PostProcessing.Filter.DECIMATION,
          dai.RawStereoDepthConfig.PostProcessing.Filter.MEDIAN,
        ]
        depth.initialConfig.set(config)

        According to the GitHub change you sent, the max disparity is only set if the median filter is not the last one.

        Thanks,
        Jaka

          jakaskerl Hi, yes, this I have already understood, and I am using those filters. Is the factor then 86, or some other number? I need to obtain the actual pixel disparity so I can perform the disparity-to-depth calculation normally on the host. Thanks!

            Huimin Yes. You can also enable debug logs; the factor is printed there (in case you are interested in the implementation details). Or, better, just rely on getMaxDisparity() and calculate based on that.
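
            For reference, one way to surface those debug prints (a sketch only; the rest of the pipeline setup is omitted):

            import depthai as dai

            pipeline = dai.Pipeline()
            # ... MonoCamera + StereoDepth nodes would be set up here ...

            with dai.Device(pipeline) as device:
                # Ask the device to emit debug-level logs and print them on the host.
                device.setLogLevel(dai.LogLevel.DEBUG)
                device.setLogOutputLevel(dai.LogLevel.DEBUG)
                # ... normal frame loop ...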

            Hi @GergelySzabolcs

            I am back with some more info.

            If I call getMaxDisparity() in the C++ depthai-core API, I get 8192.

            In contrast, turning on debug logging gives me the following:

            [18443010C10149F500] [1.1] [5.327] [StereoDepth(15)] [debug] Post-processing filters: '2'
            [18443010C10149F500] [1.1] [5.327] [StereoDepth(15)] [debug] Executing median filter
            [18443010C10149F500] [1.1] [5.328] [StereoDepth(15)] [debug] Scaling disparity values by a factor of '10'
            [18443010C10149F500] [1.1] [5.330] [StereoDepth(15)] [debug] Executing decimation filter
            [18443010C10149F500] [1.1] [5.335] [StereoDepth(15)] [info] Max disparity: 760 Disparity subpixel levels: 3 Disparity integer level multiplier: 8 Scale factor: 10

            The two ways of getting the max disparity don't quite tally. The debug log shows a max disparity of 760 (which is 95 * 8, the same as pre-v2.29). Now we have a scale factor of 10, so that gives 7600.

            This is still less than 8192.

            Which one should I use?
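
            For concreteness, converting the same raw value with the two candidate full-scale numbers gives noticeably different pixel disparities (the raw value below is made up, purely for illustration):

            # An arbitrary raw DISPARITY16 value, purely for illustration.
            raw = 4000

            # Candidate 1: full scale = getMaxDisparity() = 8192
            disp_a = raw * 95.0 / 8192   # ~46.4 px

            # Candidate 2: full scale = 760 * 10 = 7600 (from the debug log)
            disp_b = raw * 95.0 / 7600   # = 50.0 px

            print(disp_a, disp_b)        # roughly 8 % apart, so the resulting depth differs too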