Hi,

We are using the OAK-D for an autonomous driving project, and we have run into some issues while verifying its functions.
Below are the issues we have met so far.

  1. Diagonal distance error (white line: OAK-D / red line: 2D Lidar in the picture below)
    ==> This is one of the most critical issues we have met.

  2. Front wall distance error
    ==> back-and-forth shaking of the measured points

  3. Distance measurement error
    ==> error relative to a 1 m reference (zero error at 1.0 m)
    Actual distance | Measurement result | Error
    0.5m | 0.4m | -0.1m
    1.0m | 1.0m | 0.0m
    1.5m | 1.6m | +0.1m
    2.0m | 2.2m | +0.2m

  4. Severe border (edge) noise

  5. Optimal code required for ROS2
    ==> We are currently using the three repositories listed below for our project. Could you recommend the best approach for our use case? The two points below describe how we use them.
    First: we convert the depth output to 2D and use that.
    Second: we only need the depth data, so please share how to use the depth information without its AI functions.

<repositories we are using now>
https://github.com/luxonis/depthai-core
https://github.com/luxonis/depthai-ros
https://github.com/luxonis/depthai-ros-examples

Best regards,
Ryan.


    Hello RyanLee ,
    1) I am not entirely sure what I am looking at in this image. Is a higher-resolution image available? Could you elaborate on the diagonal distance error - how is it represented in the image?
    2) Do you mean blurry color images due to the shaking/vibration?
    3) Could you provide some images of depth here and depth bounding box?
    4) Is this noise at the edge of the depth frame, or edge of an object? Could you provide any higher-res images?
    5) Asking the team internally.
    Thanks, Erik

    Hello,
    On 2: if you are referring to the points fluctuating slightly, we can try enabling the speckle filter and the temporal filter. (Note: a small number of points will still move back and forth, because the stereo system can sometimes decide that the best disparity match is in the next pixel along the epipolar line.) A minimal sketch of enabling these filters is below.
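
    For reference, a minimal sketch of enabling both filters on the StereoDepth node in depthai-python; the parameter values here are only placeholders, not values tuned for your scene:

```python
import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)

# Read the current stereo config, enable the on-device post-processing filters, write it back
config = stereo.initialConfig.get()
config.postProcessing.speckleFilter.enable = True
config.postProcessing.speckleFilter.speckleRange = 50  # max speckle size in pixels (placeholder value)
config.postProcessing.temporalFilter.enable = True     # smooths depth across consecutive frames
stereo.initialConfig.set(config)
```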

    1. Stereo_inertial_node doesn't run any of the AI processing; it runs depth and IMU only, so I think that might be the best fit for you. Here is the link to the example. There is a PR in progress for the example that adds additional depth post-processing (a rough depth-only pipeline sketch is also included below).
      Let me know if this helps or if you need any additional help.
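
    For reference, a bare depthai-python sketch of what "depth without AI" looks like: two mono cameras feeding StereoDepth, with no NeuralNetwork node at all. The socket and resolution choices are illustrative only; in your case the ROS node wraps an equivalent pipeline for you:

```python
import depthai as dai

# Depth-only pipeline: two mono cameras into StereoDepth, no NeuralNetwork node
pipeline = dai.Pipeline()

mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
mono_left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
mono_right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_ACCURACY)
stereo.setLeftRightCheck(True)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
    while True:
        depth_frame = q.get().getFrame()  # uint16 depth map in millimeters
        # use depth_frame directly, e.g. publish it from a host-side ROS2 node
```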

    RyanLee Regarding point 3, could you also try setting stereoDepthNode.setFocalLengthFromCalibration(True)? DepthAI versions 2.15.1 and later will have this set to True by default.
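
    In pipeline code that is a single call on the StereoDepth node, roughly like this (assuming a DepthAI version that exposes the method):

```python
import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)
# Use the focal length stored in the device calibration for the disparity-to-depth conversion
stereo.setFocalLengthFromCalibration(True)
```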
    Thanks, Erik

    Hi,

    I really appreciate your help, thank you so much, and I am sorry for not giving you enough information in the previous post. I have added more information and answers based on your feedback below. We will also test according to your guidance as soon as possible and come back to you.

    Here are the updated answers.

    Issue Num. 1:
    In the picture below, the white line is from the OAK-D and the red line is from the 2D Lidar. The OAK-D data is not stable compared to the 2D Lidar in terms of diagonal depth. Is there any way to improve this? I have added a higher-resolution image. Please check it and let us know if anything is still unclear.

    Issue Num. 2
    I have added a video of this test. Again, the white dots are from the OAK-D and the red dots are from the 2D Lidar. They are not stable compared to the 2D Lidar.

    https://drive.google.com/file/d/1PF3f4WhWOsTVpB2rdhnbwjRe0akYafQp/view?usp=sharing

    @luxonis-Sachin Could you give me some more information on how to enable the speckle filter and temporal filter you mentioned?

    Issue Num. 3
    Our customer said they do not use AI; they only use the depth information in their application.

    Issue Num. 4
    The noise is at the edge of the depth frame. I have added a video of it below.

    https://drive.google.com/file/d/1zoIkkPxQi1qDTmO_s_UqxrqAulKw0ofG/view?usp=sharing

    Issue Num. 5
    We will try the Stereo_inertial_node and come back to you with the test results.


      Hello RyanLee,
      regarding points 1/2/4, the depth accuracy/smoothness depends on a number of factors:

      • calibration (if factory calibrated, it should be optimal)
      • since you are using OAK-D (non-Pro), texture and lighting play a huge role in depth accuracy, see docs here (Pro docs here)
      • post-processing filters, docs here (Depth Filters)

      That said, stereo depth accuracy can't compare to ToF solutions for applications like depth scanning of (close-by) walls, where high resolution isn't needed. The raw output of stereo depth simply can't be as accurate. You can do additional filtering on the host side (e.g. WLS) to reduce the noise, but that won't make the depth more accurate.
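
      As a rough illustration of why the error grows with distance (as in your table under issue 3), disparity-based depth error scales roughly with the square of the distance. The numbers below (focal length, disparity matching error) are assumptions for illustration, not measured values for your unit:

```python
# Approximate stereo depth error: dZ ~ Z^2 * d_disp / (f * B)
# Z = distance, f = focal length in pixels, B = stereo baseline, d_disp = disparity matching error
focal_px = 440.0      # assumed focal length at 640x400 mono resolution (placeholder)
baseline_m = 0.075    # OAK-D stereo baseline is about 7.5 cm
disp_err_px = 0.25    # assumed matching error, e.g. with subpixel mode enabled

for z_m in (0.5, 1.0, 1.5, 2.0):
    dz_m = z_m ** 2 * disp_err_px / (focal_px * baseline_m)
    print(f"{z_m:.1f} m -> about {dz_m * 100:.1f} cm of expected depth uncertainty")
```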
      Thanks, Erik

      6 days later

      I enabled the required on-device Depth Filters in a branch for a quick test and shared it with you on Discord.
      Adding it here for visibility.