21 days later

Hi @FS93, I'm working on an application similar to yours. After you implemented Erik's tips, what was the new error between the real Z coordinate and the one measured by the OAK-D camera?

  • erik replied to this.
    6 days later

    I need some clarification about the OAK camera.
    Can we capture a 7-segment display and convert it to a number using an OAK camera?

    a month later

    erik What model of OAK-D PoE camera provides an error of less than 1.5% at 1 m ground-truth distance?

    5 months later

    erik Can you please answer afsana's question here? I am also trying to convert disparity into depth, and I cannot find, in the link you provided, the basis for assuming 7.5 as an effective focal length.

    Kindly help as soon as you are able to

    • erik replied to this.

      erik Thank you!!!

      I wanted to just follow up on this, I am using the code given here:
      (RGB, DEPTH, CONFIDENCE aligned.py)
      https://github.com/luxonis/depthai-python/blob/main/examples/StereoDepth/rgb_depth_aligned.py

      Essentially I would need to take the disparity output and apply the formula:
      depth = 441.25 * 7.5 / disparity [pixels]
      to calculate the depth??

      Please answer this, as I have been stuck on it for the past 3 days and my head has been noodling up every day.

      THANK YOU IN ADVANCE!!!!

      Regards,
      Mamoon

      • erik replied to this.

        Hi MamoonIsmailKhalid ,
        If you have a 7.5 cm baseline distance and a 441-pixel focal length, then yes, that's the correct formula 🙂 Otherwise, I would strongly suggest using the depth output instead of calculating it yourself on the host.
        Thanks, Erik
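For anyone following along, the relation Erik confirmed can be sketched numerically. This is a hedged illustration, not Luxonis code: the constants 441.25 px (focal length) and 7.5 cm (baseline) are the values quoted in this thread, and your device's calibration may differ.

```python
FOCAL_PX = 441.25    # left mono camera focal length in pixels (from this thread)
BASELINE_CM = 7.5    # OAK-D stereo baseline in cm (from this thread)

def disparity_to_depth_cm(disparity_px: float) -> float:
    """Standard stereo relation: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity means the point is at infinity
    return FOCAL_PX * BASELINE_CM / disparity_px

print(disparity_to_depth_cm(33.0))  # a 33 px disparity is roughly 1 m (~100.3 cm)
```

Note the inverse relationship: depth precision degrades quadratically with distance, which is one reason Erik recommends letting the device compute depth on-board.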

        How would one extract the depth data directly? Are there any code examples you can refer me to? I am using the depth data to overlay depth information on the output of 2D pose estimation (Google Mediapipe) to reconstruct a 3D pose from the extracted key points.
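For context, the overlay step described here (looking up the aligned depth under each 2D keypoint and back-projecting through the pinhole model) can be sketched like this. The intrinsics below are hypothetical placeholders; on a real OAK device they would come from the calibration data rather than hard-coded values.

```python
import numpy as np

# Hypothetical intrinsics for illustration only -- on a real device, read the
# calibrated values from the camera instead of hard-coding them.
FX, FY = 441.25, 441.25   # focal lengths in pixels
CX, CY = 320.0, 200.0     # principal point in pixels

def keypoint_to_3d(u: float, v: float, depth_map_mm: np.ndarray) -> np.ndarray:
    """Back-project a 2D keypoint (u, v) into camera-space XYZ (mm) using the
    pinhole model and a depth map aligned to the same image."""
    z = float(depth_map_mm[int(round(v)), int(round(u))])  # depth under the keypoint
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# Synthetic aligned depth map: a flat wall 1000 mm away.
depth = np.full((400, 640), 1000, dtype=np.uint16)
print(keypoint_to_3d(320.0, 200.0, depth))  # principal point maps to [0, 0, 1000]
```

The important prerequisite is that the depth frame is aligned to the same image the pose estimator ran on; otherwise the (u, v) lookup lands on the wrong pixel.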

        jakaskerl THANK YOU SO MUCH!! I will try that approach and post here if that solves my problem. But I really appreciate your guidance already.

        2 years later

        afsana Can you please share your code for measuring the depth of the object?
        Thanks

        erik Can you please provide me with a GitHub link or code link for getting the depth of the object?

        a month later

        Hi @erik, can we pass a grayscale frame to the YOLOv5 blob model for detection while using the color camera? If yes, how?

        Note: We want the blob model to run detections on the grayscale frame instead of the RGB frame from the color camera.

          Place an ImageManip node between the ColorCamera and the NN and change the frame type to GRAY8.

          Thanks,
          Jaka