• DepthAI
  • Non-image-based output examples?

Are there any (Python preferred) examples of getting non-image-based summary results like:

  • Smallest depth value and 2d position in image
  • List of recognized objects [label, x, y, z, confidence]
  • Heading and Position relative to a lane with lane center curvature
  • "Follow behind walking human data" [x,y,z of feet/center of human, confidence]
  • Obstacle crossing a line in successive images

Looking over the depthai_experiments repository, the social distance example seems like a good start for "make the robot follow behind a walking human", and maybe I can figure out how to make the video output optional. I guess as a start I can see what the performance is when throwing away the frames.

Still hoping someone has an example of throwing away the frames on the device rather than clogging the USB link with unwanted video frames.


    Hello cycob,

    • You can get the smallest depth value using the SpatialLocationCalculator node (see the sketch after this list). For the 2D position in an image - I guess you just mean running object detection on an RGB image? If so, example here
    • Example here
    • Lane detection is usually run on the host side (e.g., open3d), and we don't have many such demos (yet), only the default open3d one here
    • Isn't this the same as point 2? Getting x/y/z of an object, in this case a human?
    • Similar answer to the 3rd question
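
    For the first point, a minimal, untested sketch assuming the depthai 2.x Python API: the SpatialLocationCalculator node with the MIN calculation algorithm (available in recent depthai releases) reports the smallest depth value inside a region of interest, and only small spatial-data packets cross the USB link, no frames:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Stereo depth from the two mono cameras
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

# Spatial location calculator over a central ROI; MIN reports the
# smallest depth value found inside the ROI
slc = pipeline.create(dai.node.SpatialLocationCalculator)
config = dai.SpatialLocationCalculatorConfigData()
config.roi = dai.Rect(dai.Point2f(0.4, 0.4), dai.Point2f(0.6, 0.6))
config.depthThresholds.lowerThreshold = 100    # mm, skip invalid pixels
config.depthThresholds.upperThreshold = 10000  # mm
config.calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MIN
slc.initialConfig.addROI(config)
stereo.depth.link(slc.inputDepth)

# The only XLinkOut carries small spatial-data packets, not frames
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("spatialData")
slc.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("spatialData", maxSize=4, blocking=False)
    while True:
        for loc in q.get().getSpatialLocations():
            c = loc.spatialCoordinates
            print(f"min depth in ROI: x={c.x:.0f} y={c.y:.0f} z={c.z:.0f} mm")
```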

    The social distance demo could be a useful starting point, as it already provides the 3D coordinates of all detected people. If you don't want to stream frames to the host computer, just remove the XLinkOut node from the pipeline (and the link to it). A minimal sketch of that idea follows.
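
    To make that concrete, here is a minimal, untested sketch of a detections-only pipeline: the only XLinkOut carries the small detection packets, so no video frames ever cross the USB link. The blob path is a placeholder for your own model file:

```python
import depthai as dai

pipeline = dai.Pipeline()

# RGB preview sized for the detection network's input
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(300, 300)
camRgb.setInterleaved(False)

# Object detector; the blob path is a placeholder
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")
nn.setConfidenceThreshold(0.5)
camRgb.preview.link(nn.input)

# No XLinkOut is linked to camRgb.preview or nn.passthrough, so
# frames stay on the device; only detection results stream out
xoutDet = pipeline.create(dai.node.XLinkOut)
xoutDet.setStreamName("detections")
nn.out.link(xoutDet.input)

with dai.Device(pipeline) as device:
    qDet = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in qDet.get().detections:
            print(f"label={det.label} conf={det.confidence:.2f} "
                  f"bbox=({det.xmin:.2f},{det.ymin:.2f},"
                  f"{det.xmax:.2f},{det.ymax:.2f})")
```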

    Thanks, Erik

      11 days later

      erik If you don't want to stream frames to the host computer, just remove the XLinkOut node from the pipeline (and the link to it)

      It sounded easy, but I knew it was going to require more understanding than I had at the time.

      Results:

      • Raspberry Pi 3B+ average load decreased from 100% to 2%
      • Oak-D-Lite object-detection-with-depth processing rate increased from 12 FPS to 30 FPS
      • Processor temperature rise under load decreased from 20 °C to 2 °C with the unaspirated heatsink, staying well below the soft temperature limit
      • Oak-D-Lite adds a 4.5 W load during operation to the robot's 6 W base load (27 Wh safely available from the battery)

      Created two versions:
      1) Optional display of the image, annotated objects, and depth map
      2) Console-only version showing the minimum code needed for results-only programming (a sketch of the combined pattern follows the repo link below)

      https://github.com/slowrunner/GoPiLgc/tree/main/Examples/Oak-D-Lite/spatial_tiny_yolo
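
      For readers who don't want to dig through the repo, a hypothetical sketch (not the actual repo code) of the combined pattern: spatial detections always stream as [label, x, y, z, confidence], while the frame XLinkOut is only created when a display is requested. The blob path and flag name are placeholders:

```python
import argparse
import depthai as dai

parser = argparse.ArgumentParser()
parser.add_argument("--display", action="store_true",
                    help="also stream preview frames to the host")
args = parser.parse_args()

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(300, 300)
camRgb.setInterleaved(False)

monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)  # align depth to RGB
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

# Spatial detector fuses detections with depth: [label, x, y, z, confidence]
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")  # placeholder model
nn.setConfidenceThreshold(0.5)
camRgb.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

xoutDet = pipeline.create(dai.node.XLinkOut)
xoutDet.setStreamName("detections")
nn.out.link(xoutDet.input)

if args.display:
    # The frame stream exists only when a human asks to watch
    xoutRgb = pipeline.create(dai.node.XLinkOut)
    xoutRgb.setStreamName("rgb")
    nn.passthrough.link(xoutRgb.input)

with dai.Device(pipeline) as device:
    qDet = device.getOutputQueue("detections", maxSize=4, blocking=False)
    qRgb = device.getOutputQueue("rgb", maxSize=4, blocking=False) if args.display else None
    while True:
        for det in qDet.get().detections:
            c = det.spatialCoordinates
            print(f"label={det.label} x={c.x:.0f} y={c.y:.0f} "
                  f"z={c.z:.0f} mm conf={det.confidence:.2f}")
        if qRgb is not None:
            msg = qRgb.tryGet()  # if not None: cv2.imshow("rgb", msg.getCvFrame())
```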

      The optional-display version gives my robot the best performance while still letting humans see through the robot's eyes, but only when asked.

      Full Report: https://forum.dexterindustries.com/t/analysis-oak-d-lite-sensor-on-gopigo3/8759?u=cyclicalobsessive

        Hello cycob, thanks for sharing the results! And I agree with you; we should add an example like yours where you can optionally disable streaming frames (for low-power/less-powerful host computers). Just added it to my todo list.
        Thanks again, Erik