Are there any examples (Python preferred) of getting non-image-based summary results like:
- Smallest depth value and 2d position in image
- List of recognized objects [label, x, y, z, confidence]
- Heading and Position relative to a lane with lane center curvature
- "Follow behind walking human data" [x,y,z of feet/center of human, confidence]
- Obstacle crossing a line in successive images
Looking over the depthai_experiments repository, the social distance example seems like a good starting point for "make the robot follow behind a walking human", and maybe I can figure out how to make the video output optional. As a first step, I can see what the performance is when throwing away the frames.
I'm still hoping someone has an example of throwing away the frames on the device itself, rather than clogging the USB with unwanted video frames.