Sorry about the delay. Let me pull in Erik and/or Steven to help on this.
Just got the answer internally:
I think the config is missing this:
https://github.com/luxonis/depthai/blob/5461419/depthai_helpers/config_manager.py#L256
https://discord.com/channels/790680891252932659/791018949143429121/802360109275676682
So this is the flag that tells DepthAI to calculate the depth for the bounding box.
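For reference, that flag lives in the config dict that gets passed when the pipeline is created. A rough sketch only (the blob paths are placeholders, and the exact call that takes the config depends on your depthai version):

config = {
    'streams': ['metaout', 'previewout'],
    'ai': {
        'blob_file': '/path/to/model.blob',
        'blob_file_config': '/path/to/model.json',
        'calc_dist_to_bb': True,  # tell DepthAI to compute depth for each bounding box
    },
}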
Thanks,
Brandon
That did the trick!
I actually circled around that line "calc_dist_to_bb", but I thought it was calling a function which I never found.
But it was just a simple True or False flag.. damn it
Thank you Brandon!
Some thoughts:
I guess a lot of people will use these cameras on embedded devices/robots etc., which really don't care about the actual video stream (like me). How about producing examples that output the information to the console? Or create a pipeline which is easy to hook into to get the information. Like I'm doing now: I want the camera to alert me about things it sees, with information about depth, where the object is in the camera's field of view, etc., so my robot can take actions.
Another thing I want to ask you about.
We have this depth view where the image changes color based on the distance to the camera; how can I get that info as text/values to the console? One idea I have is to use the camera as a "radar", to alert me when something is too close to the robot. I don't care what the object is, just that something is really close -> STOP.
Thanks again!
//Carl
Hello Carl,
the video stream is really helpful when we are prototyping/developing a solution and want to quickly see, for example, what in the video got classified as an [object], to conclude whether our NN is working as expected.
Many examples actually provide the NN result as well (for example age/gender, classification, OCR, people counter, etc.). The result is usually just not printed to the console but rather shown on the actual video stream (for easier development).
WRT your question, the video stream of the depth is essentially just an array of distances from the DepthAI, so you could easily access it and check: if (an area of) the distances in front is too close, stop.
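A minimal sketch of that idea, assuming you already have the depth frame as a numpy array with values in mm (how you grab the frame from the stream depends on your setup):

import numpy as np

STOP_DISTANCE_MM = 500  # assumed threshold, tune for your robot

def too_close(depth_frame: np.ndarray) -> bool:
    h, w = depth_frame.shape
    # look only at a central region of interest in front of the robot
    roi = depth_frame[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    valid = roi[roi > 0]  # 0 usually means "no depth measured"
    if valid.size == 0:
        return False
    # stop if a meaningful fraction of the ROI is closer than the threshold
    return np.percentile(valid, 10) < STOP_DISTANCE_MM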
Thanks, Erik
Hi Erik,
Yes, I totally agree with you. The video stream is very useful for debugging/prototyping. I was just asking for some extra examples on how to tap the information as text/values for easy integration. I actually have a working camera on a robot now, avoiding obstacles.
However, I find the depth information to be very inaccurate most of the time. Is there anything that can be done to raise the quality? The robot is using the code above in this thread.
Also, how can I get that depth information from the depth map as arrays? Is there an example somewhere, or can you point me in the right direction?
Your help is much appreciated! Keep up the good work!
Thanks,
Carl
Hello Carl,
you could check, for example, this line: if you add print(age) or print(gender_str), it will print the age or gender to the console.
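Illustration only (the variable names follow the age/gender demo; adapt to the exact script):

# right where the demo draws the result on the frame, also dump it to the console
print(f"age: {age}, gender: {gender_str}")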
For the depth noise, you could check out filtering; here's a great WLS filter example. Hopefully this will help in your situation.
In the above example (WLS filter), try adding print(type(filtered_disp)) to main.py after line 120 and you will see that filtered_disp is of type numpy.ndarray.
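Since it's a plain numpy array, you can slice and inspect it directly. A rough illustration (names follow the WLS example; converting disparity to an actual distance depends on your camera's baseline/focal length):

import numpy as np

print(type(filtered_disp), filtered_disp.shape, filtered_disp.dtype)

# e.g. look at how close things are in the centre of the frame
h, w = filtered_disp.shape[:2]
center = filtered_disp[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
print("median disparity in centre ROI:", np.median(center[center > 0]))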
Thanks, Erik
Thank you Erik,
I will take a look at this as soon as I can; compiling "opencv-contrib-python" as we speak. It takes forever on an RPi.
First real outdoor test with my test robot:
OneDrive link, about 3 min long:
https://1drv.ms/v/s!Apv2S-u3rGa7jcYNdeS0GjTQXSq8kQ
This is a custom PCB I designed which runs on an ARM MCU, with an RPi that snaps onto the board.
In this video I'm using the disparity stream, which detects objects in three ROIs: left, front, and right.
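Roughly, the idea is to split the disparity frame into three vertical slices and check each one. A simplified sketch (the threshold and frame handling here are placeholders, not the exact robot code):

import numpy as np

DISPARITY_STOP_THRESHOLD = 60  # assumed value; higher disparity == closer object

def check_rois(disparity_frame: np.ndarray) -> dict:
    h, w = disparity_frame.shape[:2]
    rois = {
        "left":  disparity_frame[:, : w // 3],
        "front": disparity_frame[:, w // 3: 2 * w // 3],
        "right": disparity_frame[:, 2 * w // 3:],
    }
    # report which regions contain something close to the robot
    return {name: bool(np.max(roi) > DISPARITY_STOP_THRESHOLD) for name, roi in rois.items()}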
(Sorry about the Swedish words in the video, I presented this to a group of R&D people.)
That's awesome @Selective! Thanks for sharing! Do you mind if I put that on LinkedIn/Twitter?
Thanks again,
Brandon
Thanks!
Go for it
Apologies for resurrecting this thread, but: a) I only started working with Python a few days ago and haven't found a solution myself; b) searching this forum and via Google only turned up this thread.
I have been trying to write detection data to a CSV file, first by modifying the demo code, then using the code by @Selective, above. My version is here.
The error shown is:
Traceback (most recent call last):
File "ToCSV.py", line 37, in <module>
DepthAI().run()
File "ToCSV.py", line 29, in run
nnet_packets, data_packets = self.p.get_available_nnet_and_data_packets()
AttributeError: 'DepthAI' object has no attribute 'p'
This means nothing to me (it does appear to be defined). Would it be possible to give a newbie a prod in the right direction?
Thank you,
Hugh
Hello @HughN ,
your code is actually for gen1 of our library. We now have gen2 (docs here), which is much more flexible and powerful. Sorry about the change, but your script won't work with the latest depthai library.
I would suggest starting from e.g. this example code and adding CSV saving yourself (after line 139, where you get spatial detections).
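If it helps, a sketch of what that CSV logging could look like (assuming detections is the list of spatial detections from that example and labelMap maps class ids to names):

import csv

with open("detections.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for detection in detections:
        writer.writerow([
            labelMap[detection.label],
            detection.confidence,
            detection.spatialCoordinates.x,  # mm, relative to the camera
            detection.spatialCoordinates.y,
            detection.spatialCoordinates.z,
        ])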
Thanks, Erik
Great, thanks for the feedback