I have a project that requires me to get depth measurements that range between 4-7m. Can the OAK-D produce useful depth measurements for that range, and what ways are there to only get measurements in that range?
Yes, that distance is definitely doable. And yes, on the host, depth values outside that range can be discarded. I'm not sure whether we support throwing away out-of-range depth on DepthAI itself or not.
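On the host side, that kind of band-pass is only a couple of lines of NumPy. Here's a minimal sketch, assuming you're reading the uint16 depth frames (in millimeters) from the StereoDepth node's depth output; the `depth_queue` name is just an example:

```python
import numpy as np

MIN_DEPTH_MM = 4000  # 4 m
MAX_DEPTH_MM = 7000  # 7 m

def filter_depth_range(depth_frame: np.ndarray) -> np.ndarray:
    """Discard (zero out) all depth values outside the 4-7 m band."""
    filtered = depth_frame.copy()
    filtered[(filtered < MIN_DEPTH_MM) | (filtered > MAX_DEPTH_MM)] = 0
    return filtered

# Example usage, assuming 'depth_queue' is the host-side output queue
# for the StereoDepth 'depth' stream:
# depth = depth_queue.get().getFrame()   # uint16 HxW array, millimeters
# depth_in_band = filter_depth_range(depth)
```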
Here's an example of even longer depth range:
Keep in mind we plotted the bird's-eye view wrong: we plotted the world-y coordinate on the left/right axis when we meant to plot world-x, so the left/right axis of the bird's-eye view is entirely wrong. Just a plotting error we didn't notice.
So the depth seems to work OK out to about 25 to 30 meters.
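For reference, the corrected plot would put world-x on the horizontal axis and world-z (depth) on the vertical one. A rough sketch of that mapping, with made-up points just to show which axes go where:

```python
import numpy as np
import matplotlib.pyplot as plt

# Dummy (N, 3) world-space points: x = left/right, y = up/down, z = forward.
points = np.random.uniform([-5.0, -1.0, 4.0], [5.0, 1.0, 30.0], size=(100, 3))

# Bird's-eye view: world-x on the horizontal axis (not world-y!),
# world-z (depth) on the vertical axis.
plt.scatter(points[:, 0], points[:, 2], s=4)
plt.xlabel("world-x (m), left/right")
plt.ylabel("world-z (m), forward/depth")
plt.title("Bird's-eye view")
plt.show()
```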
Thanks for the quick reply, Brandon. For the project, we also intend to stream the depth data and display it on a JavaScript website (React). We are already streaming video data from a thermal camera with Pion, a WebRTC framework written in Go.
I saw the Remote Access thread and wanted to ask if there are any WebRTC examples sending the depth data from Python or C++ and receiving the data in JavaScript? Or are there any other technologies or methods you can recommend for achieving this task?
@Brandon the examples from @msvlavascu have many features, so it's pretty hard to isolate the parts needed just for transmitting the depth data. Is the WebRTC example from @Luxonis-Lukasz open source? If so, I could probably update it to Gen 2 myself and create a pull request.
Brandon, I had a results aggregation server that collected data from multiple devices in Gen1, but I haven't used WebRTC with DepthAI yet.
I have, however, used it in another project, where I accessed the webcam from JavaScript, sent the video to the server, ran inference on the frames, and sent the results back to the JS client.
I think we should have a WebRTC experiment alongside the MJPEG one, so I will implement it soon. In the meantime, Gi_T, I can give you access to the WebRTC project I did, if you'd like to give it a try yourself (with some explanation of how ICE candidates and offers work, if needed).
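For a rough idea of what the Python side could look like, here's a sketch using the aiortc library (this is not the project mentioned above, just an illustration): wrap the depth output in a video track, colorize it, and hand that track to a peer connection. The `depth_queue` name is an assumption for whatever DepthAI output queue you use:

```python
# Sketch only, not the Luxonis example: serve colorized depth frames as an
# aiortc video track so a browser client can consume them over WebRTC.
import cv2
import numpy as np
from aiortc import VideoStreamTrack
from av import VideoFrame

class DepthVideoTrack(VideoStreamTrack):
    def __init__(self, depth_queue):
        super().__init__()
        self.depth_queue = depth_queue  # assumed DepthAI output queue

    async def recv(self):
        pts, time_base = await self.next_timestamp()
        # Blocking get() is fine for a sketch; a real app would read
        # frames without blocking the event loop.
        depth = self.depth_queue.get().getFrame()  # uint16, millimeters
        # Normalize to 8-bit and colorize so it encodes as regular video.
        depth_8u = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        colored = cv2.applyColorMap(depth_8u, cv2.COLORMAP_JET)
        frame = VideoFrame.from_ndarray(colored, format="bgr24")
        frame.pts, frame.time_base = pts, time_base
        return frame
```

On the JS side this shows up as a normal video track, so the React client can attach it to a `<video>` element; if you need raw depth values rather than a visualization, a WebRTC data channel would be the more natural fit.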
@Brandon I mounted my OAK-D on my car today and tried to capture depth data using the depth_preview.py script and OBS, but my results weren't nearly as good as those shown in the video (I couldn't detect objects clearly even when they were really close). What is the difference between the depth_preview.py script and the script shown in the video?
Hello Gi_T,
so this code experiment was used when creating that video: https://github.com/luxonis/depthai-experiments/tree/gen2-replay/gen2-replay (with subpixel mode enabled), in combination with the vehicle-detection-adas-0002 NN model. Also, the stereo confidence threshold is set to 240 instead of the 200 used in the depth_preview script. This actually reminds me that we should change all the example scripts to a confidence threshold of 240.
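In Gen2 terms, the relevant settings look roughly like this (a minimal sketch of a stereo pipeline, not the full experiment code):

```python
import depthai as dai

pipeline = dai.Pipeline()

monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.initialConfig.setConfidenceThreshold(240)  # depth_preview.py uses 200
stereo.setSubpixel(True)  # subpixel mode, as used for the video

monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

xoutDepth = pipeline.create(dai.node.XLinkOut)
xoutDepth.setStreamName("depth")
stereo.depth.link(xoutDepth.input)
```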
Thanks, Erik
I just tried it out, but I'm getting the following error messages:
TypeError: cannot pickle 'depthai.Device' object
and
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:

    if __name__ == '__main__':
        freeze_support()
        ...

The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Gi_T, I'm not sure what you are doing with pickle or freeze_support, but I'm sure that's not in the original scripts (replay.py or record.py). Could you elaborate on this error, please?
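In case it helps: that RuntimeError is Python's generic multiprocessing spawn-mode error, and the usual fix is the guard the message describes. A generic sketch of the idiom (not taken from replay.py), showing why the device shouldn't be handed to a child process:

```python
import multiprocessing as mp

def worker(frame_bytes):
    # Do the per-process work here. Note that depthai.Device objects
    # cannot be pickled, so keep the device in one process and pass
    # plain data (e.g. bytes or numpy arrays) to workers instead.
    print(f"worker got {len(frame_bytes)} bytes")

if __name__ == '__main__':
    # On platforms that use 'spawn' (Windows, macOS), the module is
    # re-imported in each child, so Process(...) must live under this guard.
    p = mp.Process(target=worker, args=(b"\x00" * 10,))
    p.start()
    p.join()
```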
Thanks, Erik