Hello ramkunchur, great application! Let me jump straight to the answers:
- Yes, you can use video, stream the frames to the device, and perform inference on them. We have several code examples: stereo depth from host, mobilenet from video, or running the depthai_demo with a video source:

  ```
  python3.6 depthai_demo.py -cnn vehicle-detection-adas-0002 -vid https://www.youtube.com/watch?v=Y1jTEyb3wiI
  ```

  If you were using an OAK-D (or any other device that has stereo cameras on it), you could use depthai recordings, which save mono frames and later send the mono+color frames back to the device to replay the recording.
- You can rotate it using
`camRgb.setImageOrientation(dai.CameraImageOrientation.AUTO)`. Besides `AUTO`, you can also choose between `NORMAL`, `HORIZONTAL_MIRROR`, `VERTICAL_FLIP`, and `ROTATE_180_DEG`.
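To illustrate what the non-`AUTO` orientations do to a frame, here is a minimal pure-Python sketch. It is independent of the depthai API; a small nested list stands in for an image:

```python
# Stand-in "frame": 2 rows x 3 columns of pixel labels.
frame = [["a", "b", "c"],
         ["d", "e", "f"]]

def horizontal_mirror(f):
    # Flip left/right: each row is reversed.
    return [row[::-1] for row in f]

def vertical_flip(f):
    # Flip top/bottom: the row order is reversed.
    return f[::-1]

def rotate_180(f):
    # A 180-degree rotation is a horizontal mirror plus a vertical flip.
    return horizontal_mirror(vertical_flip(f))

print(horizontal_mirror(frame))  # [['c', 'b', 'a'], ['f', 'e', 'd']]
print(vertical_flip(frame))      # [['d', 'e', 'f'], ['a', 'b', 'c']]
print(rotate_180(frame))         # [['f', 'e', 'd'], ['c', 'b', 'a']]
```

On the device the sensor does this on-chip, so there is no host-side cost, but the semantics are the same.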
Here's an awesome collision detection demo that Noroc created:
This demo was run on an OAK-D (so it has spatial coordinates as well), and I believe that stereo cameras -> spatial locations would be beneficial for such an application - for example, deriving speed/acceleration from differences in spatial locations over time. I hope this helps!
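As a sketch of that speed/acceleration idea (the coordinates and frame rate below are hypothetical, not taken from the demo): given an object's spatial (X, Y, Z) location in consecutive frames, speed is the displacement over the frame interval, and acceleration is the change in speed over that same interval:

```python
import math

def speed(p1, p2, dt):
    """Speed in m/s from two (x, y, z) positions in meters, dt seconds apart."""
    return math.dist(p1, p2) / dt

# Hypothetical spatial locations of one tracked object over three frames at 30 FPS.
dt = 1 / 30
p0 = (0.00, 0.0, 5.00)
p1 = (0.10, 0.0, 4.90)
p2 = (0.21, 0.0, 4.79)

v1 = speed(p0, p1, dt)      # speed between frame 0 and frame 1
v2 = speed(p1, p2, dt)      # speed between frame 1 and frame 2
accel = (v2 - v1) / dt      # change of speed over one frame interval

print(f"v1={v1:.2f} m/s, v2={v2:.2f} m/s, accel={accel:.2f} m/s^2")
```

In practice you would smooth these estimates over several frames, since per-frame depth noise gets amplified by the division by a small dt.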
Thanks, Erik