Hey,
I am trying to run our YOLOv5 model on the OAK-D and receive the spatial coordinates (x, y, and z) of the detected objects. I have been able to get the following working: https://github.com/luxonis/depthai-experiments/tree/master/gen2-yolo/device-decoding, but it only includes object detection, not spatial coordinates. I found https://docs.luxonis.com/projects/api/en/latest/components/nodes/yolo_spatial_detection_network/ online, but I am not sure how to combine it with the device-decoding example to get it working. This is my first time using the OAK-D, so any help would be appreciated!
Thanks,
Arjun
YOLOv5 Object Detection with Spatial Coordinates
Hi ArjunGoray,
Have you already tried the main.py demo in the folder you linked? It should actually show spatial coordinates (line here). Is that not the case? Could you share the results you are getting?
Thanks, Erik
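For reference, the YoloSpatialDetectionNetwork node from the docs linked above can be wired up roughly like this. This is a minimal sketch, not the exact main.py code: the blob path, class count, anchors, and anchor masks below are placeholders and must match the values in your model's .json config. It needs a connected OAK-D to run, so treat it as a starting point only.

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera provides the NN input frames
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(416, 416)  # must match the model's input size
camRgb.setInterleaved(False)

# Stereo pair provides the depth map used for spatial coordinates
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)  # align depth to the RGB frame
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

# YOLO detection network that also computes spatial (x, y, z) per detection
spatialNN = pipeline.create(dai.node.YoloSpatialDetectionNetwork)
spatialNN.setBlobPath("yolov5.blob")  # placeholder: your compiled blob
spatialNN.setConfidenceThreshold(0.5)
spatialNN.setNumClasses(80)           # placeholder: take from your .json
spatialNN.setCoordinateSize(4)
spatialNN.setAnchors([10, 13, 16, 30, 33, 23, 30, 61, 62, 45,
                      59, 119, 116, 90, 156, 198, 373, 326])  # placeholder
spatialNN.setAnchorMasks({"side52": [0, 1, 2],
                          "side26": [3, 4, 5],
                          "side13": [6, 7, 8]})               # placeholder
spatialNN.setIouThreshold(0.5)
spatialNN.setDepthLowerThreshold(100)   # mm
spatialNN.setDepthUpperThreshold(5000)  # mm

camRgb.preview.link(spatialNN.input)
stereo.depth.link(spatialNN.inputDepth)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
spatialNN.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections")
    while True:
        for det in q.get().detections:
            # spatialCoordinates are in millimeters, relative to the camera
            print(det.label, det.spatialCoordinates.x,
                  det.spatialCoordinates.y, det.spatialCoordinates.z)
```

The key difference from the plain device-decoding demo is the StereoDepth node linked into `spatialNN.inputDepth`: the device fuses each 2D bounding box with the depth map on-device and returns x/y/z per detection.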
Thanks for the reply, I really appreciate it! I had originally used the main_api.py. For the main.py script, should I just put my blob in the model folder and then customize the yolov5.json file? I ask because there is no argument to pass in the blob (the main_api.py had an argument for the blob).
Thanks,
Arjun
Hi ArjunGoray,
You can just extract the zip (downloaded from tools.luxonis.com) into a folder and point main.py's -conf argument to the .json. The script will take the xml/bin specified in the json and compile the model to a blob. See the tutorial here.
Thanks, Erik
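To make the "point -conf to the .json" step concrete, here is a small hedged sketch of reading the decoding parameters out of a tools.luxonis.com-style config. The exact key names below (`nn_config`, `NN_specific_metadata`, `mappings`) are assumptions based on the JSON the export tool ships alongside the model; check your own yolov5.json.

```python
def parse_nn_config(config: dict) -> dict:
    """Pull YOLO decoding parameters out of a tools.luxonis.com-style JSON.
    Key names are assumptions; verify against your exported yolov5.json."""
    meta = config["nn_config"]["NN_specific_metadata"]
    width, height = map(int, config["nn_config"]["input_size"].split("x"))
    return {
        "classes": meta["classes"],
        "coordinates": meta["coordinates"],
        "anchors": meta["anchors"],
        "anchor_masks": meta["anchor_masks"],
        "iou_threshold": meta["iou_threshold"],
        "confidence_threshold": meta["confidence_threshold"],
        "input_size": (width, height),
        "labels": config["mappings"]["labels"],
    }


# Inline example standing in for json.load(open("yolov5.json"))
example = {
    "nn_config": {
        "NN_family": "YOLO",
        "input_size": "416x416",
        "NN_specific_metadata": {
            "classes": 2,
            "coordinates": 4,
            "anchors": [10, 13, 16, 30, 33, 23],
            "anchor_masks": {"side52": [0, 1, 2]},
            "iou_threshold": 0.5,
            "confidence_threshold": 0.5,
        },
    },
    "mappings": {"labels": ["cat", "dog"]},
}

params = parse_nn_config(example)
print(params["classes"], params["input_size"], params["labels"])
```

These are exactly the values that get fed into the detection network node (number of classes, anchors, masks, thresholds), which is why editing the .json is enough to customize the demo for your own model.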
I am trying to use a pre-trained model from the Luxonis model zoo to achieve the same objective as outlined above. The pre-trained model is in bin/xml (OpenVINO IR) format, whereas tools.luxonis.com only accepts a .pt file. How should I proceed here?
Hi DhruvMahajan , Which pretrained model would you like to use?
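One common route for an xml/bin pair is the blobconverter package, which sends the OpenVINO IR to Luxonis' online BlobConverter service and returns a .blob. This is a sketch under those assumptions (package installed via `pip install blobconverter`, internet access available, placeholder file paths); it is not runnable offline.

```python
import blobconverter

# Placeholder paths: the .xml/.bin pair downloaded from the model zoo
blob_path = blobconverter.from_openvino(
    xml="model.xml",
    bin="model.bin",
    data_type="FP16",  # OAK-D runs FP16 models
    shaves=6,          # number of SHAVE cores to compile for
)
print(blob_path)  # path to the cached, compiled .blob
```

The resulting blob can then be passed to the detection network node the same way as a blob compiled from a .pt export.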