Hi Erik,
Currently, YOLOv8 on the OAK-D supports only detections. When will you be able to update the model so it also fetches the coordinates (x, y, z)?
Regarding YOLOv8 spatial detection
Hi susantini,
So the SpatialDetectionNetwork nodes actually combine depth information with an object detection model to get the 3D coordinates of each detection (see the 3D object localization docs). Using YoloSpatialDetectionNetwork should therefore already do that for you. Thoughts?
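For reference, here is a minimal sketch of what such a pipeline could look like. The blob path, input size, and class count are placeholders and must match your exported model, so treat this as an illustration rather than a drop-in script:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera provides the frames the network runs on
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(416, 416)  # must match the model's input size
camRgb.setInterleaved(False)

# Stereo pair produces the depth map used for spatial coordinates
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

# Yolo spatial detection network: detections + depth -> 3D coordinates
nn = pipeline.create(dai.node.YoloSpatialDetectionNetwork)
nn.setBlobPath("yolov8n.blob")  # placeholder path to your converted blob
nn.setConfidenceThreshold(0.5)
nn.setNumClasses(80)            # must match your model's json config
nn.setCoordinateSize(4)
nn.setIouThreshold(0.5)
nn.setDepthLowerThreshold(100)
nn.setDepthUpperThreshold(5000)

xoutNN = pipeline.create(dai.node.XLinkOut)
xoutNN.setStreamName("detections")

monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)
camRgb.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)
nn.out.link(xoutNN.input)

with dai.Device(pipeline) as device:
    qDet = device.getOutputQueue(name="detections", maxSize=4, blocking=False)
    while True:
        for det in qDet.get().detections:
            # Each detection carries 3D coordinates in millimetres
            print(det.label, det.spatialCoordinates.x,
                  det.spatialCoordinates.y, det.spatialCoordinates.z)
```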
Thanks, Erik
erik I tried using YoloSpatialDetectionNetwork, but it's not working. The frame itself is not opening. Can you confirm whether a YOLOv8 model on OAK gives just detections, or detections with spatial coordinates?
erik Good afternoon, I have an idea for collecting garbage. I ordered an OAK-D Pro camera to determine the coordinates of the garbage, and I'm currently waiting for it to be delivered. In the meantime I trained YOLOv7-tiny and YOLOv8n models, and also read the instructions at https://github.com/luxonis/depthai-ml-training/blob/master/colab-notebooks/YoloV8_training.ipynb.
I would like to clarify whether I can now convert the weights of the YOLOv8 model to the .blob format and specify the path as yoloSpatial.setBlobPath("/full/path/to/your/yolov8n.blob"). Will this allow me to determine the spatial coordinates? If not, please tell me which models this function supports. Thank you.
P.S. I couldn't find the answer on Google.
Hi SerdjFity,
Yes, from the .pt weights you can use tools.luxonis.com to get the zip (with the .bin, .xml, .json, and .blob files), and you can then use the device-decoding demo to get the spatial coordinates of your objects:
https://github.com/luxonis/depthai-experiments/tree/master/gen2-yolo/device-decoding#yolo-with-depthai-sdk-tutorial
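As a rough sketch (file names are placeholders, not tested against your model), running the exported config from that zip with the DepthAI SDK and spatial coordinates enabled could look something like this:

```python
from depthai_sdk import OakCamera

with OakCamera() as oak:
    color = oak.create_camera('color')
    # 'yolov8n.json' is a placeholder for the config file from the tools.luxonis.com zip;
    # spatial=True adds stereo depth so each detection also carries X/Y/Z coordinates.
    nn = oak.create_nn('yolov8n.json', color, nn_type='yolo', spatial=True)
    oak.visualize(nn, fps=True)
    oak.start(blocking=True)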
I hope this helps!
Thanks, Erik