• Best option to train object detection

What is the best option (with good FPS performance) to train object detection from a dataset, get the blob and run it on oak-d w?

I have checked https://github.com/luxonis/depthai-ml-training/tree/master/colab-notebooks, however the notebooks for SSD MobileNet V2 are deprecated and no longer seem to work in Colab.

Another question: is it possible to convert a Roboflow model to a blob? (There is an example of running Roboflow models with the depthai_sdk, but that is not a solution for me!)

    This is a good starting point:

    For the environment I use Anaconda and follow the PyTorch install instructions:

    Check your OS, Conda, and CUDA versions (if you don't have CUDA, I would get a GPU before you bother training), and run the install command the selector gives you.
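    As a concrete sketch (the environment name is arbitrary, and the exact PyTorch install command should be copied from the selector on pytorch.org for your OS/CUDA combination):

    ```shell
    # Create and activate a fresh environment for training
    conda create -n yolov5 python=3.10 -y
    conda activate yolov5

    # Install PyTorch -- this is one example; use the exact command
    # pytorch.org generates for your CUDA version
    pip install torch torchvision
    ```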

    Then do:

    git clone https://github.com/ultralytics/yolov5.git

    followed by:

    cd yolov5
    pip install -r requirements.txt 

    Once you have a dataset, put the images and labels in the directory structure YOLOv5 expects, modify data.yaml to point to those directories, and start training:

    python train.py --img 256 --batch -1 --epochs 500 --data data.yaml --weights yolov5n.pt --save-period 100 --patience 100 --name yolov5-nn --cache --multi-scale
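    For reference, data.yaml is just a small YAML file; a minimal sketch (the paths and class names here are placeholders for your own dataset) looks like:

    ```yaml
    # data.yaml -- paths are typically relative to the yolov5 repo root (placeholders)
    train: ../dataset/images/train
    val: ../dataset/images/val

    # number of classes and their names (example values)
    nc: 2
    names: ['cat', 'dog']
    ```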

    Thanks, guys!

    I am using the OAK-D Wide with a 150° FOV, with the stereo and color frames aligned. With the mobilenet-ssd model I achieve about 20 to 25 FPS.

    I have tried to deploy a YOLOv5 model, but it only reaches about 7 FPS. I will try YOLOv6 or YOLOv8 to see if the FPS increases.
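    To go from trained YOLOv5 weights to a .blob for the OAK, one route (a sketch; flag values such as the image size and shave count are assumptions that depend on your pipeline) is to export to ONNX with YOLOv5's export script and then convert with Luxonis's blobconverter package:

    ```shell
    # Export trained weights to ONNX (run from inside the yolov5 repo;
    # the runs/train/yolov5-nn path matches the --name used in train.py)
    python export.py --weights runs/train/yolov5-nn/weights/best.pt --include onnx --imgsz 320

    # Convert the ONNX model to a MyriadX blob (shave count is an example)
    pip install blobconverter
    python -m blobconverter --onnx-model runs/train/yolov5-nn/weights/best.onnx --shaves 6
    ```

    For YOLO models specifically, Luxonis's online converter at tools.luxonis.com can also handle the export and blob conversion in one step.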

      LeonelSimo

      I find YOLOv5 to be the fastest. I used yolov5n as the initial weights, set the largest possible batch size, and used an image size of 256. I consistently get 20-23 FPS doing two-stage detection (running two models concurrently) on my OAK-1 Lite, with inference entirely on-device, so it is totally doable.


        Hi vital, I can confirm that your recommendations work perfectly. In my case I will use an image size of 320, which seems to be a good compromise between accuracy and FPS.