Hello,

I'm looking for a better solution to do object detection/tracking on a custom dataset.

I'm currently running YOLOv8 nano on the OAK camera, and I get about 13-14 FPS. The camera is connected to a Raspberry Pi 4 Model B.
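
For reference, here is roughly the pipeline I'm running (a minimal sketch; the blob path, input size, and class count below are placeholders for my custom model):

```python
import time
import depthai as dai

pipeline = dai.Pipeline()

# Color camera feeding the on-device detector
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)   # must match the model's input size
cam.setInterleaved(False)
cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
cam.setFps(30)

# YOLO decoding runs on the camera (RVC2), not on the Pi
nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath("yolov8n_custom.blob")  # placeholder path to my converted blob
nn.setNumClasses(3)                    # placeholder class count
nn.setCoordinateSize(4)
nn.setIouThreshold(0.5)
nn.setConfidenceThreshold(0.5)
nn.input.setBlocking(False)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("nn")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("nn", maxSize=4, blocking=False)
    frames, start = 0, time.monotonic()
    while True:
        detections = q.get().detections  # detections already decoded on-device
        frames += 1
        if frames % 100 == 0:
            print(f"~{frames / (time.monotonic() - start):.1f} FPS")
```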

I checked other examples from the depthai-python/examples repo, mainly the MobileNet SSD and Tiny YOLOv4 ones, and they run at 30+ FPS. However, both models seem deprecated/old, and I couldn't find a way to convert the weights file to the required .blob format (https://tools.luxonis.com/ seems to support only YOLOv5 and newer).
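
(For what it's worth, the stock pretrained blobs those examples download can still be fetched through the blobconverter package; a small sketch below, using the OpenVINO model zoo name the MobileNet example uses. My problem is converting my own custom-trained weights, not getting the pretrained model.)

```python
import blobconverter

# Fetch the pretrained MobileNet SSD blob from the model zoo,
# compiled for 6 SHAVE cores (stock model, not custom weights)
mobilenet_blob = blobconverter.from_zoo(name="mobilenet-ssd", shaves=6)
print(mobilenet_blob)  # local path to the cached .blob
```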

Are there other supported object detection models?
Is there any (up-to-date) resource for getting the correct weights file for the older models? (Both Tiny YOLO and MobileNet used TF 1.x; I train everything in Colab, and Colab stopped supporting TF 1.x.)
Lastly (less preferred), would changing the Pi to something else, such as a Jetson, help in any way?

DepthAI · Machine Learning · #object-detection #fps

    Hi leeor
    The only way to use TF 1.x is to run it locally in a notebook (there might be some hackish way to do it in Colab as well). YOLOv6 is the fastest variant on the RVC2 VPU, so I recommend training your models for v6.
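
    The export from https://tools.luxonis.com/ gives you a .blob plus a .json with the decoding metadata; roughly, you feed that into YoloDetectionNetwork as below (a sketch; the exact JSON key names may differ between tool versions, so treat them as assumptions):

    ```python
    import json
    import depthai as dai

    # Config .json exported next to the .blob by tools.luxonis.com
    # (key names are assumptions based on the usual export format)
    with open("yolov6n.json") as f:
        meta = json.load(f)["nn_config"]["NN_specific_metadata"]

    pipeline = dai.Pipeline()
    nn = pipeline.create(dai.node.YoloDetectionNetwork)
    nn.setBlobPath("yolov6n.blob")
    nn.setNumClasses(meta["classes"])
    nn.setCoordinateSize(meta["coordinates"])
    nn.setAnchors(meta["anchors"])            # empty for anchor-free models
    nn.setAnchorMasks(meta["anchor_masks"])
    nn.setIouThreshold(meta["iou_threshold"])
    nn.setConfidenceThreshold(meta["confidence_threshold"])
    ```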

    leeor Lastly (less preferred), would changing the Pi to something else, such as a Jetson, help in any way?

    All AI and image processing is performed on the RVC2, so changing the host will offer no significant performance improvement, unless you were to run the models directly on Jetson HW.

    Thanks,
    Jaka

      jakaskerl Thank you for the feedback.
      Is there a way that you know of to train a model for this edge device (such as Tiny YOLO, MobileNet, or anything else)?

      I will try v6 today, but if you are aware of other models that can be trained on a custom dataset and used on the OAK, I'd be happy to know which ones!

        @leeor make sure you are using a high-speed USB 3.0 cable; this is something I had overlooked, which resulted in lower FPS. Another thing to keep in mind is the number of SHAVE cores when converting the model to .blob: 6 SHAVEs gave me 15 FPS while 5 SHAVEs gave me 25 FPS when running inference. Hope this helps!
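
        The SHAVE count is just a parameter at conversion time, e.g. with the blobconverter package (a sketch; "model.onnx" is a placeholder for your exported detector):

        ```python
        import blobconverter

        # Compile the same exported model with two different SHAVE counts and compare FPS
        blob_6 = blobconverter.from_onnx(model="model.onnx", data_type="FP16", shaves=6)
        blob_5 = blobconverter.from_onnx(model="model.onnx", data_type="FP16", shaves=5)
        ```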

          Priyam26 Hi!
          I appreciate the feedback!

          Currently, it is using a PoE cable (which, I learned last night, is not ideal). However, I use an OAK-1 PoE, so I'm not sure there is any other option for me.

          Also, it's using 6 SHAVEs; I'll look into that as well, thanks. I heard from a friend that when he changed the SHAVE value from 6 he got an error, but I'll try.

          Hi @leeor
          A 1 Gbps cable (CAT5E) shouldn't be a problem unless you are sending big frames (or many of them) back to the host.

          The depthai library will warn you if using a smaller SHAVE count would benefit inference speed. That is mainly so the detection can run in two inference threads instead of one.
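
          In code that corresponds to something like this on the detection node (a sketch; the blob path is a placeholder, and whether two threads actually help depends on the model):

          ```python
          import depthai as dai

          pipeline = dai.Pipeline()
          nn = pipeline.create(dai.node.YoloDetectionNetwork)
          nn.setBlobPath("model_5shave.blob")  # blob compiled for fewer SHAVEs (placeholder)
          nn.setNumInferenceThreads(2)         # two parallel inference threads on the RVC2
          ```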

          Thanks,
          Jaka

          Thanks @jakaskerl and @Priyam26
          I tried 5 SHAVEs; it didn't impact the performance.

          Another question, if you happen to know.
          I have two cameras in two different places, but other than that the setups are pretty much identical. One is getting 14-15 FPS, the second 8-9. The network connection to the slower camera's host is also poor: when I scp a file there, it takes much longer than with the host of the 14-15 FPS camera. Running the bandwidth test, the camera with 15 FPS returned about 990, while the one with the bad connection returned only about 90.

          Is there any connection between these facts and the lower FPS? All the processing is done on the camera, so I'm not sure whether there is a connection or it's just a coincidence.
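
          In case it matters, this is roughly how I'd sanity-check host-side throughput (a rough sketch that just streams preview frames and counts bytes; not the official bandwidth test script):

          ```python
          import time
          import depthai as dai

          pipeline = dai.Pipeline()
          cam = pipeline.create(dai.node.ColorCamera)
          cam.setPreviewSize(640, 640)
          cam.setInterleaved(False)

          xout = pipeline.create(dai.node.XLinkOut)
          xout.setStreamName("preview")
          cam.preview.link(xout.input)

          with dai.Device(pipeline) as device:
              q = device.getOutputQueue("preview", maxSize=4, blocking=False)
              received, start = 0, time.monotonic()
              while time.monotonic() - start < 10:   # sample for ~10 seconds
                  frame = q.get().getCvFrame()       # BGR numpy array
                  received += frame.nbytes
              elapsed = time.monotonic() - start
              print(f"~{received * 8 / elapsed / 1e6:.0f} Mbps of frame data received")
          ```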

            Interesting, I'll check into it.
            Thanks!