I am wondering if there will be official support for YoloV9 on the tools.luxonis.com website. I was able to compile the ultralytics V9 weights using the V8 setting, but am wondering if there is "more official" support coming or if V9 is completely compatible with V8 settings.

I could not find anything regarding V9 support on the forum/docs/etc.

Thanks!

Hi @justincdavis,

that's a good question. Thank you for that! Yes, you are able to convert the YOLOv9 weights from Ultralytics using the YOLOv8 conversion option and then use the exported model on a camera without a problem (I verified it). We plan on improving this by updating the version detector and the docs to officially support the YOLOv9 from Ultralytics.

Best
Jan

    11 days later

    Hi @justincdavis,

    we've deployed an updated version of tools with the changes mentioned above; there's now a dedicated YOLOv9 export option in tools.

    Here's how it looks:

    Best,
    Jan

    7 days later

    Hi,

    Thank you for deploying the YOLOv9 export option!

    I exported Ultralytics YOLOv9t and applied it to the object tracking example (depthai-python\examples\ObjectTracker\object_tracker.py), but it failed to detect and track objects.

    There was no specific error; however, no bounding boxes or labels appeared on the output view captured by the OAK-D Pro W PoE camera.

    I would appreciate some help resolving this problem.

    Best,
    Oscar

    Hi @JungOscar,

    The reason the depthai-python\examples\ObjectTracker\object_tracker.py example wasn't working for you is that it was created for a MobileNet-SSD model, so it uses the MobileNetDetectionNetwork node from DepthAI; for YOLO models you need to use YoloDetectionNetwork instead.
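    The node distinction above can be sketched as a small lookup. This is an illustrative helper of my own (pick_detection_node is not part of DepthAI); only the two node class names, MobileNetDetectionNetwork and YoloDetectionNetwork, come from the DepthAI v2 API:

```python
# Illustrative helper (not part of DepthAI): map a model family to the
# DepthAI v2 detection-network node class it requires. Using the wrong
# node raises no error — it just yields no usable detections, which is
# exactly the "no bounding boxes, no labels" symptom described above.
DETECTION_NODES = {
    "mobilenet-ssd": "MobileNetDetectionNetwork",
    "yolo": "YoloDetectionNetwork",  # needed for YOLOv5/v6/v8/v9 exports
}

def pick_detection_node(model_family: str) -> str:
    """Return the DepthAI node class name for a given model family."""
    try:
        return DETECTION_NODES[model_family.lower()]
    except KeyError:
        raise ValueError(f"Unknown model family: {model_family!r}")
```

    Unlike the MobileNet node, the YOLO node also has to be configured with YOLO-specific metadata (class count, anchors, IoU threshold), which the .json file exported by tools carries.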

    We have another, better-suited experiment for your needs: gen2-deepsort-tracking (link), which uses the YOLOv6 model. If you place the yolov9t.bin, yolov9t.xml, and yolov9t.json files in the root folder of this app and replace line 32 of the main.py script
    yolo = oak.create_nn('yolov6nr3_coco_640x352', input=color)
    with this:
    yolo = oak.create_nn('./yolov9t.json', input=color, nn_type='yolo')
    then it will work.
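    The .json file passed to create_nn above is the sidecar that tools.luxonis.com exports next to the .bin/.xml files. A minimal sketch of inspecting it before wiring it into a pipeline follows; the key names ("nn_config", "NN_specific_metadata", "mappings") reflect the layout I have seen from the exporter and may differ between tool versions:

```python
import json
import os
import tempfile

# Sample config mimicking the .json sidecar exported by tools.luxonis.com
# (key names are an assumption; check the actual file from your export).
sample_config = {
    "nn_config": {
        "NN_family": "YOLO",
        "input_size": "640x640",
        "NN_specific_metadata": {
            "classes": 1,
            "iou_threshold": 0.5,
            "confidence_threshold": 0.5,
        },
    },
    "mappings": {"labels": ["fish"]},
}

def summarize(path: str) -> dict:
    """Load an exported model config and pull out the fields that matter
    when wiring the model into a DepthAI pipeline."""
    with open(path) as f:
        cfg = json.load(f)
    nn = cfg["nn_config"]
    width, height = (int(v) for v in nn["input_size"].split("x"))
    return {
        "family": nn["NN_family"],
        "input_width": width,
        "input_height": height,
        "classes": nn["NN_specific_metadata"]["classes"],
        "labels": cfg["mappings"]["labels"],
    }

# Round-trip through a temporary file to mimic reading a real export.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample_config, f)
    path = f.name
info = summarize(path)
os.unlink(path)
print(info)
```

    Checking input_size this way is a quick sanity check that the camera preview you feed the network matches the shape the model was exported with.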

    Kind regards,
    Jan

    Hi @"JanCuhel"

    Thank you for your support! 

    I tested the yolov9t model using the gen2-deepsort-tracking experimental app as instructed, and it worked successfully.

    Next, I tried using my custom YOLO-based model trained to detect “fish”. It seemed to work initially, but the app soon encountered the following error and stopped:

    [14442C101148F3D600] [59.19.225.203] [61.171] [ImageManip(8)] [error] Invalid configuration or input image - skipping frame

    [14442C101148F3D600] [59.19.225.203] [61.193] [ImageManip(8)] [error] Not possible to create warp params. Error: WARP_SWCH_ERR_CACHE_TO_SMALL

    This might not be related to an issue with exporting the NN. Any advice you could provide would be greatly appreciated.

    Best,

    Oscar

      @JungOscar, could you please share your YOLO fish detection model with us and provide more info about it, such as whether it is YOLOv9, whether you used our tools to export it, and, most importantly, what input image shape you exported the model with? We'll take a closer look and get back to you.

      Best,
      Jan

        JanCuhel

        Hi,

        I have sent my model via email (support@luxonis.com). It is attached as a zip file named 'custom_yolo8n.zip'.

        This model was initially built before the export tool supported YOLOv9, so I rebuilt it using YOLOv8 and exported it with your tool. The input image size is 640x640. You can find details about the dataset I used at this link: https://universe.roboflow.com/megaplanoscarjung/mackerel-ojdbq.

        Thank you for your support and I would really appreciate it if you could take a closer look!

          JungOscar

          Thank you. I'll take a look and let you know as soon as I find something!

          Kind regards,
          Jan

          JungOscar
          I don't think there is a problem with the model.

          JungOscar
          [14442C101148F3D600] [59.19.225.203] [61.171] [ImageManip(8)] [error] Invalid configuration or input image - skipping frame

          [14442C101148F3D600] [59.19.225.203] [61.193] [ImageManip(8)] [error] Not possible to create warp params. Error: WARP_SWCH_ERR_CACHE_TO_SMALL

          These two errors are not related to the NN. Could you share an MRE (minimal reproducible example) so we can test it locally?

          Thanks,
          Jaka

          @JungOscar, yes, @jakaskerl is correct; the exported model is working (though it recognizes other objects as fish, it's not crashing), which is what I wanted to verify. What is your goal? Do you need to set up a tracking pipeline, or is detection enough for you?

          For testing a YOLO object detection model, I used this experiment (link).

          Best,
          Jan

            JungOscar
            The NN is quite complex and can only run at about 5 FPS. The WARP_SWCH_ERR_CACHE_TO_SMALL error is caused by the large image size: there are 3 ImageManip nodes in the pipeline, which all share resources, and the larger NN input size maxes those resources out.

            Consider using a smaller input size for NN.
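            When picking a smaller input size to re-export with, YOLO input dimensions should stay multiples of 32 (the network's maximum stride). The helper below is my own hypothetical sketch of snapping a requested size down to a valid one; the exact resource limits of the ImageManip nodes are device-specific and not modeled here:

```python
# Sketch (hypothetical helper, not a Luxonis API): snap a requested
# NN input size down to the nearest multiple of the YOLO stride (32),
# so the re-exported model gets a valid, smaller input shape.
def snap_to_stride(width: int, height: int, stride: int = 32) -> tuple:
    """Round each dimension down to the nearest multiple of `stride`,
    never going below one stride."""
    snap = lambda v: max(stride, (v // stride) * stride)
    return snap(width), snap(height)

# e.g. shrinking from 640x640 toward roughly half the pixel count:
print(snap_to_stride(416, 416))  # → (416, 416), already a valid size
print(snap_to_stride(500, 300))  # → (480, 288)
```

            Halving each dimension (e.g. 640x640 → 320x320) quarters the pixel count, which both reduces ImageManip resource pressure and raises FPS.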

            Thanks,
            Jaka

              JanCuhel I don't fully understand your question, but I just want to create an application using a custom NN based on YOLO. I will follow your linked experimental guide next time.

              jakaskerl I understand your points. I might need guidance on reducing the input size for the neural network; any documentation, experience, or knowledge you could share on this would be helpful.

              Thanks,

              Oscar

              jakaskerl Hi,

              I will change the resolution to a smaller width and height in tools.luxonis.com.

              Thanks,

              Oscar