I created a dataset on Roboflow, downloaded it, and trained with the Ultralytics CLI:
!yolo task=detect mode=train model=yolov8s.pt data={dataset.location}/data.yaml epochs=100 imgsz=640 plots=True
The command executed OK, and I checked the results with
!yolo task=detect mode=predict model={HOME}/runs/detect/train/weights/best.pt conf=0.25 source={dataset.location}/test/images save=True
which also produced the expected results.

The .pt file converted with tools.luxonis.com works. However, I'd like to understand why the ONNX conversion doesn't work for me; see below.

I then converted the best.pt weights to ONNX with the following (I also tried the half=True option):
from ultralytics import YOLO
model = YOLO('runs/detect/train/weights/best.pt')  # load the custom-trained model
model.export(format='onnx')

I then loaded the ONNX into blobconverter.luxonis.com.
When the pipeline runs, I always get this message (note it is YOLOv8):
[18443010C1245D1200] [172.34.0.148] [11.853] [DetectionNetwork(1)] [error] Mask is not defined for output layer with width '8400'. Define at pipeline build time using: 'setAnchorMasks' for 'side8400'.
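For context on where the '8400' in the error comes from: YOLOv8 at imgsz=640 uses an anchor-free head, and the plain Ultralytics ONNX export concatenates the predictions from all three detection scales into one output. A small sketch of that arithmetic (strides 8/16/32 are the standard YOLOv8 values):

```python
# YOLOv8 at imgsz=640 predicts on three feature-map scales (strides 8, 16, 32).
# The default ONNX export concatenates all of them into a single output,
# which is the tensor of width 8400 the DetectionNetwork node complains about.
imgsz = 640
strides = [8, 16, 32]
cells = [(imgsz // s) ** 2 for s in strides]  # grid cells per scale
print(cells, sum(cells))  # [6400, 1600, 400] 8400
```

The DepthAI DetectionNetwork node expects separate per-scale outputs it can decode itself, not this already-concatenated tensor, which is why it asks for anchor masks for a nonsensical "side8400".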

Any help?

    Thor

    Hey, directly converting ONNX to blob will not work, because we make some modifications to the model itself so that we can successfully decode it on device. We prune it before the decoding and concatenation of the bounding boxes, both of which are included in the ONNX. We do this in tools.luxonis.com for YOLO models trained with the official repositories, which is why it works when the model is exported with the tool. If you are interested in more, you can inspect the code here:

    luxonis/tools/blob/master/yolo/export_yolov8.py

    Is there a particular reason you want to manually export the ONNX and not through tools.luxonis.com?

    @Matja
    thanks. There is no specific reason why I wanted to export the ONNX myself, but tools.luxonis.com only supports .pt files, while blobconverter.luxonis.com seems designed to accept ONNX and other file formats, so I wanted to give it a try.
    I guess I now understand that I had better not use blobconverter with ONNX and should stick with the tools webpage and .pt files.

      Thor

      Yes, we do this because we can load the right architecture and modify it. We are exploring generation from ONNX in tools, but it is trickier, because ONNX files for the same model can differ slightly depending on the opset version used at export.

      We do use blobconverter under the hood in tools.luxonis.com; it's just that we use the weights to create an ONNX that produces correct results on device (with some layers in the head pruned). In the downloaded package, you can see there is an ONNX that differs slightly from the one you would generate directly from any YOLO repository.