I'm very new to computer vision, so I'm still figuring out the workflow; any help would be appreciated. For now I'm only making a very simple model that can detect lemons.
I uploaded my images to Roboflow, generated train/test/val splits, and downloaded the dataset zip in YOLOv8 format.
From there, I followed the steps at https://docs.ultralytics.com/usage/python/
The code I ran, in order, was as follows.
from ultralytics import YOLO

model = YOLO('yolov8n.yaml')  # build a new YOLOv8n model from the config (no pretrained weights)
model.train(data='path/to/data.yaml', epochs=100)  # train on the Roboflow dataset
model.val()  # automatically evaluates on the validation split defined in data.yaml
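(A side note on the first line above: as far as I understand, passing the .yaml builds the model from scratch, whereas passing the pretrained weights fine-tunes them instead. A rough sketch of what I think that alternative would look like, with the epochs/imgsz values just being my guesses:)

from ultralytics import YOLO

# Assumption: start from the COCO-pretrained yolov8n.pt checkpoint instead of the bare config,
# so training fine-tunes existing weights rather than starting from scratch.
model = YOLO('yolov8n.pt')
model.train(data='path/to/data.yaml', epochs=100, imgsz=640)
model.val()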
From here, I ran prediction with the resulting best.pt weights and was happy enough with the results.
from ultralytics import YOLO

# Load the custom-trained model
model = YOLO('path/to/best.pt')

# Run batched inference on a list of images
results = model(['path/to/images.jpeg'], save=True)  # returns a list of Results objects

# Process the results
for result in results:
    boxes = result.boxes  # Boxes object for bounding box outputs
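(In case it's relevant, my understanding from the Ultralytics docs is that each Boxes object can be unpacked roughly like this; the attribute names are what I believe the API exposes, not something I've double-checked:)

for result in results:
    for box in result.boxes:
        xyxy = box.xyxy[0].tolist()  # [x1, y1, x2, y2] box corners in pixels
        conf = float(box.conf[0])    # confidence score
        cls_id = int(box.cls[0])     # class index (only 'lemon' in my dataset)
        print(result.names[cls_id], conf, xyxy)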
Next, I converted the model to ONNX format with the following code:
from ultralytics import YOLO

model = YOLO('path/to/best.pt')
model.export(format='onnx')  # writes best.onnx next to best.pt
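(In case it matters for the conversion step: I believe the export call also accepts arguments such as imgsz, opset and simplify, so a variant I could try looks like the sketch below. The specific values are guesses on my part, not something I've confirmed blobconverter needs.)

from ultralytics import YOLO

model = YOLO('path/to/best.pt')
# Assumption: pin the input size and ONNX opset and run the ONNX simplifier;
# 640 and opset 12 are guesses, not confirmed requirements.
model.export(format='onnx', imgsz=640, opset=12, simplify=True)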
My understanding is that I can simply upload the ONNX file to http://blobconverter.luxonis.com/, but this didn't work, so I'm assuming I did something wrong?
So I tried this script instead, which didn't work any better:
import blobconverter

blob_path = blobconverter.from_onnx(model="path/to/best.onnx", data_type="FP16", shaves=5)
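(For completeness, my reading of the blobconverter docs is that from_onnx can also take an OpenVINO version and extra model-optimizer flags via optimizer_params. The values below are guesses based on the README, not something I know is correct for a YOLOv8 export:)

import blobconverter

# Assumption: the OpenVINO version and the mean/scale flags below are guesses,
# not values I've confirmed are right for this model.
blob_path = blobconverter.from_onnx(
    model="path/to/best.onnx",
    data_type="FP16",
    shaves=5,
    version="2022.1",
    optimizer_params=[
        "--mean_values=[0,0,0]",
        "--scale_values=[255,255,255]",
    ],
)
print(blob_path)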
Any help on what to do next would be appreciated!