• LuxonisHub
  • Unable to convert yolov5.pt model into blob version in luxonis tools

Hi @nik, could you share the .pt file?

Hope you can access it

Interesting, the export is failing because your model architecture does not strictly follow the YOLOv5 design. The head appears to be anchor-free, which is a characteristic of later YOLO versions like YOLOv8. How did you obtain the .pt model?
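
If you want to verify this on your side, here is a rough sketch of how you could inspect the detection head (assuming the checkpoint loads with the ultralytics package; the path is a placeholder):

import torch
from ultralytics import YOLO

# Rough sketch: inspect the detection head of the checkpoint (path is a placeholder)
model = YOLO("yolov5n.pt")
head = model.model.model[-1]          # the last module is the Detect head
print(type(head).__module__, type(head).__name__)

# An anchor-based YOLOv5 head carries a populated anchor grid;
# the anchor-free (v8-style) head does not.
anchors = getattr(head, "anchors", None)
print(anchors.shape if isinstance(anchors, torch.Tensor) else anchors)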

  • Edited

Ohh, it makes sense now. Instead of setting up a separate environment for YOLOv5, I used the same env as YOLOv8 to train the YOLOv5 nano model.

But the following page shows the same training code for YOLOv5 as for YOLOv8, which is what confused me:
https://docs.ultralytics.com/models/yolov5/#usage-examples

Below is the code from the page I referred to. The environment was the same one we use to train the latest models.

from ultralytics import YOLO

# Load a COCO-pretrained YOLOv5n model
model = YOLO("yolov5n.pt")

# Display model information (optional)
model.info()

# Train the model on the COCO8 example dataset for 100 epochs
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)

# Run inference with the YOLOv5n model on the 'bus.jpg' image
results = model("path/to/bus.jpg")

I will update the environment and test the model again, thanks.
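
For reference, this is roughly what I plan to run instead, a sketch based on the original YOLOv5 repository (the dataset yaml is a placeholder for my own data):

# Train with the original YOLOv5 repo to get the anchor-based head
$ git clone https://github.com/ultralytics/yolov5.git
$ cd yolov5
$ pip install -r requirements.txt
$ python train.py --weights yolov5n.pt --data <my_dataset>.yaml --img 640 --epochs 100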

Just now I realized that the code I referred to above trains an anchor-free YOLOv5 🙂

That must be it, nice find! I suggest you re-train the model using the standard YOLOv5 head (or switch to a higher YOLO version if you want the head to be anchor-free).

Hi Nik!

I went ahead and tried exporting your model locally (the web app is slightly out of date), and the export now succeeds. Can you try the following?

# Cloning the tools repository and all submodules
$ git clone --recursive https://github.com/luxonis/tools.git

# Change folder
$ cd tools

# Install the package
$ pip install .

# Change folder
$ cd ..

# Run the conversion
$ tools <path/to/YourModel>.pt --imgsz "<YourInputShape>"

Please report back if this works for you now.
Best,
Jakob

7 days later
  • Edited

Hi @jakob, I was able to run the export and it gave me an ONNX model and a config JSON.

From the ONNX I was able to convert to a blob using https://blobconverter.luxonis.com/, but the model rarely detects anything (maybe I need to increase the dataset; it's currently 250 images).
I have attached the .pt, ONNX and converted blob model for reference.

Also, when I followed this example (luxonis/depthai-python/blob/main/examples/Yolo/tiny_yolo.py) for testing the converted YOLOv5 blob model, the OAK-1 crashes with the following log:

import cv2
import depthai as dai
import numpy as np

# Configuration
MODEL_PATH = 'blobs/correct.blob'
ANCHORS = [
    10.0, 13.0, 16.0, 30.0, 33.0, 23.0,
    30.0, 61.0, 62.0, 45.0, 59.0, 119.0,
    116.0, 90.0, 156.0, 198.0, 373.0, 326.0
]
ANCHOR_MASKS = {
    "side52": [0, 1, 2],
    "side26": [3, 4, 5],
    "side13": [6, 7, 8]
}
CONF_THRESHOLD = 0.3
IOU_THRESHOLD = 0.2
NUM_CLASSES = 1
VIDEO_SIZE = (416, 416)
SENSOR_RES = dai.ColorCameraProperties.SensorResolution.THE_4_K
ISP_SCALE = (1, 1)
FPS = 30
DEVICE_MAC = "1844301091F11CF500"


def frame_norm(frame, bbox):
    """Convert normalized YOLO bbox coords to image pixels."""
    h, w = frame.shape[:2]
    return (
        int(bbox.xmin * w), int(bbox.ymin * h),
        int(bbox.xmax * w), int(bbox.ymax * h)
    )


def create_pipeline():
    pipeline = dai.Pipeline()

    # Color camera
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setResolution(SENSOR_RES)
    cam.setIspScale(*ISP_SCALE)
    cam.setVideoSize(*VIDEO_SIZE)
    cam.setInterleaved(False)
    cam.setPreviewSize(416, 416)
    cam.setPreviewKeepAspectRatio(False)
    cam.setFps(FPS)

    # YOLO detection network
    nn = pipeline.create(dai.node.YoloDetectionNetwork)
    nn.setBlobPath(MODEL_PATH)
    nn.setConfidenceThreshold(CONF_THRESHOLD)
    nn.setNumClasses(NUM_CLASSES)
    nn.setIouThreshold(IOU_THRESHOLD)
    nn.setAnchors(ANCHORS)
    nn.setAnchorMasks(ANCHOR_MASKS)
    nn.setCoordinateSize(4)
    nn.setNumInferenceThreads(2)
    nn.input.setBlocking(False)

    # Outputs
    xout_cam = pipeline.create(dai.node.XLinkOut)
    xout_nn = pipeline.create(dai.node.XLinkOut)
    xout_pass = pipeline.create(dai.node.XLinkOut)

    xout_cam.setStreamName("rgb")
    xout_nn.setStreamName("detections")
    xout_pass.setStreamName("pass")

    # Link nodes
    cam.preview.link(nn.input)
    nn.out.link(xout_nn.input)
    nn.passthrough.link(xout_pass.input)
    if True:  # sync NN + frames
        nn.passthrough.link(xout_cam.input)
    else:
        cam.preview.link(xout_cam.input)

    return pipeline


def main():
    pipeline = create_pipeline()

    with dai.Device(pipeline, dai.DeviceInfo(DEVICE_MAC)) as device:
        q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
        q_det = device.getOutputQueue("detections", maxSize=4, blocking=False)

        while True:
            in_rgb = q_rgb.get()  # blocking
            in_det = q_det.tryGet()

            frame = in_rgb.getCvFrame()

            # Draw detections
            if in_det:
                for det in in_det.detections:
                    x1, y1, x2, y2 = frame_norm(frame, det)
                    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

            cv2.imshow("YOLO", frame)
            if cv2.waitKey(1) == ord('q'):
                break

    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()

with dai.Device(pipeline, dai.DeviceInfo(DEVICE_MAC)) as device:
Stack trace (most recent call last):
#31 Object "", at 00007FFD3AED99B8, in PyInit_depthai
#30 Object "", at 00007FFD3AED908B, in PyInit_depthai
#29 Object "", at 00007FFD3AEDD8DD, in PyInit_depthai
#28 Object "", at 00007FFD3AED68FE, in PyInit_depthai
#27 Object "", at 00007FFD3AEA3344, in PyInit_depthai
#26 Object "", at 00007FFD3AEB9CC1, in PyInit_depthai
#25 Object "", at 00007FFD3AEB82AB, in PyInit_depthai
#24 Object "", at 00007FFE28F73C66, in RtlCaptureContext2
#23 Object "", at 00007FFD3B04F86E, in PyInit_depthai
#22 Object "", at 00007FFD3B058B40, in PyInit_depthai
#21 Object "", at 00007FFD3B0928B4, in PyInit_depthai
#20 Object "", at 00007FFD3B04CB8F, in PyInit_depthai
#19 Object "", at 00007FFE262B565C, in RaiseException
#18 Object "", at 00007FFE28F24475, in RtlRaiseException
#17 Object "", at 00007FFE28EEE466, in RtlFindCharInUnicodeString
#16 Object "", at 00007FFE28F7441F, in _chkstk
#15 Object "", at 00007FFD3B04BE25, in PyInit_depthai
#14 Object "", at 00007FFD3B04F3F1, in PyInit_depthai
#13 Object "", at 00007FFD3B04F38C, in PyInit_depthai
#12 Object "", at 00007FFD3B04E581, in PyInit_depthai
#11 Object "", at 00007FFD3B04DD09, in PyInit_depthai
#10 Object "", at 00007FFD3B04BA32, in PyInit_depthai
#9 Object "", at 00007FFE28EEFD54, in RtlUnwindEx
#8 Object "", at 00007FFE28F7449F, in _chkstk
#7 Object "", at 00007FFD3B04A84C, in PyInit_depthai
#6 Object "", at 00007FFD3B04BE25, in PyInit_depthai
#5 Object "", at 00007FFD3B04F3F1, in PyInit_depthai
#4 Object "", at 00007FFD3B04F2B9, in PyInit_depthai
#3 Object "", at 00007FFD3B04B489, in PyInit_depthai
#2 Object "", at 00007FFD3B050017, in PyInit_depthai
#1 Object "", at 00007FFD3B058BAA, in PyInit_depthai
#0 Object "", at 00007FFD3AEB72F4, in PyInit_depthai

Could you give me an updated or fixed version of the code to test the blob model before running it directly on the Android device?

Also let me know if I am doing something wrong.

Note: the existing model works fine from the Android code repo, and at one point I did receive some detections from the custom model that I built the way I mentioned above.

Thanks

Blob

Config json

Onnx

Pt

Hi again @nik,

great to hear that the conversion is working for you now!

I've tried running your model on an OAK device and I managed to get some predictions out of it using random image input. Nonetheless, it's hard for me to validate the model's performance as I don't know what it's trained for.

If you are having problems running the model on device, I'd suggest you check out our collection of inference examples (read more about it HERE). You can start with the generic-example that showcases a simple single-stage inference pipeline. I've used it to run your model and it works out of the box for me.
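
If you want to rule out the conversion itself, you can also run the exported ONNX on the host first. Something like the following (just a sketch with placeholder paths, assuming onnxruntime is installed and a 416x416 input) should confirm the ONNX produces outputs:

import numpy as np
import onnxruntime as ort

# Sketch: run the exported ONNX on a random input to confirm it produces outputs
sess = ort.InferenceSession("exported_model.onnx")   # path is a placeholder
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

x = np.random.rand(1, 3, 416, 416).astype(np.float32)  # assumed 416x416 input
outputs = sess.run(None, {inp.name: x})
for out in outputs:
    print("output shape:", out.shape)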

All best,
Jakob

  • Edited

Everything mentioned above is working fine. The only catch was the following part of the code, where inference was not actually running at the 416 preview size.

this fixed it

    camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_4_K);
    camRgb->setPreviewSize(416, 416); // forgot to use it
    camRgb->setIspScale(26,135);
    camRgb->setVideoSize(416, 416);        // 416 × 416 for NN & preview
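
For reference, the 26/135 ISP scale works out because the 4K sensor height is 2160 px and 2160 × 26 / 135 = 416, so the ISP output height already matches the 416 px NN input; setVideoSize then gives the square 416 × 416 frame.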

Currently getting 20 FPS at 4K resolution in the Android application connected to the Luxonis OAK-1.

Thanks for help 🙂

Good catch! I'm glad it's working now 🙂

7 days later
  • Edited

Hi @jakob, what could be the reason for the OAK-1 fixed-focus camera staying blurry for far objects (like 30-35 ft)?
The fixed-focus camera focuses properly on closer objects but stays blurry for objects at a greater distance.

Hi @nik,

I might not be the best person to answer this. Tagging @jakaskerl to delegate someone else.

Best regards!

Hey @nik,

It may simply be that the object is too far for the camera to keep it sharp; fixed-focus modules are not always great at long distances. Can you send an image of how it looks? That would make it easier to see what's wrong.