Hello,

I have created a custom dataset with 2.5K annotated images for my YOLOv8 nano segmentation model. I succeeded in training the model, converting it to ONNX, and then to a blob using the https://blobconverter.luxonis.com tool. However, when I run inference on my Luxonis camera, no segments are detected. But if I run my ONNX weights on the regular CPU of the Raspberry Pi, I can see the segments detected. I would like to know what is causing this behavior. When I converted the weights from ONNX to blob, I set the mean to 0 and the scale to 255.
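[Editor's note: for reference, mean 0 / scale 255 in the blob converter maps to OpenVINO's mean/scale preprocessing, where each pixel is normalized as (x - mean) / scale on-device. A minimal sketch checking that this matches the usual CPU-side YOLOv8 preprocessing of dividing by 255 (the values here are dummies for illustration):]

```python
import numpy as np

mean, scale = 0.0, 255.0

# A few dummy 8-bit pixel values, as the device would receive them
pixels = np.array([0, 128, 255], dtype=np.uint8)

# What the compiled blob feeds the network: (x - mean) / scale
device_input = (pixels.astype(np.float32) - mean) / scale

# Typical CPU-side preprocessing for a YOLOv8 ONNX model: x / 255
cpu_input = pixels.astype(np.float32) / 255.0

# With mean=0 and scale=255 the two pipelines agree
assert np.allclose(device_input, cpu_input)
```

If the device-side input still differs from the CPU run, the usual remaining suspects are channel order (BGR vs RGB) and resize behavior, not the mean/scale values.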

Hi @AlejandroDiaz

  • Check the input image you are passing to the NN and make sure it's what you expect.
  • The preview input is RGB UINT8; recheck the scaling to match the CPU side (be aware of BGR/RGB ordering).
  • What kind of output are you getting?
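[Editor's note: a quick way to see the BGR/RGB pitfall from the second point above. OpenCV reads frames as BGR, so if the model was trained on RGB input, the red and blue channels arrive swapped unless you reverse them (on a real frame, `cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)` does the same thing as the slice below):]

```python
import numpy as np

# One dummy 1x1 BGR pixel: B=10, G=20, R=30
bgr_pixel = np.array([[[10, 20, 30]]], dtype=np.uint8)

# Reversing the last axis converts BGR <-> RGB
rgb_pixel = bgr_pixel[..., ::-1]

assert rgb_pixel[0, 0, 0] == 30  # red channel is now first
assert rgb_pixel[0, 0, 2] == 10  # blue channel is now last
```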

Thanks,
Jaka

Hello @jakaskerl

I'm using a saved video that I send to the OAK device for inference. Here is the code for the steps I'm doing.

def to_planar(arr: np.ndarray, shape: tuple) -> np.ndarray:
    return cv2.resize(arr, shape).transpose(2, 0, 1).flatten()

# Prepare the frame to be sent to the device
frame_nn = dai.ImgFrame()
frame_nn.setData(to_planar(video_frame, (640, 640)))
frame_nn.setType(dai.ImgFrame.Type.BGR888p)
frame_nn.setWidth(640)
frame_nn.setHeight(640)

# Send the frame to the device
qVideo.send(frame_nn)

I took that part from one of the examples on GitHub; I don't remember which one now.

Here is the info about my outputs.

+++ Output layer info +++

Layer 0
  Name: output0
  Order: StorageOrder.CHW
  dataType: DataType.FP16
  dims: [1, 37, 8400]

Layer 1
  Name: output1
  Order: StorageOrder.NCHW
  dataType: DataType.FP16
  dims: [1, 32, 160, 160]
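[Editor's note: these shapes match the standard YOLOv8-seg export. In output0, the 37 channels are 4 box coordinates + 1 class score (for a single-class dataset) + 32 mask coefficients per candidate; output1 holds the 32 mask prototypes. A hedged numpy sketch of the decoding, using random tensors in place of real device output and omitting NMS and box-based mask cropping for brevity:]

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Dummy tensors with the shapes reported by the device
output0 = np.random.randn(1, 37, 8400).astype(np.float32)   # detections
output1 = np.random.randn(1, 32, 160, 160).astype(np.float32)  # mask prototypes

preds = output0[0]            # (37, 8400)
boxes = preds[0:4, :]         # cx, cy, w, h per candidate
scores = preds[4, :]          # class confidence (single class assumed)
mask_coeffs = preds[5:37, :]  # 32 mask coefficients per candidate

protos = output1[0].reshape(32, -1)  # (32, 160*160)

# Keep candidates above a confidence threshold (NMS omitted)
keep = scores > 0.5

# Each kept instance's mask is a weighted sum of the 32 prototypes
masks = sigmoid(mask_coeffs[:, keep].T @ protos)  # (n_kept, 25600)
masks = masks.reshape(-1, 160, 160)
```

If this decoding produces sensible masks from the CPU-side ONNX run but not from the device output, the difference is almost certainly in the input preprocessing rather than the output parsing.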

    AlejandroDiaz
    Looks fine, but maybe try setting the type to interleaved; IIRC DepthAI expects interleaved input so that it can convert it to planar inside the pipeline.

    Thanks,
    Jaka

    Hello, I don't understand where I should set the type to interleaved?

    Best regards,