I am trying to run a segmentation model on OAK-D but I am not using the images from the camera.

I am loading the images from my local machine and then running my model on the OAK-D, but the results do not match what I get locally. Is there any preprocessing or postprocessing I have to do on my images?

Hi @Tanisha
You have to make sure the W and H are correct and that the frame type matches your image. Set it to interleaved RGB; I think that should work.
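
Something along these lines should work as a minimal sketch; the stream name, blob path, and 256x256 input size below are placeholders you will need to adjust to your model:

```python
import cv2
import depthai as dai

W, H = 256, 256  # must match the blob's input dimensions

pipeline = dai.Pipeline()

# Neural network node running your blob
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("model.blob")

# XLinkIn lets you feed frames from the host instead of the on-device camera
xin = pipeline.create(dai.node.XLinkIn)
xin.setStreamName("nn_in")
xin.out.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("nn_out")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q_in = device.getInputQueue("nn_in")
    q_out = device.getOutputQueue("nn_out")

    bgr = cv2.imread("image.jpg")
    rgb = cv2.cvtColor(cv2.resize(bgr, (W, H)), cv2.COLOR_BGR2RGB)  # OpenCV loads BGR

    img = dai.ImgFrame()
    img.setType(dai.RawImgFrame.Type.RGB888i)  # interleaved RGB (HWC)
    img.setWidth(W)
    img.setHeight(H)
    img.setData(rgb.flatten())
    q_in.send(img)

    result = q_out.get()  # decode according to your model's output layout
```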

Thanks,
Jaka

My model expects planar input, so I set the frame type with frame.setType(dai.RawImgFrame.Type.RGB888p), but I still don't get the correct visualization.

Hey Tanisha,

Please see here how we read and process the image. You will have to make sure the image is in CHW (planar) format before sending it to the device.
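
In practice that means transposing the HWC array you get from OpenCV to CHW before calling setData(). A minimal sketch, assuming the same XLinkIn queue as in the earlier example:

```python
import cv2
import depthai as dai

def to_planar(frame, shape):
    # HWC (interleaved) -> CHW (planar), flattened for ImgFrame.setData()
    return cv2.resize(frame, shape).transpose(2, 0, 1).flatten()

W, H = 256, 256  # must match the blob's input dimensions
bgr = cv2.imread("image.jpg")
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # drop this if the model was trained on BGR

img = dai.ImgFrame()
img.setType(dai.RawImgFrame.Type.RGB888p)  # planar RGB (CHW)
img.setWidth(W)
img.setHeight(H)
img.setData(to_planar(rgb, (W, H)))
# q_in.send(img)  # send on the XLinkIn queue as in the earlier sketch
```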


So I tried everything but I still can't get the segmentations right. I think the reason might be that I'm using a PyTorch DeepLabV3+ model, while all I've seen in the documentation is the TensorFlow one.
I am using DeepLabV3+ from this library:
qubvel/segmentation_models.pytorch
import segmentation_models_pytorch as smp

Do you think the architecture could be one reason the blob doesn't get the segmentations right?
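
For reference, a minimal sketch of how a segmentation_models_pytorch DeepLabV3+ model can be exported to ONNX before blob conversion; the encoder name, class count, and input size below are placeholders, not values from this thread. Note that encoders loaded with ImageNet weights typically expect normalized inputs, so the same normalization has to be applied to the frames sent to the device.

```python
import torch
import segmentation_models_pytorch as smp

# Placeholder settings; use whatever the model was actually trained with
model = smp.DeepLabV3Plus(
    encoder_name="resnet18",
    encoder_weights="imagenet",
    classes=2,
)
model.eval()

# smp exposes the normalization the encoder expects (ImageNet mean/std here)
preprocess = smp.encoders.get_preprocessing_fn("resnet18", pretrained="imagenet")

# Dummy input fixing the network's input resolution for the export
dummy = torch.randn(1, 3, 256, 256)

torch.onnx.export(
    model,
    dummy,
    "deeplabv3plus.onnx",
    opset_version=11,
    input_names=["input"],
    output_names=["output"],
)
# The .onnx file can then be compiled to a .blob, e.g. with the Luxonis
# blobconverter or the OpenVINO model optimizer + compile_tool.
```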