Hi everyone,
Following the steps in the tutorial 'Training a Tiny YOLOv4 Object Detector with Your Own Data', I trained a Tiny YOLOv4 model on my own classes and successfully ran it on the OAK-D.
Unfortunately, some of the objects I need to detect are very small in the resized image, so performance on them is poor. To overcome this, I increased the input resolution from 416x416 to 608x608, which improved accuracy on small objects. However, when I try to run this model on the OAK-D, I get this warning and no predictions:
[warning] Input image (608x608) does not match NN (416x416).
It seems that the generated blob does not have the correct input dimensions, but I cannot find what else needs to be changed to solve this.
In the Colab notebook, I changed the width and height in the yolov4-tiny.cfg file, and the trained model seems to be correct: the training output shows that the first layer is 608x608, and the accuracy increased.
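For reference, this is the edit I made in the `[net]` section of yolov4-tiny.cfg (only these two values changed; everything else was left as in the tutorial):

```
[net]
width=608
height=608
```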
Following @GergelySzabolcs's answer in this discussion, I also passed the size argument (python3 convert_weights_pb.py --size 608)
and changed the height and width in OPENVINO-YOLOV4/cfg/yolov4-tiny.cfg to 608.
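Roughly, my conversion steps look like this. This is only a sketch: the frozen graph filename and the exact Model Optimizer script/flags depend on the OPENVINO-YOLOV4 repo and the OpenVINO version, so please double-check them against your own setup.

```shell
# Freeze the Darknet weights to a TensorFlow .pb at 608x608
# (only --size is confirmed from my run; other flags per the repo's README)
python3 convert_weights_pb.py --size 608

# Run the OpenVINO Model Optimizer on the frozen graph; the input shape
# must also be 608x608 here (filename assumed from the repo's default output)
mo.py --input_model frozen_darknet_yolov4_model.pb \
      --input_shape "[1,608,608,3]" \
      --reverse_input_channels
```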
With these changes, the generated .xml file shows that the first layer is indeed 608x608.
Nevertheless, after converting to .blob, the model doesn't work and produces the warning above.
Does anyone have the same issue?
Thanks
Davide
PS: to run the model on the OAK-D, I'm using the depthai_demo.py script.
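In case it helps others reproduce this, here is the quick check I can use to confirm what input size is actually baked into the compiled blob (the blob filename is just a placeholder for mine; this uses the depthai Python API's OpenVINO.Blob loader):

```python
import depthai as dai

# Load the compiled blob and print the input tensor shapes it declares.
# If the compile step went wrong, this should show 416x416 instead of 608x608.
blob = dai.OpenVINO.Blob("yolov4_tiny_608.blob")  # placeholder path
for name, tensor in blob.networkInputs.items():
    print(name, tensor.dims)
```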