Hey!
So for some time now, I've been working with the OAK-D Lite camera and I've successfully managed to incorporate a YOLOv5 model. The issue is that my original model uses a 416x416 input, so the returned image is a bit small. I've been trying to re-train the model using the notebook you provided with a slightly larger image size, but for some reason it simply does not work.
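For reference, this is more or less the only change I made on the pipeline side, shown here in a simplified form with a 640x640 input as an example of the larger size:

import depthai as dai

pipeline = dai.Pipeline()

# Colour camera feeding the network; preview resized to the new model input
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(640, 640)  # was (416, 416) for the original model
camRgb.setInterleaved(False)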
The pipeline starts up normally, and the preview image is indeed larger, but the following errors keep occurring in the console:
[1844301041DDC21200] [344.188] [SpatialDetectionNetwork(1)] [error] Mask is not defined for output layer with width '80'. Define at pipeline build time using: 'setAnchorMasks' for 'side80'.
[1844301041DDC21200] [344.188] [SpatialDetectionNetwork(1)] [error] Mask is not defined for output layer with width '40'. Define at pipeline build time using: 'setAnchorMasks' for 'side40'.
[1844301041DDC21200] [344.188] [SpatialDetectionNetwork(1)] [error] Mask is not defined for output layer with width '20'. Define at pipeline build time using: 'setAnchorMasks' for 'side20'.
The anchors and anchor masks in the cfg json are as follows:
"anchors": [
10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326
],
"anchor_masks": {
"side52": [0, 1, 2],
"side26": [3, 4, 5],
"side13": [6, 7, 8]
},
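And for completeness, this is roughly how those values end up on the network node in my script (simplified from memory; the blob path, class count and thresholds are just placeholders):

detectionNetwork = pipeline.create(dai.node.YoloSpatialDetectionNetwork)
detectionNetwork.setBlobPath("yolov5_640.blob")  # placeholder path to the converted blob
detectionNetwork.setNumClasses(80)               # placeholder class count
detectionNetwork.setCoordinateSize(4)
detectionNetwork.setConfidenceThreshold(0.5)
detectionNetwork.setIouThreshold(0.5)

# Anchors and anchor masks taken straight from the cfg json above
detectionNetwork.setAnchors([10, 13, 16, 30, 33, 23, 30, 61, 62, 45,
                             59, 119, 116, 90, 156, 198, 373, 326])
detectionNetwork.setAnchorMasks({
    "side52": [0, 1, 2],
    "side26": [3, 4, 5],
    "side13": [6, 7, 8]
})

camRgb.preview.link(detectionNetwork.input)  # stereo/depth linking left out for brevity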
What could be the issue here? Any help is appreciated!