Hello,
I am working with the OAK-D device. I followed the steps presented in https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html and converted my customized tiny YoloV2 model to a .blob file. When launching depthai_examples yolov4_publisher.launch, I get the following error:
[SpatialDetectionNetwork(1)] [error] Mask is not defined for output layer with width '5070'. Define at pipeline build time using: 'setAnchorMasks' for 'side5070'.
Hey ajanani,
I am not sure of the exact output of the converted YoloV2 model, but I would say that on-device decoding is not supported for YoloV2. You would instead have to use a NeuralNetwork node to get the raw detections, decode them on the host, and pass the resulting boxes back to a SpatialLocationCalculator node.
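Roughly, the wiring would look something like the sketch below. This is only a minimal outline, not a drop-in solution: the blob path and stream names are placeholders, and the actual YoloV2 grid decoding on the host is left out.

```python
import depthai as dai

pipeline = dai.Pipeline()

# RGB frames for the network
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)         # match your model's input size
cam.setInterleaved(False)

# Plain NeuralNetwork node -- no on-device YOLO decoding
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("tiny_yolo_v2.blob")  # placeholder path to your converted blob
cam.preview.link(nn.input)

# Raw output tensors go to the host, where you decode the YoloV2 output yourself
xout_nn = pipeline.create(dai.node.XLinkOut)
xout_nn.setStreamName("nn")
nn.out.link(xout_nn.input)

# Depth for spatial coordinates
mono_l = pipeline.create(dai.node.MonoCamera)
mono_r = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
mono_l.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_r.setBoardSocket(dai.CameraBoardSocket.RIGHT)
mono_l.out.link(stereo.left)
mono_r.out.link(stereo.right)

# SpatialLocationCalculator: the host sends back ROIs built from the decoded boxes
slc = pipeline.create(dai.node.SpatialLocationCalculator)
stereo.depth.link(slc.inputDepth)

xin_cfg = pipeline.create(dai.node.XLinkIn)
xin_cfg.setStreamName("slc_cfg")
xin_cfg.out.link(slc.inputConfig)

xout_slc = pipeline.create(dai.node.XLinkOut)
xout_slc.setStreamName("slc_out")
slc.out.link(xout_slc.input)
```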
However, I'd strongly suggest you use either a YoloV3/V4-tiny or a YoloV5 model instead of the older YoloV2; we provide tutorials for those and they support on-device decoding.
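With one of those models, the decoding parameters are set at pipeline build time, which is what the 'setAnchorMasks' part of your error refers to. Here is a minimal sketch for a 416x416 YoloV4-tiny; the anchor and mask values are the standard tiny-YOLOv4 ones and the blob path is a placeholder, so adjust everything to your own model:

```python
import depthai as dai

pipeline = dai.Pipeline()
yolo = pipeline.create(dai.node.YoloSpatialDetectionNetwork)

yolo.setBlobPath("yolov4_tiny_416.blob")   # placeholder path
yolo.setNumClasses(80)                     # number of classes your model was trained on
yolo.setCoordinateSize(4)
yolo.setConfidenceThreshold(0.5)
yolo.setIouThreshold(0.5)

# Anchors and per-head masks -- this is what the error says is missing
yolo.setAnchors([10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319])
yolo.setAnchorMasks({
    "side26": [1, 2, 3],   # 416 / 16 = 26 -> finer head
    "side13": [3, 4, 5],   # 416 / 32 = 13 -> coarser head
})

# (The camera preview and stereo depth still need to be linked to
#  yolo.input and yolo.inputDepth, as in the standard spatial examples.)
```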
Thanks
What is meant by "mask is not defined for output layer 3549", and how do I define it at pipeline build time?
Please explain?
Hey, masks are used in models that use anchors. The error means you are using an anchor-based model but have not specified the masks correctly. A mask entry should look like "side<input_width/stride>": [anchor indices], where stride is the downsampling factor of that output head.
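As a concrete illustration, here is the mapping you would pass to setAnchorMasks for a 640x640 model with three output heads, using the standard YoloV5 anchor layout as an example (your own model's indices may differ):

```python
anchor_masks = {
    "side80": [0, 1, 2],   # stride-8 head:  640 / 8  = 80
    "side40": [3, 4, 5],   # stride-16 head: 640 / 16 = 40
    "side20": [6, 7, 8],   # stride-32 head: 640 / 32 = 20
}
```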
If you are not that familiar with neural networks this can be a bit complicated, so if you use YoloV5-YoloV8, I'd suggest you simply use tools.luxonis.com and upload your trained .pt weights there. The downloaded ZIP will contain the .blob and the proper anchor and mask settings in a .json file.