Hi Luxonis Team,
Is there a way to inspect quantized weights and layer properties after blob conversion? I have an ONNX regression model that, when converted, has significant error compared to the unconverted result.
Here are my settings in the online blobconverter tool:
Model Optimizer params: --data_type=FP16 --input_shape=[1,768,768,1]
Compile params: -ip FP16
I'd like to try to simulate what's going on, to understand whether FP32-to-FP16 quantization is the culprit or whether something else happens when running on the MyriadX. So far, I've tried to reproduce it in TensorFlow using FP16-quantized weights, and that yields results much closer to the unconverted output.
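For reference, this is roughly how I emulated the FP16 weight quantization (a minimal NumPy sketch of the idea, not the exact script I used): round-trip the FP32 weights through float16 and compare. It also shows the two failure modes FP16 introduces, overflow above ~65504 and flush-to-zero below the subnormal range, which a plain precision check can miss.

```python
import numpy as np

def simulate_fp16(w: np.ndarray) -> np.ndarray:
    """Round-trip FP32 values through float16 to mimic FP16 storage."""
    return w.astype(np.float16).astype(np.float32)

# Hypothetical weight values chosen to expose FP16 edge cases.
w = np.array([1.0001, 1e-8, 70000.0], dtype=np.float32)
wq = simulate_fp16(w)

print(wq)                  # 70000.0 overflows to inf; 1e-8 flushes to 0
print(np.abs(w - wq))      # per-weight quantization error
```

If the TensorFlow run with weights quantized this way stays close to the FP32 result (as it does for me), the divergence on device is less likely to come purely from weight precision.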