I'd like to use a different display image size than the input size required by some neural networks.
I found a solution for this here:
This works well if the input size of the NN is known, but is there also a way to determine the required NN input size automatically?
Any help will be greatly appreciated.
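For reference, the usual way to decouple the two sizes is to insert an ImageManip node between a large camera preview (for display) and the NN input. A minimal configuration sketch, assuming the depthai Python API; the sizes and the blob path below are illustrative placeholders, not values from this thread:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Large preview for display, independent of the NN input size
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(812, 608)             # display size (illustrative)

# ImageManip downscales frames to whatever the network expects
manip = pipeline.create(dai.node.ImageManip)
manip.initialConfig.setResize(300, 300)  # NN input size, hard-coded here

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("model.blob")             # hypothetical path

cam.preview.link(manip.inputImage)       # full-size frames go to the resizer...
manip.out.link(nn.input)                 # ...and resized frames feed the NN
```

The open question below is exactly the hard-coded `setResize(300, 300)`: that value has to be known in advance.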
Hello Gi_T,
I don't think it's currently possible in the library, but this should get added later on. It would be possible to parse the .blob file on the host and retrieve the model input shape, but I'm not familiar enough with OpenVINO to suggest anything. This might help you: blob class.
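If host-side parsing becomes available, the lookup might look roughly like this. Everything below is a hypothetical stand-in: the `Blob`/`TensorInfo` classes only mimic the kind of structure a blob parser could return, and none of these names are confirmed API:

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for what a host-side blob parser could return;
# these are NOT the real depthai/OpenVINO classes.
@dataclass
class TensorInfo:
    dims: list  # e.g. [W, H, C, N] for an image input

@dataclass
class Blob:
    networkInputs: dict = field(default_factory=dict)

def nn_input_size(blob: Blob) -> tuple:
    """Return (width, height) of the first network input tensor."""
    info = next(iter(blob.networkInputs.values()))
    return info.dims[0], info.dims[1]

# Mocked 300x300 image input, as in a MobileNet-SSD-style blob
blob = Blob(networkInputs={"data": TensorInfo(dims=[300, 300, 3, 1])})
print(nn_input_size(blob))  # prints (300, 300)
```

With something like this, the resize step in the pipeline could take its target size from the blob instead of a hard-coded value.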
Thanks for the quick response, erik. I'll try using OpenVINO to get the input size and let you know if I have any successful attempts.
May I ask how you get the NN input size for warning messages like the following?
[14442C10E15C5CD700] [35.974] [DetectionNetwork(2)] [warning] Input image (300x300) does not match NN (62x62) - skipping inference
The firmware does the parsing on the device, where it determines the shape. To make this easy to use and useful, we would have to parse the blob on the host as well - that way the shape is also accessible before the device is initialized, and it's easier to expose in the API.
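For illustration, the check behind the warning above boils down to comparing the incoming frame size against the input shape parsed from the blob. A rough sketch of that logic (hypothetical helper, not actual firmware code):

```python
def should_skip_inference(frame_wh: tuple, nn_input_wh: tuple) -> bool:
    # Inference is skipped (and the warning logged) whenever the incoming
    # frame size differs from the parsed NN input size.
    return frame_wh != nn_input_wh

# The case from the warning: a 300x300 frame into a 62x62 network
print(should_skip_inference((300, 300), (62, 62)))  # prints True
```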