[NeuralNetwork(0)] [warning] Input image (224x224) does not match NN (3x224)
I am not talking about the output layer; I am talking about getAllLayers -> I expect it to return both the input and output layers, but now it returns only the output.
[184430107165680F00] [3.3] [89.777] [NeuralNetwork(0)] [warning] Input image (300x300) does not match NN (3x300)
Model: MobileNetV2
I am getting the same error, and I tried the two options you mentioned above, but no luck.
I am assuming that it is just a warning — will it affect anything?
heebahsaleem
It will not work.
You need to change:
- either the OpenVINO version you are using (trial and error) — they seem to change their default shapes between versions,
- or, when compiling the blob with blobconverter, use the parameters documented at https://docs.openvino.ai/2022.3/openvino_inference_engine_tools_compile_tool_README.html (primarily -il) to set the desired shape.
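As a sketch of what that README describes (model path, device name, and the -il value here are placeholders — adjust them to your model), a standalone compile_tool invocation forcing the input layout would look roughly like:

```shell
# Compile an IR (.xml/.bin pair) into a MyriadX blob, forcing input
# precision and layout at compile time. All values are placeholders.
compile_tool -m model.xml \
             -d MYRIAD \
             -ip U8 \
             -il NCHW \
             -o model.blob
```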
Thanks,
Jaka
I'm having a similar issue:
[18443010D18F411300] [2.2] [9.560] [DetectionNetwork(1)] [warning] Input image (244x244) does not match NN (3x244)
My blob conversion:
blob_path = blobconverter.from_tf(
    frozen_pb="./frozen_graph.pb",
    data_type="FP16",
    shaves=6,
    optimizer_params=[
        f"--input_shape=[1,{SHAPE},{SHAPE},3]"
    ]
)
I'm confused about how I should modify this conversion call to make sure the blob has the desired shape / the right layout convention @jakaskerl
jakaskerl
Thank you for your reply.
- What do you mean by a trial-and-error OpenVINO version? I am using OpenVINO 2022.2.
- I am using the blobconverter app [https://blobconverter.luxonis.com/] to convert. How do I use the tool you mentioned?
FYI, I am using a TF model.
Your help will be highly appreciated. TIA
Hi heebahsaleem
- Trial and error refers to the process of finding the right OpenVINO version for your model. Basically, you would have to compile the model with each OpenVINO version until you find the right one (perhaps 2021.4 could work right away), hence the "trial and error".
- Inside blobconverter, there is an "Advanced" tab which allows you to select the number of shaves you are using and pass in additional parameters. Under "Compile parameters:" you should be able to pass in the -il parameter with the layout you want.
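In the Python API, the same compile_tool flags can be passed through blobconverter's compile_params argument. A sketch (whether -il NCHW is the value your particular model needs is an assumption, and SHAPE is the same variable as in your snippet):

```python
import blobconverter

# Sketch: forward compile_tool flags via compile_params.
# -ip U8 is blobconverter's usual default input precision; -il NCHW
# forces the input layout at blob-compile time. Paths are placeholders.
blob_path = blobconverter.from_tf(
    frozen_pb="./frozen_graph.pb",
    data_type="FP16",
    shaves=6,
    optimizer_params=[
        f"--input_shape=[1,{SHAPE},{SHAPE},3]",
    ],
    compile_params=["-ip U8", "-il NCHW"],
)
```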
userOfCamera
According to the README I linked above, the layout should be defined as either "NCHW" or "NHWC", so the compiler knows how to reshape the input.
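To illustrate the layout difference (a minimal sketch, not DepthAI code): the warning prints the first two dimensions of the network's input, so a channels-first (NCHW) model with a 224x224 input reports "3x224" rather than "224x224" — which is exactly what the warnings above show.

```python
# The warning "Input image (224x224) does not match NN (3x224)" compares
# the frame's HxW against the first two dims of the network's input, so a
# channels-first model shows up as (3x224) instead of (224x224).
def nhwc_to_nchw(shape):
    """Reorder a NHWC shape tuple (e.g. from TensorFlow) into NCHW."""
    n, h, w, c = shape
    return (n, c, h, w)

print(nhwc_to_nchw((1, 224, 224, 3)))  # -> (1, 3, 224, 224)
```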
Thanks,
Jaka
@jakaskerl
I set --layout to both nchw and nhwc, but the same issue persists:
blob_path = blobconverter.from_tf(
    frozen_pb="./frozen_graph.pb",
    data_type="FP16",
    shaves=6,
    optimizer_params=[
        f"--input_shape=[1,{SHAPE},{SHAPE},3]",
        "--layout=nhwc"
    ]
)
Am I supposed to set this using the -il flag? Can you show me an example?
This seemed to solve the problem for me (setting --layout=nhwc->nchw).
My NN doesn't seem to be working on the camera, but that could be due to something else (to be investigated).
Here is my full conversion call:
blob_path = blobconverter.from_tf(
    frozen_pb="./frozen_graph.pb",
    data_type="FP16",
    shaves=6,
    optimizer_params=[
        f"--input_shape=[1,{SHAPE},{SHAPE},3]",
        "--layout=nhwc->nchw"
    ]
)