Hi Tanisha,
Can you try adding the following flags to Model Optimizer:
--mean_values [123.675,116.28,103.53] \
--scale_values [58.395,57.12,57.375] \
This is the ImageNet mean and standard deviation multiplied by 255. If you also add --reverse_input_channels,
the model will expect BGR images in the 0-255 range.
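For reference, those flag values are just the standard ImageNet normalization constants scaled from the 0-1 range to 0-255, which you can verify quickly:

```python
import numpy as np

# Standard ImageNet per-channel mean and std (RGB, 0-1 range)
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# Scaling to the 0-255 range reproduces the flag values above
assert np.allclose(mean * 255, [123.675, 116.28, 103.53])
assert np.allclose(std * 255, [58.395, 57.12, 57.375])
```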
To make sure the normalization is correct, you can try installing openvino-dev==2022.3
first and calling Model Optimizer yourself, like:
mo \
--input_model model.onnx \
--model_name segmentation_model \
--data_type FP16 \
--output_dir output_dir \
--input_shape [1,3,1088,1088] \
--mean_values [123.675,116.28,103.53] \
--scale_values [58.395,57.12,57.375] \
--reverse_input_channels
This will produce the OpenVINO IR (Intermediate Representation): an .xml file describing the network topology and a .bin file with the weights. You can then run it with the Python API:
import cv2
import numpy as np
from openvino.inference_engine import IECore

model_xml = "output_dir/segmentation_model.xml"
model_bin = "output_dir/segmentation_model.bin"

ie = IECore()
net = ie.read_network(model=model_xml, weights=model_bin)
input_blob = next(iter(net.input_info))
exec_net = ie.load_network(network=net, device_name='CPU')

img = cv2.imread("img.png")          # BGR, uint8
img = cv2.resize(img, (1088, 1088))  # match the --input_shape used above
image = img.astype(np.float32)
image = np.expand_dims(image, axis=0)  # HWC -> NHWC
image = np.moveaxis(image, 3, 1)       # NHWC -> NCHW
output = exec_net.infer(inputs={input_blob: image})
You can then find the output in the returned dictionary and post-process it in the same manner as you would otherwise. This basically takes the exported model as .xml and .bin (the intermediate representation, before it is compiled to a blob) and runs it on your CPU.
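If it helps, post-processing a typical segmentation output usually looks like the sketch below. Note this is a sketch only: the (1, C, H, W) logits layout and the number of classes are assumptions, so check them against your model's actual output blob.

```python
import numpy as np

# Dummy logits standing in for the array from the dict returned by
# exec_net.infer(...), e.g. output[next(iter(output))].
# Assumed layout: (1, C, H, W) with C classes (4 here is arbitrary).
logits = np.random.rand(1, 4, 1088, 1088).astype(np.float32)

# Per-pixel class index = argmax over the channel axis
mask = np.argmax(logits[0], axis=0).astype(np.uint8)  # shape (H, W)
```

The mask then contains one class index per pixel, which you can colorize or overlay on the input image.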
Can you let me know what the result of the above is?