Hi @seanmabli,
I apologize for the delay in our response. I see from the code you shared that you are setting the model type to caffe, which is incorrect since you are trying to convert a TensorFlow model. However, even after setting the correct model type, I got the following error:
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "/tmp/blobconverter/ac6cd1d00994439f80bf19aca9531512/saved_model/FP16/saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format.
Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)
It seems that the model format is incorrect. If you haven't done so already, I'd recommend trying to freeze your model first. Here's a tutorial on how to do it. I'd also recommend using our web app or the blobconverter package (CLI or Python API), which are more user-friendly.
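For reference, freezing a TF1-style SavedModel can look roughly like the sketch below. This is only an illustration, not code from your project: the paths, the "serve" tag, and the output node name (ArgMax, matching the DeepLab example further down) are placeholders you'd adapt to your model:

import tensorflow as tf

# Load the SavedModel into a session, then bake all variables into
# constants so the graph becomes a single self-contained frozen .pb.
with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    tf.compat.v1.saved_model.loader.load(sess, ["serve"], "/path/to/saved_model")
    frozen_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess,
        sess.graph.as_graph_def(),
        ["ArgMax"],  # output node name(s) - model-specific placeholder
    )

# Serialize the frozen graph to disk in binary format.
with tf.io.gfile.GFile("/path/to/frozen_model.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())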
Here's an example of how to use the blobconverter Python package to convert a frozen TensorFlow model:
import blobconverter

blob_path = blobconverter.from_tf(
    frozen_pb="/path/to/deeplabv3_mnv2_pascal_train_aug.pb",  # frozen graph, not a SavedModel dir
    data_type="FP16",  # precision to compile for
    shaves=5,  # number of SHAVE cores to compile for
    optimizer_params=[  # extra arguments forwarded to OpenVINO's Model Optimizer
        "--reverse_input_channels",
        "--input_shape=[1,513,513,3]",
        "--input=1:mul_1",
        "--output=ArgMax",
    ],
)
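from_tf returns the local path to the compiled .blob, which you can then load directly in your DepthAI pipeline (e.g., via setBlobPath on a NeuralNetwork node).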
Best,
Jan