I am trying to convert a .pb TensorFlow model to a MyriadX blob, and I ran into this error message:
Command failed with exit code 1, command: /app/venvs/venv2022_1/bin/python /app/model_compiler/openvino_2022.1/converter.py --precisions FP16 --output_dir /tmp/blobconverter/5c4f5e793b574a1eb6b5619683a86cd0 --download_dir /tmp/blobconverter/5c4f5e793b574a1eb6b5619683a86cd0 --name saved_model --model_root /tmp/blobconverter/5c4f5e793b574a1eb6b5619683a86cd0
Console output (stdout):
```
========== Converting saved_model to IR (FP16)
Conversion command: /app/venvs/venv2022_1/bin/python -- /app/venvs/venv2022_1/bin/mo --framework=tf --data_type=FP16 --output_dir=/tmp/blobconverter/5c4f5e793b574a1eb6b5619683a86cd0/saved_model/FP16 --model_name=saved_model --input= --data_type=FP16 '--mean_values=[0,0,0]' '--scale_values=[255,255,255]' --input_model=/tmp/blobconverter/5c4f5e793b574a1eb6b5619683a86cd0/saved_model/FP16/saved_model.pb
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /tmp/blobconverter/5c4f5e793b574a1eb6b5619683a86cd0/saved_model/FP16/saved_model.pb
- Path for generated IR: /tmp/blobconverter/5c4f5e793b574a1eb6b5619683a86cd0/saved_model/FP16
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Source layout: Not specified
- Target layout: Not specified
- Layout: Not specified
- Mean values: [0,0,0]
- Scale values: [255,255,255]
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- User transformations: Not specified
- Reverse input channels: False
- Enable IR generation for fixed input shape: False
- Use the transformations config file: None
Advanced parameters:
- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False
- Force the usage of new Frontend of Model Optimizer for model conversion into IR: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
OpenVINO runtime found in: /opt/intel/openvino2022_1/python/python3.8/openvino
OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
FAILED:
saved_model
```
Error output (stderr):
```
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "/tmp/blobconverter/5c4f5e793b574a1eb6b5619683a86cd0/saved_model/FP16/saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
frozen graph in text or binary format
inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)
```
I also tried with a pretrained model and got the same output. Can anyone help me?
Thanks.
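For what it's worth, my current guess is that the `mo` command above is being handed the SavedModel's `saved_model.pb` file via `--input_model`, which Model Optimizer tries to parse as a frozen GraphDef (hence "Error parsing message"), whereas a TF2 SavedModel would instead be passed as a directory via `--saved_model_dir`. A minimal sketch of the two invocation styles (the `--input_model` and `--saved_model_dir` flag names are from the OpenVINO 2022.1 `mo` CLI; the `build_mo_cmd` helper itself is purely illustrative, not part of any tool):

```python
def build_mo_cmd(model_path: str, is_frozen_graph: bool) -> list[str]:
    """Illustrative helper: assemble an mo command line for either
    a frozen GraphDef (.pb file) or a TF2 SavedModel (directory)."""
    cmd = [
        "mo",
        "--data_type=FP16",
        "--mean_values=[0,0,0]",
        "--scale_values=[255,255,255]",
    ]
    if is_frozen_graph:
        # A frozen GraphDef .pb is passed as a file via --input_model ...
        cmd.append(f"--input_model={model_path}")
    else:
        # ... but a TF2 SavedModel is passed as a directory via
        # --saved_model_dir, not as its inner saved_model.pb file.
        cmd.append(f"--saved_model_dir={model_path}")
    return cmd
```

If that guess is right, pointing the converter at the SavedModel directory (or at a properly frozen graph) should avoid the parse error, but I have not been able to confirm this.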