Thanks for the quick reply. After training the model, I tried to convert it using the following code:
Export the model as a SavedModel:
model.export(export_dir='.', export_format=ExportFormat.SAVED_MODEL)
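For completeness, that line relies on the Model Maker ExportFormat enum being in scope; the import path below is the one the Model Maker docs use:

from tflite_model_maker.config import ExportFormat  # provides SAVED_MODEL, TFLITE, ...

The export produces saved_model/saved_model.pb plus a variables/ directory under export_dir.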
Install OpenVINO:
import os
from urllib.parse import urlparse
## Install tools. OpenVINO takes some time to download - it's ~400 MB
!sudo apt-get install -y pciutils cpio
!sudo apt -y autoremove
## Download the installation files
url = "https://registrationcenter-download.intel.com/akdlm/irc_nas/17662/l_openvino_toolkit_p_2021.3.394.tgz"
!wget {url}
## Derive the archive name (and the folder it extracts to) from the URL
parsed = urlparse(url)
openvino_tgz = os.path.basename(parsed.path)
openvino_folder = os.path.splitext(openvino_tgz)[0]
## Extract & install openvino
!tar xf {openvino_tgz}
%cd {openvino_folder}
!./install_openvino_dependencies.sh && \
sed -i 's/decline/accept/g' silent.cfg && \
./install.sh --silent silent.cfg
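As a quick sanity check (just my assumption that the silent install creates the default /opt/intel/openvino_2021 symlink, which is the path the conversion step sources):

## Confirm the toolkit landed where the next step expects it
!ls /opt/intel/openvino_2021/bin/setupvars.sh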
Convert the model:
output_dir = '/content/output'
!source /opt/intel/openvino_2021/bin/setupvars.sh && \
python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
--input_model /content/saved_model/saved_model.pb \
--transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/automl_efficientdet.json \
--reverse_input_channels \
--output_dir {output_dir}
Output:
[setupvars.sh] OpenVINO environment initialized
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /content/saved_model/saved_model.pb
- Path for generated IR: /content/output
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: True
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
- Inference Engine found in: /opt/intel/openvino_2021/python/python3.7/openvino
Inference Engine version: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Model Optimizer version: 2021.3.0-2787-60059f2c755-releases/2021/3
2021-06-18 07:33:54.531810: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[ WARNING ]
Detected not satisfied dependencies:
test-generator: not installed, required: == 0.1.1
Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf2.sh
Note that install_prerequisites scripts may install additional components.
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "/content/saved_model/saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)
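Since the error says the .pb is not a readable frozen graph, I wondered whether a TF2 SavedModel has to be passed as a directory instead - mo.py lists a --saved_model_dir option for that - so maybe the invocation should look like this (untested sketch, same paths as above):

!source /opt/intel/openvino_2021/bin/setupvars.sh && \
python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
--saved_model_dir /content/saved_model \
--transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/automl_efficientdet.json \
--reverse_input_channels \
--output_dir {output_dir}

As for the dependency warning, the log itself points at the fix; I assume either running the install_prerequisites_tf2.sh script it mentions or a plain pip install would clear it:

!pip install test-generator==0.1.1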
I uploaded the model on Discord in the #ai_ml_cv channel.