Hi JanCuhel,
Thank you for your suggestion! It brought me closer to my milestone, but I still ran into a few problems.
My first attempt was to convert the saved model directly, which failed:
(basic-ml-env) cloudscapespare@Cloudscapes-MacBook-Pro saved_model % cd /Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model
(basic-ml-env) cloudscapespare@Cloudscapes-MacBook-Pro saved_model % ls
assets lim_det_model.onnx saved_model.pb
efficientdet-d0.bin model.onnx tf_model_inference.ipynb
efficientdet-d0.xml modified_model.onnx variables
fingerprint.pb saved_model.onnx
(basic-ml-env) cloudscapespare@Cloudscapes-MacBook-Pro saved_model % mo --input_model=saved_model.pb --reverse_input_channels --transformations_config=/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.8/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]"
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "/Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model/saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message with type 'tensorflow.GraphDef'.
For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)
Based on that error message, I figured I needed to freeze the saved model first. Following the instructions in OpenVINO's official documentation didn't seem to work, so I ran a small Python script to generate a frozen graph:
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Load the saved model
model = tf.saved_model.load("/Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model")
infer = model.signatures["serving_default"]

# Determine input shape and dtype from the loaded model
input_name = list(infer.structured_input_signature[1].keys())[0]
input_shape = infer.structured_input_signature[1][input_name].shape
input_dtype = infer.structured_input_signature[1][input_name].dtype

# Convert the model to a concrete function
concrete_func = tf.function(infer).get_concrete_function(
    tf.TensorSpec(input_shape, input_dtype)
)

# Freeze the concrete function (fold variables into constants)
frozen_concrete_func = convert_variables_to_constants_v2(concrete_func)

# Extract the GraphDef from the frozen ConcreteFunction
frozen_graph_def = frozen_concrete_func.graph.as_graph_def()

# Save the frozen graph
with open("/Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model/frozen_graph.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())
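As a sanity check, the same freezing recipe does work for me on a toy model, so I believe the script itself is sound and the problem is specific to the EfficientDet graph. A minimal self-contained sketch (the model and names here are made up for the test; it passes the signature's ConcreteFunction straight to convert_variables_to_constants_v2):

```python
import tempfile
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# A toy stand-in for the real detector
class Toy(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([3, 2]))

    @tf.function(input_signature=[tf.TensorSpec([1, 3], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)

export_dir = tempfile.mkdtemp()
toy = Toy()
tf.saved_model.save(toy, export_dir, signatures=toy.__call__.get_concrete_function())

# Same freezing recipe as above, applied to the toy SavedModel
loaded = tf.saved_model.load(export_dir)
infer = loaded.signatures["serving_default"]
frozen = convert_variables_to_constants_v2(infer)
graph_def = frozen.graph.as_graph_def()

# A properly frozen graph should contain no variable ops
assert not any(n.op in ("VarHandleOp", "ReadVariableOp") for n in graph_def.node)
print("frozen toy graph:", len(graph_def.node), "nodes")
```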
I then replicated your command line, expecting it to succeed, but both of my attempts failed.
The first one:
(basic-ml-env) cloudscapespare@Cloudscapes-MacBook-Pro saved_model % mo --input_model=frozen_graph.pb --reverse_input_channels --transformations_config=/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.8/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]"
2023-08-15 01:03:29.274017: E tensorflow/core/framework/node_def_util.cc:630] NodeDef mentions attribute resize_if_index_out_of_bounds which is not in the op definition: Op<name=TensorListSetItem; signature=input_handle:variant, index:int32, item:element_dtype -> output_handle:variant; attr=element_dtype:type> This may be expected if your graph generating binary is newer than this binary. Unknown attributes will be ignored. NodeDef: {{node StatefulPartitionedCall/StatefulPartitionedCall/map/while/body/_1405/map/while/TensorArrayV2Write/TensorListSetItem}}
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'openvino.tools.mo.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node StatefulPartitionedCall/StatefulPartitionedCall/map/while/body/_1405/map/while/Preprocessor/ResizeToRange/strided_slice_2/stack.
Original exception message: index -1 is out of bounds for axis 0 with size 0
[ INFO ] You can also try to use new TensorFlow Frontend (preview feature as of 2022.3) by adding `--use_new_frontend` option into Model Optimizer command-line.
Find more information about new TensorFlow Frontend at https://docs.openvino.ai/latest/openvino_docs_MO_DG_TensorFlow_Frontend.html
The second one (this time I simply added the flag suggested by the error message above):
(basic-ml-env) cloudscapespare@Cloudscapes-MacBook-Pro saved_model % mo --input_model=frozen_graph.pb --reverse_input_channels --transformations_config=/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.8/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]" --use_new_frontend
[ ERROR ] Legacy extensions are not supported for the new frontend
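If I understand this error correctly, --transformations_config counts as a legacy extension, so it cannot be combined with --use_new_frontend. One thing I have not tried yet is dropping the transformations config entirely and pointing mo at the SavedModel directory instead of a frozen graph (I am not sure this flag combination is valid for an EfficientDet model):

```shell
mo --saved_model_dir /Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model \
   --reverse_input_channels \
   --input_shape "[1,512,512,3]" \
   --use_new_frontend
```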
Since my TF and OpenVINO versions match the ones you recommended, I wonder whether some hidden factor is complicating my situation. If possible, could you please try the conversion with my saved_model file: link. I just want to confirm whether I am heading in the right direction; otherwise I might train a new model from scratch.
Regards,
Austin