• DepthAI-v2
  • Train object detector for OAK with TFLite Model Maker

I'm not sure if anyone has. But in case you haven't tried it yet (and sorry if you have, and it's mentioned in that thread and I missed it), have you tried out PINTO0309's tflite2tensorflow converter?

https://github.com/PINTO0309/tflite2tensorflow

And if anyone would know how to do the conversion from the TFLite Model Maker, I'm guessing it's PINTO0309.

I'll bring it up to him in our Discord (https://discord.gg/EPsZHkg9Nx) in the #ai_ml_cv channel to see if he's perhaps done anything with this yet.

Thanks,
Brandon

I haven't tried Model Maker myself, but I have seen a Japanese engineer try it. However, they have not tried converting the TFLite output to OpenVINO. I don't think there is a big difference in the structure of the model. If you can provide a sample model file, I can try the conversion right away. From the information you've given, I can't tell what kind of error or problem you are actually running into.

Thanks for the quick reply. After training the model, I tried to convert it using the following code:

Export model as saved model:

model.export(export_dir='.', export_format=ExportFormat.SAVED_MODEL)

Install OpenVINO:

import os
from urllib.parse import urlparse

## Install tools. OpenVINO takes some time to download - it's ~400MB
!sudo apt-get install -y pciutils cpio
!sudo apt autoremove

## Download installation files
url = "https://registrationcenter-download.intel.com/akdlm/irc_nas/17662/l_openvino_toolkit_p_2021.3.394.tgz"
!wget {url}

## Get the name of the tgz
parsed = urlparse(url)
openvino_tgz = os.path.basename(parsed.path)
openvino_folder = os.path.splitext(openvino_tgz)[0]

## Extract & install openvino
!tar xf {openvino_tgz}
%cd {openvino_folder}
!./install_openvino_dependencies.sh && \
    sed -i 's/decline/accept/g' silent.cfg && \
    ./install.sh --silent silent.cfg
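
As a side note, the filename/folder derivation used above can be sanity-checked on its own. A minimal stdlib-only sketch (the URL is the same one as in the cell above):

```python
import os
from urllib.parse import urlparse

def archive_names(url):
    """Derive the .tgz filename and the extracted folder name from a download URL."""
    tgz = os.path.basename(urlparse(url).path)
    # splitext() strips only the final extension, so the '2021.3.394' part survives
    folder = os.path.splitext(tgz)[0]
    return tgz, folder

url = "https://registrationcenter-download.intel.com/akdlm/irc_nas/17662/l_openvino_toolkit_p_2021.3.394.tgz"
print(archive_names(url))
# ('l_openvino_toolkit_p_2021.3.394.tgz', 'l_openvino_toolkit_p_2021.3.394')
```

This matters because the installer archive extracts into a folder named after the tarball minus `.tgz`, which is what the `%cd {openvino_folder}` line relies on.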

Convert model:

output_dir = '/content/output'

!source /opt/intel/openvino_2021/bin/setupvars.sh && \
    python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --input_model /content/saved_model/saved_model.pb \
    --transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/automl_efficientdet.json \
    --reverse_input_channels \
    --output_dir {output_dir}

Output:

[setupvars.sh] OpenVINO environment initialized
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/content/saved_model/saved_model.pb
	- Path for generated IR: 	/content/output
	- IR output name: 	saved_model
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	None
	- Reverse input channels: 	True
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Use the config file: 	None
	- Inference Engine found in: 	/opt/intel/openvino_2021/python/python3.7/openvino
Inference Engine version: 	2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Model Optimizer version: 	    2021.3.0-2787-60059f2c755-releases/2021/3
2021-06-18 07:33:54.531810: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[ WARNING ]  
Detected not satisfied dependencies:
	test-generator: not installed, required: == 0.1.1

Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf2.sh
Note that install_prerequisites scripts may install additional components.
[ FRAMEWORK ERROR ]  Cannot load input model: TensorFlow cannot read the model file: "/content/saved_model/saved_model.pb" is incorrect TensorFlow model file. 
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message. 
 For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)

I uploaded the model on Discord in the #ai_ml_cv channel.

The conversion was successful. Here are the commands I used:

tflite2tensorflow \
--model_path model_fp32.tflite \
--flatc_path ../flatc \
--schema_path ../schema.fbs \
--output_pb \
--optimizing_for_openvino_and_myriad

saved_model_to_tflite \
--saved_model_dir_path saved_model \
--output_no_quant_float32_tflite

mv saved_model saved_model_bk
cp tflite_from_saved_model/model_float32.tflite .

tflite2tensorflow \
--model_path model_float32.tflite \
--flatc_path ../flatc \
--schema_path ../schema.fbs \
--output_pb \
--optimizing_for_openvino_and_myriad

tflite2tensorflow \
--model_path model_float32.tflite \
--flatc_path ../flatc \
--schema_path ../schema.fbs \
--output_openvino_and_myriad


#### .json edit model_float32.json

1. update
      "inputs": [
        0
      ],
      "outputs": [
        730,
        732,
        727,
        728
      ],
↓
      "inputs": [
        0
      ],
      "outputs": [
        730,
        732,
        727
      ],

2. delete
        {
          "opcode_index": 5,
          "inputs": [
            726,
            2
          ],
          "outputs": [
            728
          ],
          "builtin_options_type": "NONE",
          "custom_options_format": "FLEXBUFFERS"
        },

mv model_float32.tflite model_float32_org.tflite
../flatc -o . -b ../schema.fbs model_float32.json
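
For anyone who wants to script the two hand edits above instead of editing the JSON by hand, here is a minimal sketch using only the stdlib. The tensor indices (728 as the extra output, 730/732/727 as the kept ones) come from the thread; the `graph` dict below is a tiny hypothetical stand-in for one subgraph of the real flatc dump, which in practice you would load with `json.load()` from model_float32.json and pick out of its "subgraphs" list:

```python
def drop_output(graph, tensor_idx):
    """Remove tensor_idx from the subgraph's output list and delete the
    operator that produces it (edits 1 and 2 above, done programmatically)."""
    graph["outputs"] = [o for o in graph["outputs"] if o != tensor_idx]
    graph["operators"] = [op for op in graph["operators"]
                          if tensor_idx not in op.get("outputs", [])]

# Tiny stand-in for one subgraph of the flatc-dumped model_float32.json
graph = {
    "inputs": [0],
    "outputs": [730, 732, 727, 728],
    "operators": [
        {"opcode_index": 5, "inputs": [726, 2], "outputs": [728],
         "builtin_options_type": "NONE", "custom_options_format": "FLEXBUFFERS"},
        {"opcode_index": 3, "inputs": [1], "outputs": [730]},
    ],
}
drop_output(graph, 728)
print(graph["outputs"])  # [730, 732, 727]
```

After writing the modified JSON back out, the `../flatc -o . -b ../schema.fbs model_float32.json` step rebuilds the .tflite from it as shown above.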

tflite2tensorflow \
--model_path model_float32.tflite \
--flatc_path ../flatc \
--schema_path ../schema.fbs \
--output_pb \
--optimizing_for_openvino_and_myriad

tflite2tensorflow \
--model_path model_float32.tflite \
--flatc_path ../flatc \
--schema_path ../schema.fbs \
--output_openvino_and_myriad

#### .xml edit saved_model/openvino/FP16/saved_model.xml

1. update
		<layer id="1481" name="lambda/NonMaxSuppressionV4" type="NonMaxSuppression" version="opset5">
			<data box_encoding="corner" output_type="i32" sort_result_descending="true"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>37629</dim>
					<dim>4</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>1</dim>
					<dim>37629</dim>
				</port>
↓
		<layer id="1481" name="lambda/NonMaxSuppressionV4" type="NonMaxSuppression" version="opset5">
			<data box_encoding="corner" output_type="i32" sort_result_descending="false"/>
			<input>
				<port id="0">
					<dim>1</dim>
					<dim>37629</dim>
					<dim>4</dim>
				</port>
				<port id="1">
					<dim>1</dim>
					<dim>1</dim>
					<dim>37629</dim>
				</port>
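
The attribute flip above (`sort_result_descending` from "true" to "false") can also be scripted rather than hand-edited. A minimal sketch with stdlib `xml.etree`, assuming the layer name matches the one in the IR above; the `xml_text` here is a stripped-down stand-in, whereas in practice you would use `ET.parse()` / `tree.write()` on the real saved_model.xml:

```python
import xml.etree.ElementTree as ET

def disable_sort(xml_text, layer_name="lambda/NonMaxSuppressionV4"):
    """Set sort_result_descending='false' on the named NonMaxSuppression layer."""
    root = ET.fromstring(xml_text)
    for layer in root.iter("layer"):
        if layer.get("name") == layer_name:
            layer.find("data").set("sort_result_descending", "false")
    return ET.tostring(root, encoding="unicode")

# Stripped-down stand-in for saved_model/openvino/FP16/saved_model.xml
xml_text = """<net>
  <layer id="1481" name="lambda/NonMaxSuppressionV4" type="NonMaxSuppression" version="opset5">
    <data box_encoding="corner" output_type="i32" sort_result_descending="true"/>
  </layer>
</net>"""
print(disable_sort(xml_text))
```
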

mkdir -p saved_model/openvino/myriad

${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/lib/intel64/myriad_compile \
-m saved_model/openvino/FP16/saved_model.xml \
-ip U8 \
-VPU_NUMBER_OF_SHAVES 4 \
-VPU_NUMBER_OF_CMX_SLICES 4 \
-o saved_model/openvino/myriad/model.blob