• Issues converting YOLOv8 to blob.

I'm running into the following issue: when I try to convert my OpenVINO model to a blob to use it with my OAK cameras, I get the following output:

Error Message:

Command failed with exit code 1, command: /app/venvs/venv2021_4/bin/python /app/model_compiler/openvino_2021.4/converter.py --precisions FP16 --output_dir /tmp/blobconverter/bfcddb9b17ed42a788bd02adc1784b3f --download_dir /tmp/blobconverter/bfcddb9b17ed42a788bd02adc1784b3f --name patentesv1 --model_root /tmp/blobconverter/bfcddb9b17ed42a788bd02adc1784b3f

Console output (stdout):

========== Converting patentesv1 to IR (FP16)

Conversion command: /app/venvs/venv2021_4/bin/python -m mo --framework=onnx --data_type=FP16 --output_dir=/tmp/blobconverter/bfcddb9b17ed42a788bd02adc1784b3f/patentesv1/FP16 --model_name=patentesv1 --data_type=FP16 '--mean_values=[127.5,127.5,127.5]' '--scale_values=[255,255,255]' --input_model=/tmp/blobconverter/bfcddb9b17ed42a788bd02adc1784b3f/patentesv1/FP16/patentesv1.onnx

Model Optimizer arguments:

Common parameters:

- Path to the Input Model: 	/tmp/blobconverter/bfcddb9b17ed42a788bd02adc1784b3f/patentesv1/FP16/patentesv1.onnx

- Path for generated IR: 	/tmp/blobconverter/bfcddb9b17ed42a788bd02adc1784b3f/patentesv1/FP16

- IR output name: 	patentesv1

- Log level: 	ERROR

- Batch: 	Not specified, inherited from the model

- Input layers: 	Not specified, inherited from the model

- Output layers: 	Not specified, inherited from the model

- Input shapes: 	Not specified, inherited from the model

- Mean values: 	[127.5,127.5,127.5]

- Scale values: 	[255,255,255]

- Scale factor: 	Not specified

- Precision of IR: 	FP16

- Enable fusing: 	True

- Enable grouped convolutions fusing: 	True

- Move mean values to preprocess section: 	None

- Reverse input channels: 	False

ONNX specific parameters:

[ WARNING ] Failed to import Inference Engine Python API in: PYTHONPATH

[ WARNING ] libpython3.6m.so.1.0: cannot open shared object file: No such file or directory

[ WARNING ] Failed to import Inference Engine Python API in: /opt/intel/openvino2021_4/python/python3.8

[ WARNING ] libpython3.6m.so.1.0: cannot open shared object file: No such file or directory

[ WARNING ] Could not find the Inference Engine Python API. At this moment, the Inference Engine dependency is not required, but will be required in future releases.

[ WARNING ] Consider building the Inference Engine Python API from sources or try to install OpenVINO (TM) Toolkit using "install_prerequisites.sh"

Model Optimizer version: 2021.4.2-3974-e2a469a3450-releases/2021/4

FAILED:

patentesv1

Error output (stderr):

[ ERROR ] Cannot infer shapes or values for node "/model.22/dfl/Reshape".

[ ERROR ] Number of elements in input [ 1 34 8400] and output [1, 4, 16, 8400] of reshape node /model.22/dfl/Reshape mismatch

[ ERROR ]

[ ERROR ] It can happen due to bug in custom shape infer function <function Reshape.infer at 0x7f5c9d532790>.

[ ERROR ] Or because the node inputs have incorrect values/shapes.

[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).

[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.

[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "/model.22/dfl/Reshape" node.

For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)

More context:

  • I have the same model working in its yolov5s version.
  • Converting the stock yolov8s.pt file works without any problems.
  • My current model, the one causing the issue, is based on yolov8s.
  • The tool I'm using is https://blobconverter.luxonis.com/ (a rough equivalent of the conversion via the blobconverter Python package is sketched below).
  • I've always used http://tools.luxonis.com/, but this time it fails with an error and doesn't show any information.
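
In case it helps reproduce the problem, below is roughly what I understand to be the equivalent conversion through the blobconverter Python package. This is only my assumption of what the web tool does under the hood (ONNX → OpenVINO 2021.4 IR in FP16 → MyriadX blob, with the mean/scale values shown in the log above):

```python
import blobconverter

# My assumption of the equivalent web-tool conversion:
# ONNX -> OpenVINO 2021.4 IR (FP16) -> MyriadX blob,
# using the same mean/scale values as in the log above.
blob_path = blobconverter.from_onnx(
    model="patentesv1.onnx",   # my YOLOv8s-based export
    data_type="FP16",
    shaves=6,
    version="2021.4",
    optimizer_params=[
        "--mean_values=[127.5,127.5,127.5]",
        "--scale_values=[255,255,255]",
    ],
)
print(blob_path)
```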

Any recommendations?

Thank you.

    Hi MarianoGarcaMatto
    Looks like there is a shape mismatch:

    MarianoGarcaMatto [ ERROR ] Number of elements in input [ 1 34 8400] and output [1, 4, 16, 8400] of reshape node /model.22/dfl/Reshape mismatch

    Could you revisit your model architecture and check whether a reshape layer/operation has inconsistent input/output shapes that would cause this error? The input tensor holds 1 × 34 × 8400 = 285,600 elements, while the requested output shape [1, 4, 16, 8400] needs 537,600, so that Reshape expects 64 (4 × 16) channels on its input rather than 34.
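
    One quick way to confirm this on the exported ONNX is to run shape inference and print the tensor shapes around that node. A minimal sketch, assuming the onnx Python package is installed and that the node name matches the one in the error:

    ```python
    import onnx
    from onnx import shape_inference

    # Sketch: inspect the tensors feeding the failing Reshape node.
    model = shape_inference.infer_shapes(onnx.load("patentesv1.onnx"))

    # Map every tensor name to its inferred shape.
    shapes = {}
    for vi in list(model.graph.value_info) + list(model.graph.input) + list(model.graph.output):
        dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
                for d in vi.type.tensor_type.shape.dim]
        shapes[vi.name] = dims

    # Print the inputs/outputs of the node named in the Model Optimizer error.
    for node in model.graph.node:
        if node.name == "/model.22/dfl/Reshape":
            print("inputs :", [(name, shapes.get(name)) for name in node.input])
            print("outputs:", [(name, shapes.get(name)) for name in node.output])
    ```

    For a standard YOLOv8 head, the tensor entering that DFL Reshape should carry 64 channels (4 × reg_max, with reg_max = 16), so a 34-channel input suggests the head or the export settings were modified somewhere.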

    Thanks,
    Jaka