I am new to the Luxonis space. I have a custom-trained object detection TensorFlow 2.14 SavedModel (.pb), which I converted to OpenVINO (v2023.2) IR format (XML/BIN files). However, I can't find a way to convert this IR model into the MYRIAD blob format. Am I missing something? Is it not supported yet? If not, are there plans to support it? Or can you help me with an alternate approach for converting to blob format?

    Yes, I did try that, but no luck. I'm getting the error:

    Cannot create Interpolate layer map/while/Preprocessor/ResizeImage/resize/ResizeBilinear id:59 from unsupported opset: opset11

    When I use 2022.1, I can't even convert the TensorFlow SavedModel to the OpenVINO (bin/xml) format.

    When I use the TensorFlow model directly with blobconverter, I get the error below:

    [ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "/tmp/blobconverter/c920b1a8d5964296b9bb4b7999d5452a/saved_model/FP16/saved_model.pb" is incorrect TensorFlow model file.
    The file should contain one of the following TensorFlow graphs:
    1. frozen graph in text or binary format
    2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
    3. meta graph

    Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
    For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)

      The support for MyriadX was deprecated in 2023 OpenVINO versions, so unfortunately it's not possible to use 2023. You should try using 2022.3 or 2022.1. You can use model optimizer by installing `openvino-dev==2022.3`, then upload the XML and BIN that you obtain to the blobconverter.
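      As a sketch of that workflow (the model path, input shape, and output directory below are placeholders, not values from this thread), the 2022.3 conversion could look like this:

```shell
# Install the 2022.3 toolchain; Model Optimizer (mo) ships with openvino-dev.
pip install "openvino-dev==2022.3"

# Convert a TF2 SavedModel directory to IR. The resulting model.xml/model.bin
# can then be uploaded to blobconverter.
mo --saved_model_dir ./saved_model \
   --data_type FP16 \
   --input_shape "[1,300,300,3]" \
   --output_dir ./ir_fp16
```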

      You should follow the link from above to see what TF model formats are supported in 2022.

      Amruth

      Appreciate the response.

      When I try with 2022.3, I get this exception when running the mo command; with 2023.1 there are no issues:
      [ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'openvino.tools.mo.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/Slice_3/begin.

      Original exception message: index -1 is out of bounds for axis 0 with size 0

      "The support for MyriadX was deprecated in 2023 OpenVINO versions": does that mean I can't use models optimized with OpenVINO 2023.1 with any OAK-D devices?
      Is there any other way to generate the MyriadX blob format for these devices, since the 2022 versions aren't working? Any suggestions?

        Amruth

        Unfortunately it is not possible to use OpenVINO 2023.

        Can you let me know the architecture of the model? If it's some kind of classification approach with a softmax at the end, you could try removing the softmax when saving the model and exporting without it. Then simply perform the softmax on the host after receiving the outputs from the camera.
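        To illustrate the softmax-on-host idea (the logits below are made-up values, not from a real model), a minimal NumPy version could be:

```python
import numpy as np

def softmax(logits: np.ndarray, axis: int = -1) -> np.ndarray:
    # Shift by the max for numerical stability, then exponentiate and normalize.
    shifted = logits - logits.max(axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=axis, keepdims=True)

# Hypothetical raw logits as they might arrive from the device output queue.
logits = np.array([[2.0, 1.0, 0.1]], dtype=np.float32)
probs = softmax(logits)  # each row sums to 1
```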


          Matija
          So is there no other way to convert IR to blob using OpenVINO 2023?
          I am using OpenVINO 2023.2.

            heebahsaleem

            No, 2022.3 is the latest supported version. The Model Optimizer in the 2023 versions might have a flag like `--use_legacy_frontend`. If it's there, you can try using it and then uploading the XML and BIN to our blobconverter, but it might not succeed.
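            If the flag exists in your 2023 install, the attempt might look like this (the model path and output directory are placeholders, and the resulting IR may still fail in blobconverter):

```shell
# Ask the 2023 Model Optimizer to use the legacy (2022-style) frontend.
mo --saved_model_dir ./saved_model \
   --use_legacy_frontend \
   --output_dir ./ir_legacy
```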


            @Matija

            I'm also experiencing a similar issue when trying to convert a UNet variant with the online blob converter tool.

            [ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'openvino.tools.mo.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node model_1/sequential_24/conv2d_transpose_9/strided_slice/stack. Original exception message: index -1 is out of bounds for axis 0 with size 0

            AFAICT this references the first transpose-convolution layer in the decoder, after the bottleneck. Do you have any other thoughts on workarounds when the conflicting layer is deeper in the network?

            Thanks so much for your help.

              rsinghmn

              Does specifying the input shape help?

              If not, would it be possible to share your process and model (can be untrained) with us?

              Thanks for the quick reply! Sure, here's a link to the files to reproduce:

              Specifying the input helped resolve a previous issue I was having. Here are the input parameters I'm setting in the online converter tool:
              Model parameters: --data_type=FP16 --input_shape=[1,768,768,1]

              Compile parameters: -ip FP16

              FWIW, I did some more testing and found that trying to convert UNets from the segmentation models package also fails similarly when specifying a 'transposed' decoder block type; UNets with 'upsampling' decoder block types convert just fine.

              I also tried converting a model with a single transpose conv2d layer (included in the attachment), and that fails similarly.
