Hi there,

I have a custom stereo depth network (Unimatch) that I would like to deploy on my OAK-D Pro W (RVC2).
As the Blobconverter states, RVC2 only supports OpenVINO opset 8 or older (OpenVINO 2022.1).

Now, when I try to export my traced torch model to ONNX with opset 8, I get a warning that the behavior of upsample layers has changed and that I should use opset 11 or newer:
UserWarning: You are trying to export the model with onnx:Upsample for ONNX opset version 8. This operator might cause results to not match the expected results by PyTorch.
ONNX's Upsample/Resize operator did not match Pytorch's Interpolation until opset 11. Attributes to determine how to transform the input were added in onnx:Resize in opset 11 to support Pytorch's behavior (like coordinate_transformation_mode and nearest_mode).
We recommend using opset 11 and above for models using this operator.

Also, I get the error that exporting upsample_bilinear2d with align_corners=True is not supported at opset 8:
Unsupported: ONNX export of operator upsample_bilinear2d, align_corners == True. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues
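
For reference, here is a minimal repro of the export behavior (a stand-in module, not my actual unimatch export script):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-in for the real network: a single bilinear upsampling with
    # align_corners=True is enough to trigger the messages above.
    class Repro(nn.Module):
        def forward(self, left, right):
            cost = left - right  # placeholder for the actual matching step
            return F.interpolate(cost, scale_factor=2, mode="bilinear",
                                 align_corners=True)

    left = torch.zeros(1, 1, 400, 640)
    right = torch.zeros(1, 1, 400, 640)

    # opset_version=8 emits the warning and then fails with the export error;
    # opset_version=11 exports without complaints.
    torch.onnx.export(Repro().eval(), (left, right), "repro.onnx",
                      opset_version=11)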

All this suggests that I have to export the model with opset 11, which works fine. But of course the blobconverter doesn't like that and gives me this error:
Cannot create Interpolate layer /Resize id:18 from unsupported opset: opset11

I have tried manually adjusting the OpenVINO IR: in the XML, I changed the opset versions of the type="Interpolate" layers (the only layers with opset > 8) to opset4 (the layer version before opset11) and manually added input ports to match the old layer specification. I have also added the missing constants to the .bin file:

[edited IR XML snippet omitted]

This doesn't work, though; I am now stuck at the following blobconverter error, which, unlike the previous ones, I cannot make sense of:
Check 'i < m_inputs.size()' failed at core/src/node.cpp:451:
index '2' out of range in get_input_element_type(size_t i)

And even if it did work, it would probably yield unpredictable results…

Now to my question:
I really like the idea of running my depth inference on the camera, and according to this chart and my model's characteristics, I can expect almost 10 fps, which would be impressive and very much usable. However, the outdated supported opset holds me back from this, which I find quite sad. Does any of you have an idea how I can still get the unimatch model down to opset 8 and deploy it on the OAK?

Any help would be greatly appreciated!
Cheers,
Leonard

    Hi @LeonardFreissmuth,

    I'd recommend trying our new conversion library called ModelConverter, which, I believe, uses OpenVINO version 2022.3.0. Note that you need to set the keep_intermediate_outputs argument to True so that you also get the plain blobs; by default, ModelConverter compiles the blobs into a superblob, which isn't compatible with DepthAI v2.

    Another option is to use an even newer version of OpenVINO (>= 2022.3.0) to convert the ONNX model into an IR using its Model Optimizer (the mo command), and then use blobconverter to obtain the final blob; a rough sketch follows below.
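
    Roughly like this, using the blobconverter Python package (the paths are placeholders, and the mo invocation in the comment is just an example):

    import blobconverter

    # Assumes the IR was produced beforehand with the newer Model Optimizer,
    # e.g.: mo --input_model unimatch.onnx --compress_to_fp16
    blob_path = blobconverter.from_openvino(
        xml="unimatch.xml",  # placeholder path
        bin="unimatch.bin",  # placeholder path
        data_type="FP16",
        shaves=8,
    )
    print(blob_path)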

    With regards,
    Jan


    Hi @JanCuhel,

    Thanks for the tips! I have now gotten back to it and built the Docker image. If I run it with the configuration below, I get further (I suppose): the model optimizer finishes with no unavailable layers and produces an OpenVINO IR.

    However, when running the compile tool, I now get the error that FP64 is not supported. This is odd to me, as compressing the model to FP16 is one thing I specifically tell the model optimizer to do in the YAML:

    name: unimatch
    input_model: shared_with_container/models/unimatch.onnx

    layout: HW
    keep_intermediate_outputs: true
    disable_onnx_simplification: false

    inputs:
      - name: onnx::Cast_0
        shape: [800, 1280]
        data_type: uint8
      - name: onnx::Cast_1
        shape: [800, 1280]
        data_type: uint8

    rvc2:
      number_of_shaves: 8
      number_of_cmx_slices: 8
      mo_args: []
      compile_tool_args: []
      superblob: false
      compress_to_fp16: true

    The call to the compile tool is:

    /opt/intel/tools/compile_tool/compile_tool -d MYRIAD -ip U8 \
        -m shared_with_container/outputs/unimatch_to_rvc2_2024_12_11_21_12_53/intermediate_outputs/unimatch-simplified.xml \
        -o shared_with_container/outputs/unimatch_to_rvc2_2024_12_11_21_12_53/unimatch-rvc2.blob \
        -c /tmp/tmpljx_d_ou.conf

    And this is the error message:

    ERROR    Encountered an exception in the conversion process!
      [... some boilerplate python backtrace, let me know if you need it ...]
      SubprocessException: Command `/opt/intel/tools/compile_tool/compile_tool` finished in 0.70 seconds with return code 1.
      [ STDERR ]:
      FP64 isn't supported

      [ STDOUT ]:
      OpenVINO Runtime version ......... 2022.3.0
      Build ........... 2022.3.0-9213-bdadcd7583c-releases/2022/3
      Network inputs:
          onnx::Cast_0 : u8 / [...]
          onnx::Cast_1 : u8 / [...]
      Network outputs:
          3379/sink_port_0 : f16 / [...]

    And indeed, if I look into the XML of the generated OpenVINO IR, there are still a bunch of FP64-precision ports, even on layers that are not of type "Convert".

    Do you have an idea how this can be, i.e. why the model optimizer does not convert all layers to FP16?
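
    (In case it helps to narrow this down: a quick sketch of how the ONNX file itself could be checked for FP64, with a placeholder path.)

    import onnx
    from onnx import TensorProto

    model = onnx.load("unimatch.onnx")  # placeholder path

    # Initializers (weights/constants) stored in double precision
    fp64_inits = [t.name for t in model.graph.initializer
                  if t.data_type == TensorProto.DOUBLE]

    # Cast nodes that explicitly cast to double precision
    fp64_casts = [n.name for n in model.graph.node
                  if n.op_type == "Cast"
                  and any(a.name == "to" and a.i == TensorProto.DOUBLE
                          for a in n.attribute)]

    print("FP64 initializers:", fp64_inits)
    print("Casts to FP64:", fp64_casts)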

    Thanks already!
    Leonard

    Hi @LeonardFreissmuth,

    I apologize for the delay in my response. Could you please share your model with me so that I can take a look?

    Kind regards,
    Jan