onnx to blob conversion failure
@jakaskerl
the model was created and trained in PyTorch. It is a detection model (retinanet_resnet50_fpn), then converted to ONNX.
@Matja @jakaskerl
I was able to convert my ONNX model to OpenVINO IR (.xml & .bin), but I am still unable to convert the OpenVINO model to .blob with the online blobconverter. I get this error:
`Conversion error
Error message
Command failed with exit code 1, command: /opt/intel/openvino2022_1/tools/compile_tool/compile_tool -m /tmp/blobconverter/eee906938ab54ad8b958fa86c51bbd16/model_final_epoch_39-step_145680/FP16/model_final_epoch_39-step_145680.xml -o /tmp/blobconverter/eee906938ab54ad8b958fa86c51bbd16/model_final_epoch_39-step_145680/FP16/model_final_epoch_39-step_145680.blob -c /tmp/blobconverter/eee906938ab54ad8b958fa86c51bbd16/myriad_compile_config.txt -d MYRIAD -ip FP16
Console output (stdout)
OpenVINO Runtime version ......... 2022.1.0
Build ........... 2022.1.0-7019-cdb9bec7210-releases/2022/1
Error output (stderr)
Cannot create NonMaxSuppression layer NonMaxSuppression_2382 id:25 from unsupported opset: opset9
`
do you have any suggestion on how to handle this?
Yes, non-maximum suppression (NMS) would have to be handled on the host. You can inspect the ONNX in netron.app, find the layer just before the NMS, and pass its name to the Model Optimizer parameters as --output layer_name. You can then use OpenCV's NMS function on the host. Alternatively, you can find a more intuitive explanation of NMS here. Would this work for you?
For Yolo models, we perform NMS on device itself, but we don't expose it as a node yet.
I am having the same problem as wendbb: I have built a PyTorch Lightning custom object detector on the basis of ssdlite320_mobilenet_v3_large and converted the model to ONNX. The Luxonis blob converter throws the same error: "Accessing out-of-range dimension in Dimension[]". It has to be pointed out that the Model Optimizer did not throw any error; the error occurs during the blob conversion. I also ran a local Model Optimizer (openvino-dev 2022.3) on the ONNX file. Blob conversion then gave the same error: NonMaxSuppression - opset9. This is due to the version of the Model Optimizer used (with Python 3.10 one can only install openvino-dev 2022.3 and higher, and this version includes opset9). I edited the .xml file and replaced opset9 with opset5, which yielded the same error as before: out-of-range dimension. Any suggestion to solve this issue is more than welcome.
Can you please share the .xml file or ONNX? As mentioned, you would have to cut away the NMS node and do it on host.
Alternative option is to try downgrading the version (or using an older version in blobconverter).
Thanks for your reply. I copied 2 versions of my ONNX file to my public Dropbox folder https://www.dropbox.com/scl/fo/bgx7wvcukc538xxkn6w1o/ACFeOnNZzTwmOp6dJ6rWmgI?rlkey=yotqxxs9tmhxffr7z6coqqgqz&st=16fwnrbl&dl=0.
I already tried an older version of blobconverter. However, I got errors during the Model Optimizer step.
Some extra info: I downloaded resnet34-ssd1200.onnx from open_model_zoo and converted the file to .xml using openvino-dev 2022.3. To avoid the unsupported NMS opset9, I replaced opset9 with opset8 in the .xml file, and it converted flawlessly to the corresponding blob file. So I guess that the blob conversion error "out-of-range dimension in Dimension[]" of my "ssdlite320_mobilenet_v3_large.onnx" or "ssdlite320_mobilenet_v3_large.xml" is related to another layer/node.
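For reference, the opset substitution described above can be done as a plain text replacement on the IR .xml (the helper name is mine, and the caveat applies: this only changes the declared opset version, it cannot fix real semantic differences between opsets):

```python
from pathlib import Path

def downgrade_opset(xml_path, old="opset9", new="opset8"):
    # crude in-place substitution of the per-layer opset declaration
    # in an OpenVINO IR .xml; only safe when the op's attributes are
    # compatible across the two opset versions
    p = Path(xml_path)
    p.write_text(p.read_text().replace(f'version="{old}"', f'version="{new}"'))
```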
Is this based on some open source code implementation that I could take a look at?
thx for your reply. I made a copy of my jupyter notebook ptl_objectdetection.ipynb available on my public dropbox file https://www.dropbox.com/scl/fo/bgx7wvcukc538xxkn6w1o/ACFeOnNZzTwmOp6dJ6rWmgI?rlkey=yotqxxs9tmhxffr7z6coqqgqz&st=d5xggfcc&dl=0
I noticed that my public dropbox file is not accessible. Please go to my github: pytorch link
Thanks, this makes more sense to me now. The generated ONNX is quite complex; you would have to use onnx-simplifier to simplify it.
Furthermore, MobileNetV3 uses the HardSigmoid operation, which is not supported out of the box. You can replace it with lower-level operations that OpenVINO won't merge back into HardSigmoid, which will then allow the export. Below is a PyTorch implementation; you would have to replace all HardSigmoid uses in the architecture before exporting to ONNX.
```python
import torch
import torch.nn.functional as F

def hard_sigmoid(x, inplace=False):
    # relu6-based formulation: relu6(x + 3) / 6
    return F.relu6(x + 3.0, inplace=inplace) / 6.0

class HS(torch.nn.Module):
    def forward(self, x):
        return hard_sigmoid(x)
```
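A sketch of the replacement step (the helper name is mine, and I have not verified it against your exact checkpoint): torchvision implements the activation as `nn.Hardsigmoid` modules, so one option is to swap them recursively before the ONNX export:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HS(nn.Module):
    # same relu6-based hard sigmoid as above, repeated so this
    # snippet is self-contained
    def forward(self, x):
        return F.relu6(x + 3.0) / 6.0

def replace_hardsigmoid(module: nn.Module) -> nn.Module:
    # recursively swap every nn.Hardsigmoid child for the HS module,
    # so the exported ONNX only contains ReLU6/add/div ops
    for name, child in module.named_children():
        if isinstance(child, nn.Hardsigmoid):
            setattr(module, name, HS())
        else:
            replace_hardsigmoid(child)
    return module
```

You would call `replace_hardsigmoid(model)` on the loaded detector right before `torch.onnx.export`.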
Then, the head contains a GatherND operation and an If node, which you would have to prune away. You would likely have to pass the output flag --output concat_name,softmaxname to the Model Optimizer field in blobconverter. Post-processing of the results would then have to be performed on the host (NMS and filtering of the top-k predictions).
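The top-k filtering part of that host-side post-processing could look roughly like this (the function name, shapes, and thresholds are assumptions, not part of any DepthAI API):

```python
import numpy as np

def topk_filter(scores, boxes, k=100, score_thr=0.05):
    # hypothetical helper: keep at most k boxes above score_thr,
    # sorted by descending confidence; scores (N,), boxes (N, 4)
    scores = np.asarray(scores)
    boxes = np.asarray(boxes)
    keep = np.where(scores > score_thr)[0]
    keep = keep[np.argsort(scores[keep])[::-1][:k]]
    return scores[keep], boxes[keep]
```

The surviving boxes would then go through the OpenCV NMS step mentioned earlier in the thread.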
As you can see, you would have to do quite some work to deploy this. Instead, it would probably be easier to take our YoloV8 training notebook and adapt the initial sections to use your dataset. The export should be rather straightforward afterwards, and you can use DepthAI's YoloDetectionNetwork on the device later. Let me know if this helps.
thx for your reply, i will look into it.