Andrevancalster
Thanks, this makes more sense to me now. The generated ONNX is quite complex; you would have to run it through onnx-simplifier to simplify it first.
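For reference, a minimal sketch of doing this with onnx-simplifier's Python API (the file names are placeholders):

import onnx
from onnxsim import simplify  # pip install onnxsim

# Load the exported model and let onnx-simplifier fold constants and remove
# redundant nodes; "check" reports whether the simplified graph still
# produces the same outputs as the original.
model = onnx.load("mobilenetv3_detector.onnx")
model_simplified, check = simplify(model)
assert check, "simplified ONNX model could not be validated"
onnx.save(model_simplified, "mobilenetv3_detector_simplified.onnx")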
Furthermore, MobileNetV3 uses the HardSigmoid operation, which is not supported out of the box. You can replace it with lower-level operations that OpenVINO won't merge back into HardSigmoid, which allows the export to go through. Below is a PyTorch implementation; you would have to replace all HardSigmoid usages in the architecture before exporting to ONNX.
import torch
import torch.nn.functional as F

def hard_sigmoid(x, inplace=False):
    # relu6(x + 3) / 6 is mathematically equivalent to HardSigmoid
    return F.relu6(x + 3, inplace) / 6

class HS(torch.nn.Module):
    def forward(self, x):
        return hard_sigmoid(x)
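How you wire this in depends on your model, but if the architecture instantiates torch.nn.Hardsigmoid modules (as torchvision's MobileNetV3 does), a rough sketch of swapping them out before export could look like the following; replace_hardsigmoid, the dummy input size, and the file name are just illustrative:

import torch
import torch.nn as nn

def replace_hardsigmoid(module):
    # Recursively swap every nn.Hardsigmoid submodule for the HS module above.
    for name, child in module.named_children():
        if isinstance(child, nn.Hardsigmoid):
            setattr(module, name, HS())
        else:
            replace_hardsigmoid(child)

model = ...  # your MobileNetV3-based detection model
replace_hardsigmoid(model)
model.eval()
torch.onnx.export(model, torch.randn(1, 3, 300, 300), "mobilenetv3_detector.onnx", opset_version=11)

If HardSigmoid is called functionally (e.g. via F.hardsigmoid) rather than as a module, you would have to patch those call sites in the source instead.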
Then, the head contains a GatherND operation and an If node, which you would have to prune away. You would likely have to specify the output flag --output concat_name,softmaxname in the Model Optimizer parameters field in blobconverter. Post-processing of the results would then have to be performed on the host (NMS and filtering of the top-k predictions).
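To find the actual output names to pass to --output, you can inspect the simplified graph with the onnx package; looking for Concat and Softmax nodes is just an assumption based on a typical detection head, so adjust as needed:

import onnx

model = onnx.load("mobilenetv3_detector_simplified.onnx")  # placeholder file name
for node in model.graph.node:
    # Print candidate cut points for the --output flag.
    if node.op_type in ("Concat", "Softmax"):
        print(node.op_type, node.name, list(node.output))

The host-side post-processing could then be a simple score filter plus NMS, for example with torchvision.ops.nms; the tensor layout and thresholds below are assumptions:

import torch
from torchvision.ops import nms

def postprocess(boxes, scores, score_thr=0.5, iou_thr=0.5, topk=100):
    # boxes: (N, 4) in xyxy format, scores: (N,) per-box confidence
    keep = scores > score_thr
    boxes, scores = boxes[keep], scores[keep]
    keep = nms(boxes, scores, iou_thr)[:topk]
    return boxes[keep], scores[keep]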
As you can see, you would have to do quite a lot of work to deploy this. Instead, it would probably be easier to take our YoloV8 training notebook and adapt the initial sections to use your dataset. Export should be rather straightforward afterwards, and you can use DepthAI's YoloDetectionNetwork node on the device later. Let me know if this helps.
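For a rough idea of the on-device part, a YoloDetectionNetwork pipeline looks roughly like this; the blob path, input size, and class count are placeholders for whatever your trained model uses:

import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)    # must match the blob's input size
cam.setInterleaved(False)

nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath("yolov8n.blob")  # placeholder path to your converted blob
nn.setNumClasses(80)            # placeholder class count
nn.setCoordinateSize(4)
nn.setConfidenceThreshold(0.5)
nn.setIouThreshold(0.5)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections")
    detections = q.get().detections  # decoded on device, no host-side NMS needed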