Hi @liu and @barney2074 ,
I apologize for the inconvenience. Due to different ML frameworks and frequent updates, tools and models can vary slightly, so it's hard to keep a consistent stack at all times. That said, our goal is to build a platform that makes this as easy and convenient for the user as possible and, most importantly, follows the motto "robotic vision made simple". We partially started this with https://tools.luxonis.com/, where you can generate a blob and JSON for a YoloV5 model directly from the .pt weights. We want to develop more such tools that require little to no coding from the customer, to make this process easier for the customer as well as for us. This is still in alpha, but feel free to give it a try @liu . Any feedback is more than welcome 🙂
For custom models, conversion from PyTorch is typically easier, as you always go PyTorch -> ONNX -> OpenVINO IR -> blob. With TF, you can go directly to OpenVINO IR, but it depends on which version of TF you're using. There are also plenty of options for saving/exporting TF models, which makes this a bit more complicated. If you are using TF1, you need to generate a frozen .pb file or use a saved model; then you can directly use the model optimizer to generate the IR. For TF2, you are required to use the SavedModel format. For a better explanation, you can check the OpenVINO 2021.4 docs (note that 2022.1 is not yet supported). The easiest way would be to pip install openvino-dev==2021.4.2 and generate the IR by passing --saved_model_dir path/to/saved/dir to the model optimizer, where dir contains the .pb, assets, and variables. After obtaining the IR, you can use blobconverter to compile a blob from the OpenVINO IR. If the model is in NHWC format, then you'll most likely have to provide the --layout "nhwc->nchw" flag to the model optimizer as well. A rough command sketch is below.
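To make the TF2 route a bit more concrete, here is a rough sketch of the commands, assuming a TF2 SavedModel, an FP16 target, and placeholder paths/names (path/to/saved/dir, ir_out, my_model, 6 shaves) that you'd replace with your own. I'm quoting the blobconverter CLI flags from memory, so please double-check them with python3 -m blobconverter --help, or just use the web app at https://blobconverter.luxonis.com/ instead.

```bash
# Install the 2021.4 tooling mentioned above, plus the blobconverter client
python3 -m pip install openvino-dev==2021.4.2 blobconverter

# TF2 SavedModel -> OpenVINO IR (FP16 for the VPU)
mo --saved_model_dir path/to/saved/dir \
   --model_name my_model \
   --data_type FP16 \
   --output_dir ir_out
# If the network expects NHWC input, you may also need: --layout "nhwc->nchw"

# OpenVINO IR -> .blob (compiled in the cloud by blobconverter)
python3 -m blobconverter --openvino-xml ir_out/my_model.xml \
                         --openvino-bin ir_out/my_model.bin \
                         --shaves 6
```

blobconverter also has a Python API (blobconverter.from_openvino(...)) if you prefer to call it from a script instead of the command line.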
The Colab tutorials are designed to show you the whole process - setting up the environment, training, and deployment. We try to keep them up to date, but as new versions of libraries are released, they can occasionally lag behind. That said, if you find a specific bug, please bring it up so we can fix it. Saying "it fails for one reason or another" does not help us improve the documentation, tutorials, or tools in any way.
We are also doing our best to help customers with the whole export and compilation process here on Discuss, on Discord, and even via email. But to do so, we need to know which model you are trying to convert, which versions of the tools you are using, which commands you are calling, what errors you are getting, ... So, please share the exact problems you are having @barney2074 so that we can look into them, fix them, and make this easier for everyone 🙂
Best,
Matija