I recently received my AI Kit, and I am trying to start with the popular MNIST dataset and model for handwritten digit detection. Is there any tutorial or sample code available for converting an existing mnist.h5 model to be compatible with my OAK-1? I eventually want to use it for multiple-digit detection with bounding boxes. Please give me some ideas on how to start. Thanks
Convert an existing mnist.h5 model to be compatible with OAK-1
It is really advisable to use this example to train on a dataset, in this case MNIST - https://github.com/luxonis/depthai-ml-training/blob/master/colab-notebooks/Easy_TinyYOLOv4_Object_Detector_Training_on_Custom_Data.ipynb
However, training some models can take a long time, and if you do not wish to train yours all over again, you'll have to convert the model.h5 file to OpenVINO IR format, i.e. xml and bin files. To convert it, follow this - https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#keras_h5
Alternative source - https://www.dlology.com/blog/how-to-run-keras-model-inference-x3-times-faster-with-cpu-and-intel-openvino-1/
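As a rough illustration, here is a minimal sketch of that conversion, assuming a plain Keras MNIST classifier saved as mnist.h5 with a 28x28x1 input. The file names, the input shape, and the Model Optimizer entry point (mo, mo.py, or mo_tf.py, depending on your OpenVINO install) are placeholders you'd adjust:

```python
# Minimal sketch: Keras .h5 -> TensorFlow SavedModel -> OpenVINO IR (.xml/.bin).
# File names and the input shape are assumptions for a standard MNIST model.
import subprocess
import tensorflow as tf

model = tf.keras.models.load_model("mnist.h5")  # your trained Keras model
model.save("mnist_saved_model")                 # exports a SavedModel directory

# Invoke the Model Optimizer; the entry point and flags can differ
# between OpenVINO versions, so check the linked docs for yours.
subprocess.run([
    "mo",
    "--saved_model_dir", "mnist_saved_model",
    "--input_shape", "[1,28,28,1]",  # NHWC input of a typical MNIST net
    "--data_type", "FP16",           # the MyriadX on OAK devices runs FP16
    "--output_dir", "ir_output",
], check=True)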
Once you have the xml and bin files, converting them to a blob is extremely easy using this official doc - https://docs.luxonis.com/projects/api/en/gen2_develop/tutorials/local_convert_openvino/
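If you'd rather not set up the local compile_tool route from that doc, Luxonis also publishes a blobconverter Python package (pip install blobconverter) that sends the IR to their online converter. A sketch, where the IR file names and the shave count are assumptions:

```python
# Sketch: compile OpenVINO IR to a MyriadX .blob via Luxonis' online converter.
import blobconverter

blob_path = blobconverter.from_openvino(
    xml="ir_output/saved_model.xml",  # adjust to the names MO produced
    bin="ir_output/saved_model.bin",
    data_type="FP16",
    shaves=6,                         # a common choice for OAK devices
)
print("Compiled blob at:", blob_path)
```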
You could use the converted blob file in a custom script to perform inference. You can share updates or errors you face during conversion, and we'd help in solving them.
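For that last step, here is a minimal sketch of such a custom script using the gen2 DepthAI API, assuming the blob is a plain classifier with a 28x28 input. Note that a grayscale MNIST model would also need matching preprocessing (e.g. channel handling baked into the IR via Model Optimizer flags), which is glossed over here, and that drawing boxes around multiple digits requires a detector model such as the tiny-YOLO route from the training notebook above:

```python
# Sketch: run a compiled classifier blob on color-camera frames with DepthAI.
# "model.blob", the 28x28 input size, and the raw-score output are assumptions.
import depthai as dai
import numpy as np

pipeline = dai.Pipeline()

cam = pipeline.createColorCamera()
cam.setPreviewSize(28, 28)       # must match the network's input resolution
cam.setInterleaved(False)

nn = pipeline.createNeuralNetwork()
nn.setBlobPath("model.blob")     # the blob produced in the previous step
cam.preview.link(nn.input)

xout = pipeline.createXLinkOut()
xout.setStreamName("nn")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("nn", maxSize=4, blocking=False)
    while True:
        scores = q.get().getFirstLayerFp16()  # raw scores from the output layer
        print("Predicted digit:", int(np.argmax(scores)))
```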
To summarize, the process goes this way -
model.h5 ------> xml and bin ------> blob file ------> use model in custom script
Hi,
Thanks for helping to solve this issue.
I tried to find a solution with your suggestions.
Unfortunately, I failed. I am getting the following error. I am using macOS Big Sur 11.2.1 (Intel):
nvcc -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=[sm_50,compute_50] -gencode arch=compute_52,code=[sm_52,compute_52] -gencode arch=compute_61,code=[sm_61,compute_61] -Iinclude/ -I3rdparty/stb/include -DOPENCV pkg-config --cflags opencv
-DGPU -I/usr/local/cuda/include/ -DCUDNN --compiler-options "-Wall -Wfatal-errors -Wno-unused-result -Wno-unknown-pragmas -fPIC -Ofast -DOPENCV -DGPU -DCUDNN -I/usr/local/cudnn/include" -c ./src/convolutional_kernels.cu -o obj/convolutional_kernels.o
nvcc fatal : Unsupported gpu architecture 'compute_30'
Makefile:162: recipe for target 'obj/convolutional_kernels.o' failed
make: *** [obj/convolutional_kernels.o] Error 1
chmod: cannot access './darknet': No such file or directory
This is the dataset I am trying to convert to a blob file:
!curl -L "https://app.roboflow.com/ds/JbWNa3X1Kt?key=mRgibPHcON" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
Can you please create a blob file with this (if Mac does not support it)?
rajeshktym Interesting. I'm not sure. But I think we have some folks in our Discord who could probably help. It does look like the problem is the nvcc compiler. I'm guessing it doesn't work on Mac, given that it is trying to do CUDNN access, etc.
We do the darknet conversion in Colab and in our example scripts, and Roboflow does as well. Anyway, I think there are folks in our Discord who can help with this, if you haven't already brought it up in the ml_training channel there:
Thoughts?
Thanks,
Brandon