I have the .blob file of my model, but I can't run it on the camera to make detections.
How can I run my custom model on my OAK-D Lite?
Hi RicardoGuadalupeGomezMartnez
Can you explain in more detail:
- what model you are using (random NN, YOLO, MobileNet)
- how you are incorporating your model into the pipeline
- what the output looks like
Thanks,
Jaka
jakaskerl I am using YOLOv5, and my example is based on https://github.com/luxonis/depthai-python/blob/main/examples/Yolo/tiny_yolo.py, but it gives me an error on lines 74 and 75.
Hi RicardoGuadalupeGomezMartnez
When you convert your model to .blob using tools.luxonis.com, you also get a .json file describing the anchor sizes and masks. You need to set those on lines 74 and 75 for the model to work properly.
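As an illustration, here is a minimal sketch of what those two lines could look like when the values are read from the exported .json instead of being hard-coded. The file names and JSON keys below are assumptions based on the config format produced by tools.luxonis.com; check them against your own .json:

```python
import json
import depthai as dai

# Read the metadata that tools.luxonis.com exported alongside the .blob.
# "yolov5.json" and the key names are assumptions; adjust to your file.
with open("yolov5.json") as f:
    meta = json.load(f)["nn_config"]["NN_specific_metadata"]

pipeline = dai.Pipeline()
detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
detectionNetwork.setBlobPath("yolov5.blob")          # path to your .blob
detectionNetwork.setNumClasses(meta["classes"])
detectionNetwork.setCoordinateSize(meta["coordinates"])
# These two calls correspond to lines 74 and 75 of tiny_yolo.py:
detectionNetwork.setAnchors(meta["anchors"])
detectionNetwork.setAnchorMasks(meta["anchor_masks"])
detectionNetwork.setIouThreshold(meta["iou_threshold"])
detectionNetwork.setConfidenceThreshold(meta["confidence_threshold"])
```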
Thanks,
Jaka
jakaskerl
I already got YOLOv5 working, but when I retrain YOLOv8, the labels are no longer picked up and the detections only show class 0 and 1. By the way, thank you very much for the help.
To retrain, I am using https://hub.ultralytics.com/.
RicardoGuadalupeGomezMartnez Perhaps you could just rename the labels so they are correct?
erik My labels are correct, even in the predictions. They are only lost when I convert my model to .blob, and if I change the labels in the JSON the changes are not applied.
I would need to see the code. If you are not reading the JSON in your script, then you need to change the labelMap in the code. If you are using the JSON, it will use those labels. For example, if you run main_api.py from here and specify the modified JSON with -c path/to/config.json and -m path/to/model.blob, it will read the labels from the JSON directly. Labels are not stored in the .blob and should be passed through the code.
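For illustration, a small sketch of reading the labels from the exported config instead of a hard-coded labelMap; the file name and the "mappings"/"labels" keys are assumptions based on the JSON produced by tools.luxonis.com:

```python
import json

# "yolov8.json" is a placeholder; use the config exported with your .blob.
with open("yolov8.json") as f:
    config = json.load(f)

# The exported config keeps the class names under "mappings" -> "labels".
labelMap = config["mappings"]["labels"]

# When decoding detections from the device, detection.label is just an index:
# name = labelMap[detection.label]
```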
Matija And for MobileNet? What is the process to retrain it and convert it for the camera?
The MobileNet tutorial is outdated at the moment. We are working on a refactor and its migration to PyTorch, with an export process that will make it compatible with MobilenetDetectionNetwork. It should be released in the following days.
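For reference, a minimal sketch of consuming a MobileNet-SSD style .blob with the MobileNet detection node in the DepthAI API; the blob path below is a placeholder:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera feeding 300x300 previews, the input size MobileNet-SSD expects.
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(300, 300)
camRgb.setInterleaved(False)

# On-device detection network that decodes MobileNet-SSD output for you.
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")     # placeholder path to your .blob
nn.setConfidenceThreshold(0.5)
camRgb.preview.link(nn.input)

# Send decoded detections back to the host.
xoutNN = pipeline.create(dai.node.XLinkOut)
xoutNN.setStreamName("detections")
nn.out.link(xoutNN.input)

with dai.Device(pipeline) as device:
    qDet = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        inDet = qDet.get()
        for det in inDet.detections:
            print(det.label, det.confidence, det.xmin, det.ymin, det.xmax, det.ymax)
```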
Matija I see. I have been trying to convert it to a blob through http://blobconverter.luxonis.com/ and it always gives me an error:
========== Converting savedmodel to IR (FP16)
Conversion command: /app/venvs/venv2022_1/bin/python -- /app/venvs/venv2022_1/bin/mo --framework=tf --data_type=FP16 --output_dir=/tmp/blobconverter/92db4f7da4e84c2cb280f6df8b659280/savedmodel/FP16 --model_name=savedmodel --input= --data_type=FP16 '--mean_values=[127.5,127.5,127.5]' '--scale_values=[255,255,255]' --input_model=/tmp/blobconverter/92db4f7da4e84c2cb280f6df8b659280/savedmodel/FP16/savedmodel.pb
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model: /tmp/blobconverter/92db4f7da4e84c2cb280f6df8b659280/savedmodel/FP16/savedmodel.pb
    - Path for generated IR: /tmp/blobconverter/92db4f7da4e84c2cb280f6df8b659280/savedmodel/FP16
    - IR output name: savedmodel
    - Log level: ERROR
    - Batch: Not specified, inherited from the model
    - Input layers: Not specified, inherited from the model
    - Output layers: Not specified, inherited from the model
    - Input shapes: Not specified, inherited from the model
    - Source layout: Not specified
    - Target layout: Not specified
    - Layout: Not specified
    - Mean values: [127.5,127.5,127.5]
    - Scale values: [255,255,255]
    - Scale factor: Not specified
    - Precision of IR: FP16
    - Enable fusing: True
    - User transformations: Not specified
    - Reverse input channels: False
    - Enable IR generation for fixed input shape: False
    - Use the transformations config file: None
Advanced parameters:
    - Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False
    - Force the usage of new Frontend of Model Optimizer for model conversion into IR: False
TensorFlow specific parameters:
    - Input model in text protobuf format: False
    - Path to model dump for TensorBoard: None
    - List of shared libraries with TensorFlow custom layers implementation: None
    - Update the configuration file with input/output node names: None
    - Use configuration file used to generate the model with Object Detection API: None
    - Use the config file: None
OpenVINO runtime found in: /opt/intel/openvino2022_1/python/python3.8/openvino
OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
FAILED: savedmodel
Yes, if you check the mo.py call in the tutorial, you can see that many flags regarding the TF operations and so on are passed to it. While this is possible to some degree with the online blobconverter (the path would have to be changed slightly), you cannot modify those flags there the way you can in the cell above the mo.py call.
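For reference, a sketch of passing custom Model Optimizer flags through the blobconverter Python package instead of the web UI; the frozen-graph path is a placeholder, and the keyword names should be checked against the blobconverter documentation for your version:

```python
import blobconverter

# Convert a frozen TF graph to a .blob, forwarding extra Model Optimizer
# flags via optimizer_params (mirroring the mean/scale values from the log above).
blob_path = blobconverter.from_tf(
    frozen_pb="saved_model.pb",          # placeholder path to your frozen graph
    data_type="FP16",
    shaves=6,
    optimizer_params=[
        "--mean_values=[127.5,127.5,127.5]",
        "--scale_values=[255,255,255]",
        # add any other MO flag your model needs here
    ],
)
print(blob_path)
```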
RicardoGuadalupeGomezMartnez Did you find a solution for labels becoming just class 0 and 1 when converting to a blob?
