I have created a blob file using the Colab notebook "Easy_TinyYOLOv4_Object_Detector_Training_on_Custom_Data".
This created a JSON file and a blob file.
I have taken the example "22_2_tiny_yolo_v4_device_side_decoding.py" and changed the
labelMap, setNumClasses and nnPath variables appropriately.
I am getting the error
'System out of resources! Blob compiled for 14 shaves, but only 13 are available in current configuration'
Does anybody have any ideas? What have I missed?
Using a custom trained blob file with Gen2
gmurph The OAK supports a maximum configuration of 13 SHAVEs. The SHAVEs are vector processors in DepthAI/OAK.
They are also used for other tasks on the device, such as reformatting images and running part of the ISP,
so the higher the resolution, the more SHAVEs are consumed.
Read more - https://docs.luxonis.com/en/latest/pages/faq/#what-are-the-shaves
A workaround would be to compile your model for 7 SHAVEs, which leaves headroom for the rest of the pipeline. Change the parameter -sh 14
to -sh 7
before compiling the model and try again.
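If you are compiling through the Colab notebooks discussed in this thread, the SHAVE count lives inside a compiler_params string rather than an -sh flag; a small sketch of patching it down (the helper below is illustrative, not part of any Luxonis tool):

```python
import re

def set_shaves(compiler_params: str, shaves: int) -> str:
    """Rewrite the SHAVE/CMX counts in a MyriadX compiler_params string.

    Works on strings like:
    '-ip U8 -VPU_NUMBER_OF_SHAVES 14 -VPU_NUMBER_OF_CMX_SLICES 14'
    """
    params = re.sub(r"(-VPU_NUMBER_OF_SHAVES)\s+\d+", rf"\1 {shaves}", compiler_params)
    params = re.sub(r"(-VPU_NUMBER_OF_CMX_SLICES)\s+\d+", rf"\1 {shaves}", params)
    return params

# The 14-SHAVE string from the SuperAnnotate notebook, reduced to 7:
original = '-ip U8 -VPU_NUMBER_OF_SHAVES 14 -VPU_NUMBER_OF_CMX_SLICES 14'
print(set_shaves(original, 7))
# -ip U8 -VPU_NUMBER_OF_SHAVES 7 -VPU_NUMBER_OF_CMX_SLICES 7
```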
If I use Gen1 ...
python depthai-demo.py -cnn CNN_DIR_NAME -sh 7
If I use Gen2, how do I set it?
Hello @gmurph, you are using a custom-trained model, so that's not possible as of yet. Since you have followed the Easy_TinyYOLOv4_Object_Detector_Training_on_Custom_Data
notebook, why did you change the number of SHAVEs the model is compiled for? By default it's 8. This is the code I'm referring to:
url = "http://69.164.214.171:8083/compile"  # change if running against other URL
payload = {
    'compiler_params': '-ip U8 -VPU_NUMBER_OF_SHAVES 8 -VPU_NUMBER_OF_CMX_SLICES 8',
    'compile_type': 'myriad'
}
files = {
    'definition': open(xmlfile, 'rb'),
    'weights': open(binfile, 'rb')
}
params = {
    'version': '2021.1',  # OpenVINO version, can be "2021.1", "2020.4", "2020.3", "2020.2", "2020.1", "2019.R3"
}
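For context, those dictionaries are then sent as a multipart POST to the compile endpoint; the notebook excerpt stops short of the actual request, so here is a hedged sketch of what it typically looks like (build_payload and request_blob are illustrative names, and the server behavior is assumed from the snippet above):

```python
def build_payload(shaves: int = 8) -> dict:
    """Form fields for the legacy blobconverter compile endpoint."""
    return {
        'compiler_params': (f'-ip U8 -VPU_NUMBER_OF_SHAVES {shaves} '
                            f'-VPU_NUMBER_OF_CMX_SLICES {shaves}'),
        'compile_type': 'myriad',
    }

def request_blob(xmlfile: str, binfile: str, shaves: int = 8,
                 url: str = "http://69.164.214.171:8083/compile") -> bytes:
    """POST the OpenVINO IR files and return the compiled .blob bytes."""
    import requests  # imported lazily; pip install requests

    with open(xmlfile, 'rb') as xml_f, open(binfile, 'rb') as bin_f:
        response = requests.post(
            url,
            data=build_payload(shaves),
            files={'definition': xml_f, 'weights': bin_f},
            params={'version': '2021.1'},  # OpenVINO version string
        )
    response.raise_for_status()
    return response.content  # the compiled .blob
```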
Just figured it out. I was using the tutorial 'Creating an end-to-end Deep Learning Solution with SuperAnnotate'
and its attached Colab notebook 'SuperAnnotate/OAK: YOLOv4-tiny Deployment.ipynb'.
It has this section:
payload = {
    'compile_type': 'myriad',
    'compiler_params': '-ip U8 -VPU_MYRIAD_PLATFORM VPU_MYRIAD_2480 -VPU_NUMBER_OF_SHAVES 14 -VPU_NUMBER_OF_CMX_SLICES 14'
}
So this is wrong! Can this be fixed in the tutorial?
These figures should be changed to 8.
There may be another issue. I completed the exercise with my custom dataset using the SuperAnnotate Colab file above. Having copied the generated files into the resources/nn directory, I run
python3 depthai_demo.py -cnn DS-v1 -cnn-size 416x416
It then throws the following error:
Using depthai module from: /home/graeme/.local/lib/python3.8/site-packages/depthai.cpython-38-x86_64-linux-gnu.so
Depthai version installed: 2.3.0.0
Downloading /home/graeme/.cache/blobconverter/DS-v1_openvino_2021.3_13shave.blob...
{
"exit_code": 1,
"message": "Command failed with exit code 1, command: /usr/bin/python3 model_compiler/openvino_2021.3/downloader.py --output_dir /tmp/blobconverter/92cd4f8d660f4773930175f3ccfe877f --cache_dir /tmp/modeldownloader/2021_3 --num_attempts 5 --name DS-v1 --model_root /tmp/blobconverter/92cd4f8d660f4773930175f3ccfe877f",
"stderr": "In config \"/tmp/blobconverter/92cd4f8d660f4773930175f3ccfe877f/DS-v1/model.yml\":\n In model \"DS-v1\":\n No XML file for precision \"FP16\"\n",
"stdout": ""
}
Traceback (most recent call last):
File "depthai_demo.py", line 315, in <module>
nn_manager = NNetManager(
File "depthai_demo.py", line 83, in __init__
self.blob_path = BlobManager(model_dir=self.model_dir, model_name=self.model_name).compile(conf.args.shaves)
File "/home/graeme/depthai/depthai_helpers/config_manager.py", line 235, in compile
return blobconverter.compile_blob(
File "/home/graeme/.local/lib/python3.8/site-packages/blobconverter/__init__.py", line 252, in compile_blob
response.raise_for_status()
File "/home/graeme/anaconda3/envs/DepthAI/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: BAD REQUEST for url: http://luxonis.com:8080/compile?version=2021.3
I can access http://luxonis.com:8080. The version mentioned in the Colab file is 'version': '2020.1',
whereas this wants to download version 2021.3. Is it another error in the SuperAnnotate Colab file?
gmurph
Yes, we upgraded the blobconverter under luxonis.com:8080 to the latest version, and the API has also changed, so some of the notebooks need to be updated.
However, for convenience:
- we have kept the old version available under 69.164.214.171:8085, so if you don't have time to migrate you can just change the URL and it will work as it used to. Keep in mind, however, that it is deprecated
- to use the new API easily, we have released a blobconverter CLI tool (also importable inside a Python script), so conversion can be done more easily than before - https://pypi.org/project/blobconverter/
Let me know if you need any assistance with either migrating or changing the URL - happy to help
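As a rough sketch of the second option, the blobconverter package wraps the HTTP request so an IR can be converted in a couple of lines (this assumes the package is installed and has network access; from_openvino and its parameters are taken from the package's PyPI page, and the wrapper name is illustrative):

```python
def compile_custom_blob(xml_path: str, bin_path: str, shaves: int = 8) -> str:
    """Compile an OpenVINO IR to a MyriadX .blob and return the local path."""
    import blobconverter  # pip install blobconverter; needs network access

    return blobconverter.from_openvino(
        xml=xml_path,
        bin=bin_path,
        data_type="FP16",
        shaves=shaves,
    )
```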
I modified the following in .local/lib/python3.8/site-packages/blobconverter/__init__.py
and it did not work:
__defaults = {
    "url": "http://69.164.214.171:8085",
    "version": Versions.v2021_3,
    "shaves": 4,
    "output_dir": Path.home() / Path('.cache/blobconverter'),
    "compile_params": ["-ip U8"],
    "data_type": "FP16",
    "optimizer_params": [
        "--mean_values=[127.5,127.5,127.5]",
        "--scale_values=[255,255,255]",
    ],
    "silent": False,
}
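For what it's worth, the 400 error earlier in the thread suggests the converter assembles its endpoint from the url and version defaults roughly like this (a guess at the internal behavior based on the URL in the error message, not the actual library code):

```python
def compile_url(base_url: str, version: str) -> str:
    """Endpoint the converter appears to hit: <url>/compile?version=<v>."""
    return f"{base_url}/compile?version={version}"

# Matches the URL shown in the 400 Client Error above:
print(compile_url("http://luxonis.com:8080", "2021.3"))
# http://luxonis.com:8080/compile?version=2021.3
```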
If I run it directly from the URL http://69.164.214.171:8085/?version=2021.3,
set 'Convert Sensor File', select 'Tensorflow' and upload the .pb file, it seems to fail uploading the file - it looks like it timed out.
Looks like I will need to go through the new Gen2 videos.
gmurph
With the old blob converter, I was referring to this part of your message:
Just figured it out. I was using the tutorial 'Creating an end-to-end Deep Learning Solution with SuperAnnotate'
and its attached Colab notebook 'SuperAnnotate/OAK: YOLOv4-tiny Deployment.ipynb'.
It has this section:
payload = {
    'compile_type': 'myriad',
    'compiler_params': '-ip U8 -VPU_MYRIAD_PLATFORM VPU_MYRIAD_2480 -VPU_NUMBER_OF_SHAVES 14 -VPU_NUMBER_OF_CMX_SLICES 14'
}
So this is wrong! Can this be fixed in the tutorial?
These figures should be changed to 8.
As the tutorial uses the old API that was originally hosted at http://luxonis.com:8080, if you would like to use this tutorial as-is you can switch the URL for that request to 69.164.214.171:8085 and it should work.
Also, the tutorial mentions deployment of custom models into depthai_demo.py, which was recently migrated to Gen2.
So to follow the tutorial steps, after checking out the depthai
repository, please run
git checkout gen1_main
in order to get the Gen1 version of the repository, which this tutorial originally pointed to.
So I have created a blob file with this code, and it will not run on Gen2 because it uses a different version of OpenVINO. To use this with Gen2 I would have to use the Colab 'Convert a Darknet YOLO model to BLOB', but I have created a YOLOv4-tiny weights file, and that Colab works with YOLOv3-tiny, NOT v4.
I am going to try the 'openvino_21_2_tf_model_converter' Colab file with the .pb TensorFlow file I created; however, I do not have an appropriate pipeline.config file for my model, which is YOLOv4-tiny, NOT the ssd_mobilenet_v2 that the demo pipeline.config refers to. If I use 'Convert a Darknet YOLO model to BLOB', it uses https://github.com/mystic123/tensorflow-yolo-v3, and I have trained in YOLOv4, so that is not appropriate either. What is the smartest way to proceed? Obviously I will need to work with Gen2, or else stop depthai from updating, etc.
To get my trained dataset working, I have now re-built my model in YOLOv3-tiny. It only has two classes. This gave me a new weights file, which I tested using darknet; it works quite well. Using the weights and the appropriate config files, I then ran the converter 'openvino_21_2_darknet_yolo_model_converter.ipynb', which gave me my blob file. I then used 'RGB & TinyYoloV3 decoding on device' to test the blob file on the OAK. I ran it successfully with the example blob file (although it does not appear to be very accurate). I then changed the class names, the number of classes, and the pointer to my blob file. Now it throws an error:
[14442C10011ED7D000] [23.364] [system] [critical] Fatal error. Please report to developers. Log: 'XLinkOut' '220'
Traceback (most recent call last):
File "GMtest.py", line 116, in <module>
inDet = qDet.get()
RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'nn' (X_LINK_ERROR)'
What is the setting I'm missing?
Hi gmurph
Regarding model deployment on DepthAI - we have recently updated our docs with detailed instructions on how to add custom blobs to the Gen2 demo - https://docs.luxonis.com/en/latest/pages/tutorials/first_steps/#using-custom-models
Regarding [14442C10011ED7D000] [23.364] [system] [critical] Fatal error. Please report to developers. Log: 'XLinkOut' '220'
- we recently saw this issue in another thread too (here); it means the output of the NN is too big.
I'm now working on updating the Colab notebooks for the latest OpenVINO, with updated Gen2 instructions, and will circle back when they're ready. I hope this will let you train and deploy the model without issues - sorry for the inconvenience!