StavanRupareliya Unfortunately not, as I do not have access to it anymore.
What is the model size you are trying to use?
Hey!
Sorry for the late reply.
Here is the JSON as of today.
{
    "nn_config": {
        "output_format": "detection",
        "NN_family": "YOLO",
        "input_size": "416x416",
        "NN_specific_metadata": {
            "classes": 1,
            "coordinates": 4,
            "anchors": [
                10, 13, 16, 30, 33, 23, 30, 61, 62, 45,
                59, 119, 116, 90, 156, 198, 373, 326
            ],
            "anchor_masks": {
                "side52": [0, 1, 2],
                "side26": [3, 4, 5],
                "side13": [6, 7, 8]
            },
            "iou_threshold": 0.5,
            "confidence_threshold": 0.65
        }
    },
    "mappings": {
        "labels": ["plate"]
    }
}
Hope this helps!
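In case it's useful, here is roughly how such a config can be applied at pipeline build time. This is only a minimal sketch assuming the standard DepthAI Python API; the file names are placeholders and you'd substitute your own blob and config paths:

import json
import depthai as dai

# Placeholder paths - replace with your own config and blob files
CONFIG_PATH = "yolov5_plate.json"
BLOB_PATH = "yolov5_plate.blob"

with open(CONFIG_PATH) as f:
    config = json.load(f)
meta = config["nn_config"]["NN_specific_metadata"]

pipeline = dai.Pipeline()

# The same setters also exist on YoloSpatialDetectionNetwork
nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath(BLOB_PATH)
nn.setNumClasses(meta["classes"])
nn.setCoordinateSize(meta["coordinates"])
nn.setAnchors(meta["anchors"])
nn.setAnchorMasks(meta["anchor_masks"])
nn.setIouThreshold(meta["iou_threshold"])
nn.setConfidenceThreshold(meta["confidence_threshold"])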
Resolved thanks to Conor on Discord.
The issue was that the anchor mask names change with the actual model input size, so for 640x640 I needed side80/side40/side20 masks in the config. To calculate them, divide the input size by 8, 16 and 32 (640/8 = 80, 640/16 = 40, 640/32 = 20) and use the resulting sides as the anchor mask keys, as sketched below.
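For reference, a small helper for that calculation (plain Python sketch; the function name yolo_anchor_masks is just illustrative):

def yolo_anchor_masks(input_size: int) -> dict:
    # Output grid widths are input_size / 8, / 16 and / 32; the smallest
    # anchors (indices 0-2) belong to the finest grid.
    sides = [input_size // 8, input_size // 16, input_size // 32]
    return {f"side{s}": [3 * i, 3 * i + 1, 3 * i + 2] for i, s in enumerate(sides)}

print(yolo_anchor_masks(416))  # {'side52': [0, 1, 2], 'side26': [3, 4, 5], 'side13': [6, 7, 8]}
print(yolo_anchor_masks(640))  # {'side80': [0, 1, 2], 'side40': [3, 4, 5], 'side20': [6, 7, 8]}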
erik Hey!
Yeah, I'm trying to change the model to a bigger size, currently 640x640. The first model I trained was at 416x416 and it worked fine with the provided JSON.
The issue is that, from what I can tell, the model input size is directly tied to the size of the preview window, and at 416x416 it is just a tad small, so I want to increase it a little.
erik Hey Erik!
Which JSON are you referring to exactly?
The only mentions of JSON in the notebook are save_json=False in the validation step and in the on-device decoding section, where it's explained that we should only change the number of classes and the labels.
What am I missing here?
Thanks in advance!
Hey!
For some time now I've been working with the OAK-D Lite camera, and I've successfully managed to incorporate a YoloV5 model. The issue is that the original model I made is of size 416x416 and the image returned is a bit small. I've been trying to re-train the model using the notebook you provided with a slightly larger image size, but for some reason it simply does not work.
The pipeline starts regularly, and the preview image is indeed larger, but the following error keeps occurring in the console:
[1844301041DDC21200] [344.188] [SpatialDetectionNetwork(1)] [error] Mask is not defined for output layer with width '80'. Define at pipeline build time using: 'setAnchorMasks' for 'side80'.
[1844301041DDC21200] [344.188] [SpatialDetectionNetwork(1)] [error] Mask is not defined for output layer with width '40'. Define at pipeline build time using: 'setAnchorMasks' for 'side40'.
[1844301041DDC21200] [344.188] [SpatialDetectionNetwork(1)] [error] Mask is not defined for output layer with width '20'. Define at pipeline build time using: 'setAnchorMasks' for 'side20'.
The anchors and anchor masks in the config JSON are as follows:
"anchors": [
10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326
],
"anchor_masks": {
"side52": [0, 1, 2],
"side26": [3, 4, 5],
"side13": [6, 7, 8]
},
What could be the issue here? Any help is appreciated!
Hey!
So I've been stuck for a day or so trying to implement object tracking on my YoloV5 model.
The program never even launches completely; it gets stuck at certain points.
Here is the combination of the object tracking example provided in the documentation with my model incorporated.
The JSON config is as follows:
The model works just fine when doing object detection + spatial depth detection, but whenever I try to run any sort of object tracking with it, everything just hangs. I've managed to narrow it down a bit, and from what I can tell it gets stuck at the track = tracklets.get() part, and I'm not sure why.
It's important to say that the example you've provided works flawlessly.
What could be the cause of this? Any help is appreciated!
Edit: A bit of an update.
I've done some tinkering with the pipeline and managed to get it to launch.
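In case it helps anyone else: one common cause of a hang at tracklets.get() is a blocking read on an output queue that never receives a message (for example if the tracker output isn't actually linked). Below is a minimal sketch of the non-blocking pattern, assuming the standard DepthAI Python API and a tracker output stream named "tracklets"; the queue name and the pipeline wiring are assumptions, not the exact code from this thread:

import depthai as dai

# 'pipeline' is assumed to be built elsewhere, with the ObjectTracker's
# .out output linked to an XLinkOut stream named "tracklets".
with dai.Device(pipeline) as device:
    # blocking=False means the host loop is never stalled waiting for data
    qTracklets = device.getOutputQueue(name="tracklets", maxSize=4, blocking=False)

    while True:
        track = qTracklets.tryGet()  # returns None instead of blocking
        if track is not None:
            for t in track.tracklets:
                print(t.id, t.status)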
Hi all!
I'm trying to implement my custom YoloV5 model on the OAK-D Lite camera, but I stumbled upon an issue for which I didn't find much help.
The model was trained and I got an .onnx file, which worked pretty well directly through OpenCV. I did the conversion to .blob using http://blobconverter.luxonis.com/ and obtained my .blob file. I've also made the corresponding JSON config file as instructed, changing the number of classes and their labels as shown at the bottom of the page here: https://github.com/luxonis/depthai-ml-training/blob/master/colab-notebooks/YoloV5_training.ipynb
Now onto the issue.
I've loaded the blob and config into the program, and this error keeps popping up:
[1844301041DDC21200] [33.853] [DetectionNetwork(1)] [error] Mask is not defined for output layer with width '6'. Define at pipeline build time using: 'setAnchorMasks' for 'side6'.
Any help is appreciated!