Hi,
I am training on my custom dataset to detect windows.
I have tried many versions of YOLO. The problem is that when the training is completed, I have to move on to the next step and convert the weights, but I get an error.
Can someone help me?
Thanks.
Hi niloofarsabouri,
Have you also tried the notebooks with newer YOLO versions (YOLOv6-YOLOv8)?
Hi,
Yes, I have now started training with YOLOv7, but I discovered that it uses PASCAL VOC and then converts it to YOLO format.
I will respond to you as soon as possible.
I am following it step by step.
Hi Erik,
Can you explain: in the YOLOv7 notebook it says to prepare your own dataset in YOLO format, so why is the VOC format used and then converted to YOLO format?
Thanks
Hi niloofarsabouri
I think VOC is used here for ease of use, since the dataset is pre-made. It also helps show how to convert between formats, since many datasets exist in VOC. According to the YOLOv6 training notebook:
If you are using a custom dataset, you will have to prepare your dataset for training (see the YOLOv6 wiki). Once you have set up the YAML and sorted labels and images into the right directories, you can continue with the next step.
Luckily, we are using VOC, for which the YAML already exists; you can inspect that YAML to see the expected format. We'll be following the tutorial on how to train YOLOv6 on the VOC dataset.
Thanks,
Jaka
Hi,
Thanks for your reply.
Can you explain what this line does?
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar -O ./VOCdevkit/VOCtrainval_06-Nov-2007.tar
I think this line downloads a dataset, but I don't understand this part: "-O ./VOCdevkit/VOCtrainval_06-Nov-2007.tar".
If I want to substitute my own dataset, what should I do?
Can you explain more?
Thanks
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar -O ./VOCdevkit/VOCtrainval_06-Nov-2007.tar
Let's break it down:
wget
- A command-line utility to download files from the internet. It supports downloading via HTTP, HTTPS, and FTP.
http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
- The URL of the file you want to download, in this case the PASCAL VOC 2007 training/validation data.
-O
- An option for wget that allows you to specify the output filename and path.
./VOCdevkit/VOCtrainval_06-Nov-2007.tar
- Where you want the downloaded file to be saved. The . indicates the current directory, so the file will be saved in the VOCdevkit subdirectory under the provided filename.
If you wish to substitute your own data, just place it in some folder on your machine and point to that folder later in the tutorial, where the dataset path is used.
Thanks,
Jaka
Hi,
As mentioned on the site,
we have to set two paths, one for test and one for train.
My dataset is already in YOLO format, so I skipped the cells that deal with converting VOC to YOLO.
But I got this error:
I don't understand what to do…
Can you explain and help me more?
Hi again,
I just skipped that and am now facing what I think is a new error:
I think the last line, which relates to a class, is not correct!
My class has not been recognized, and these two files in exp5 do not exist!
Sorry for replying so much…
The main problem is:
I don't know where the problem is!
Thanks, all, for the help.
If you've got it trained, the path to the weights should likely be runs/train/exp5/weights/best.pt. Notice the 5 in exp5 (not just exp). The exact number depends on which experiment was successful. Assuming it was the last run, check which exp has the highest number in the runs/train folder.
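If you're not sure which run it was, listing the most recently modified experiment folder from the repository root should tell you (assuming the standard runs/train layout):

ls -dt runs/train/exp*/ | head -n 1   # newest run folder, e.g. runs/train/exp5/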
Hi,
and thanks a lot!
I got the results folder.
I mean, I went to the Luxonis page and converted best.pt to the blob file.
Now, when cloning the git repository, I got this error. I googled it, but it does not make sense to me.
can you help me?
Thanks
Hi niloofarsabouri
Clone it using https:
git clone https://github.com/luxonis/depthai-experiments.git
What you are doing is cloning it using SSH, for which you need a key.
Hope this helps,
Jaka
Hi,
Thanks for your help.
I am facing a new problem…
requests.exceptions.HTTPError: 400 Client Error: BAD REQUEST for url: https://blobconverter.luxonis.com/compile?version=2021.4&no_cache=False
I renamed my file to 2021.4, but I still get the error.
I am still trying to solve the problem but keep getting new error lines…
As you mentioned in your repository:
python3 main_api.py -m <model_name> --config <config_json>
I put yolov7 in <model_name>; I also tried putting yolo, but got the same error…
and best.json for the rest, and I got this error:
python3 main_api.py -m <model_name> --config <config_json>
Can you explain more?
I appreciate your help.
When you specify -m yolo,
it tries to download that model from our cloud. You should instead pass the path to your .blob file there. In the .zip from tools.luxonis.com you get the .json (which you are using correctly) and a .blob file; -m should be the path to that blob file.
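For example, assuming the exported files are named best.json and best_openvino_2022.1_6shave.blob (as later in this thread) and sit next to the script, the call would look something like:

python3 main_api.py -m best_openvino_2022.1_6shave.blob --config best.json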
Hi,
Sorry, I am new to these commands.
Would you mind explaining more… you are great.
I want to write the command.
If I'm not mixing it up, I have to specify my own path from this folder??
I used this command:
python3 main.py --config best.json
It gives me a 400 Bad Request error!!
python3 main_api.py -c best.json -m best_openvino_2022.1_6shave.blob
Sorry for all my questions…
It finally works, thanks to your help.
How can I make it more accurate?
I used 114 images to train it,
and my OAK cannot detect well.
114 images is barely the minimum. You should try to use more. The quality of annotations will also affect how well the model works.
Hello, and thank you for your time.
I was about to test my custom model on the OAK, but in practice the device could not detect the object (false detections).
I think you are all expert machine vision engineers, so I would appreciate it if you could help me solve my problem.
The question is:
I want to detect a blue window; the drone should detect it and pass through the center of the window.
I have searched and would like your opinion: a window has 4 corners, and these features alone are not enough for detection… every object with 4 corners gets detected and labeled blue-window.
What should I do with the OAK in order to detect the window?
For a custom dataset, how many pictures do I need, and do I need images with background?
If you were in my place, what would you do?
Thanks in advance.
Hi niloofarsabouri
Do you absolutely need the NN to perform this task? Can you modify the windows in any way?
We had a competition where the goal was to fly a drone through a series of hoops as quickly as possible. But each hoop had an ArUco marker glued to the pole on which the hoop was mounted. By using the marker, you would know the distance from the hoop center to the drone.
Would this be possible in your case?
Thanks,
Jaka
Hi
Thank you for your response.
No, we are not allowed to use ArUco markers.
I am wondering: is it possible for the OAK to output an unprocessed RGB frame,
just like a normal camera?
Hi niloofarsabouri
I'm not really sure what you mean by non-processing, but you can get frames straight from the device, without any manipulation done to the image.
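For example, something like this (a minimal sketch with no NN or other processing in the pipeline) streams plain RGB preview frames to the host; the preview size is just a placeholder:

import cv2
import depthai as dai

# Build a pipeline that only streams the color camera preview to the host
pipeline = dai.Pipeline()
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(640, 400)
camRgb.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rgb")
camRgb.preview.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    while True:
        frame = q.get().getCvFrame()   # plain BGR numpy array, no processing
        cv2.imshow("rgb", frame)
        if cv2.waitKey(1) == ord("q"):
            break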
Thanks,
Jaka
Hi
Let me explain more.
I mean the OAK operating as a normal camera, just capturing frames. Is there any way to do this?
Also, I think you are experts in machine vision and I need your advice. This image shows a window with 4 corners, colored blue, which must be detected by the OAK.
Our central computer is a Jetson TX2; it has a weak CPU.
I want the OAK to detect this window and other obstacles.
To detect the window with the OAK, what would you suggest?
Do I need deep learning and training, or are there other ways?
Thanks in advance.
Yeah, there are multiple ways to approach it. If you just use the OAK as a camera and do the processing on the Jetson, you could follow and implement something like this square detector (a rough sketch is below).
On the other hand, a neural network should also be able to detect this, given that the shape and color are very distinct. In that case you would need a relatively good and diverse dataset. I would say you'd need at least 100 images for the first iteration, then train the model. After training, you should inspect the results you get on "test" data (data not used for training). If performance is not good enough, collect and annotate more images and repeat.
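A very rough sketch of that square-detector idea, done on the host with plain OpenCV: threshold blue pixels, then keep convex contours with 4 corners. The HSV range and area threshold are guesses you would need to tune for your camera and lighting.

import cv2
import numpy as np

def find_blue_window(frame_bgr):
    # Threshold blue-ish pixels in HSV space (range is an assumption, tune it)
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array((100, 80, 50)), np.array((130, 255, 255)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for cnt in contours:
        if cv2.contourArea(cnt) < 1000:          # skip small blobs / noise
            continue
        approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            candidates.append(approx)            # 4-corner convex blue shape
    return candidates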
Hi and thank you for your response.
Matija, I have just added my custom model to the OAK GUI. When running the custom model I get wrong or false detections.
The OAK shows some reactions, but they are not correct… the detections are not the blue window I mentioned before.
I am expanding my dataset to 250 images, and I have a question: do you think something else has gone wrong?
How can I make the OAK detect correctly??
Or are these false detections due to my dataset?
Apart from increasing the dataset, what other factors can cause these false detections?
The image below shows the detections from the custom model.
And another question: I have noticed in your custom model folder a Python file named handler… what is the handler, how does it work, and do I have to write one for my custom models?
Can you explain it more…
Thanks in advance.
The handler defines how the bboxes are postprocessed and visualized. CC @jakaskerl to explain in more detail.
For training, how are you training your models? The easiest way to get it working would be to follow one of our tutorials (like YoloV6) here: https://colab.research.google.com/github/luxonis/depthai-ml-training/blob/master/colab-notebooks/YoloV6_training.ipynb. Then you can take the .pt weights and upload them to tools.luxonis.com.
In the Inference section, you can see the inference results of the trained model. I'd recommend you check there first. If the model is working correctly, then try it on the OAK (use tools.luxonis.com to make an export). If the predictions are not OK, try to add more quality images and retrain the model.
I'll let @jakaskerl answer how to add a Yolo model to the OAK GUI.
Hi,
My friend insists that the OAK only works with YOLOv3-tiny, and his reason is that in the GUI we get detections with YOLOv3-tiny.
I have trained my model with YOLOv7; with my model I get nothing or false detections.
I am now expanding my dataset to 210 images.
You also mentioned the Inference section… can you explain it more?
Thanks a lot.
niloofarsabouri
Can we first confirm the model behaves as expected before adding handlers to the GUI app?
Could you try to modify the script for the RGB yolo preview? I assume you will be using the YoloDetectionNetwork node (since you are using YOLOv7), so you won't need additional on-host decoding.
Anchors and anchor masks are not needed for YOLOv7 (I think), so you can remove that part. Play around with the IOU threshold and confidence.
Thanks,
Jaka
niloofarsabouri
The RGB yolo example I sent above will help you find out whether the model works properly (pinpointing the issue to either the model or the way depthai_demo is handling the results).
Use the example as is, but change the blob path to your own model's blob file and change the preview size to match the YOLO input size. Remove
detectionNetwork.setCoordinateSize(4)
detectionNetwork.setAnchors([10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319])
detectionNetwork.setAnchorMasks({"side26": [1, 2, 3], "side13": [3, 4, 5]})
and change the other parameters to conform to your NN. A minimal sketch of the modified pipeline setup is shown below.
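Here is a minimal sketch of those modifications on top of the RGB yolo preview example, assuming a 416x416 input, a single "window" class, and a blob named best_openvino_2022.1_6shave.blob; adjust all of these to your actual export:

import depthai as dai

pipeline = dai.Pipeline()

# Color camera, preview sized to match the YOLO input resolution
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(416, 416)          # must match the NN input size
camRgb.setInterleaved(False)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)

# Yolo detection network with on-device decoding
detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
detectionNetwork.setBlobPath("best_openvino_2022.1_6shave.blob")   # your blob
detectionNetwork.setNumClasses(1)              # single "window" class assumed here
detectionNetwork.setConfidenceThreshold(0.5)   # play around with this
detectionNetwork.setIouThreshold(0.5)          # and this
# no setAnchors / setAnchorMasks, per the advice above

camRgb.preview.link(detectionNetwork.input)

# Stream detections back to the host
xoutNN = pipeline.create(dai.node.XLinkOut)
xoutNN.setStreamName("detections")
detectionNetwork.out.link(xoutNN.input)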
Thanks,
Jaka
Hi again,
I divided my dataset into 3 categories: train, test, and val.
As you can see in the pictures, I pointed to the folders, but something went wrong… I cannot understand it…
Thanks.
Hi
What is wrong with this file?
I placed the train, test, and valid folders exactly as in the training notebook.
I am so confused…
Can you explain it in more detail?
I think my training is not running.
Hi @niloofarsabouri
I can't really tell what the issue is here. There looks to be a mistake in the folder structure and model config. Could you maybe share the .ipynb with outputs?
Thanks,
Jaka
The official guide has a mistake in the folder structure; I ran into the same issue.
I modified the folder structure as follows, following suggestions from other resources:
dataset/Image/train
dataset/Image/val
dataset/Image/test
dataset/label/train
dataset/label/val
dataset/label/test
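For reference, the dataset YAML pointing at this layout might look roughly like the following; the class count and name are placeholders, and the exact keys depend on which training notebook you use:

# hypothetical data.yaml matching the folders above
train: dataset/Image/train
val: dataset/Image/val
test: dataset/Image/test
nc: 1                # number of classes
names: ["window"]    # one name per class, in label-index order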