Hi again,
I just skipped that step, and now I'm facing what I think is a new error:
I think the last line, which refers to a class, is not correct!
My class is not recognized, and these two files in exp5 do not exist!
Sorry for replying so much…
The main problem is:
I don't know where the problem is!
Thanks, everyone, for the help.
If you've got it trained, the path to the weights should likely be runs/train/exp5/weights/best.pt. Notice the 5 in exp5 (not just exp). The number should match the experiment that was successful. Assuming it was the last run, check which exp has the highest number in the runs/train folder.
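Assuming the default YOLO layout described above (runs/train/exp, exp2, exp3, …), a quick way to find the highest-numbered run is a short stdlib-only script like this (the helper name is made up for the example):

```python
import re
from pathlib import Path

def latest_exp(runs_dir="runs/train"):
    """Return the Path of the highest-numbered exp* folder, or None.

    A plain 'exp' folder (no number) counts as 0, matching how
    YOLO numbers successive training runs.
    """
    best, best_n = None, -1
    for p in Path(runs_dir).iterdir():
        m = re.fullmatch(r"exp(\d*)", p.name)
        if m and p.is_dir():
            n = int(m.group(1) or 0)
            if n > best_n:
                best, best_n = p, n
    return best

# e.g. weights = latest_exp() / "weights" / "best.pt"
```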
Hi,
and thanks a lot!
I got the results folder.
I mean, I went to the Luxonis page and converted best.pt to the file.
Now, when cloning the git repository, I got this error. I googled it, but it doesn't make sense to me.
Can you help me?
Thanks
Hi niloofarsabouri
Clone it using HTTPS:
git clone https://github.com/luxonis/depthai-experiments.git
What you are doing is cloning it using SSH, for which you need a key.
Hope this helps,
Jaka
Hi,
Thanks for your help
I'm facing a new problem…
requests.exceptions.HTTPError: 400 Client Error: BAD REQUEST for url: https://blobconverter.luxonis.com/compile?version=2021.4&no_cache=False
I renamed my file to 2021.4, but I still get the same errors.
I am still trying to solve the problem, but I keep getting new error lines…
As you mentioned in your repository:
python3 main_api.py -m <model_name> --config <config_json>
I put yolov7 for <model_name>, and I also tried putting yolo, but got the same error…
and best.json for the config, and I got this error:
python3 main_api.py -m <model_name> --config <config_json>
Can you explain more?
I appreciate your help.
When you specify -m yolo,
it tries to download the model from our cloud. You should pass the path to your .blob file there instead. The .zip from tools.luxonis.com contains the .json (which you are using correctly) and a .blob file; -m should be the path to that .blob file.
Hi,
sorry, I am new to these commands.
Would you mind explaining more… you are great.
I want to write the command.
If I'm not mixing it up, I have to specify my own path from this folder?
I use this command:
python3 main.py --config best.json
It gives me a 400 Bad Request error!
python3 main_api.py -c best.json -m best_openvino_2022.1_6shave.blob
Sorry for my questions…
It finally works, thanks to your help.
How can I make it more accurate?
I used 114 images to train it,
and my OAK cannot detect well.
114 images is barely the minimum. You should try to use more. The quality of the annotations will also affect how well the model works.
Hello, and thank you for your time.
I was testing my custom model with the OAK, but in practice the device could not detect the object (false detections).
I think you are all expert machine-vision engineers, so I would appreciate your help in solving my problem.
The question is:
I want to detect a blue window that a drone should detect and pass through the center of.
I have searched and want your opinion: a window has 4 corners, and these features are not enough for detection… every object that has 4 corners gets detected and labeled blue-window.
What should I do with the OAK to detect the window?
For a custom dataset, how many pictures do I need, and do I need images with background?
If you were in my place, what would you do?
Thanks in advance.
Hi niloofarsabouri
Do you absolutely need the NN to perform this task? Can you modify the windows in any way?
We had a competition where the goal was to fly a drone through a series of hoops as quickly as possible. Each hoop had an ArUco marker glued to the pole on which the hoop was mounted. Using the marker, you could determine the distance from the hoop's center to the drone.
Would this be possible in your case?
Thanks,
Jaka
Hi
Thank you for your response.
No, we are not allowed to use ArUco markers.
I am wondering: is it possible for the OAK to get an unprocessed RGB frame,
just like a normal camera?
Hi niloofarsabouri
Not really sure what you mean by non-processing, but you can get frames straight from the device, without any manipulation done to the image.
Thanks,
Jaka
Hi
Let me explain more.
I mean the OAK operating as a normal camera, just getting the frames. Is there any way to do this?
And since I think you are experts in machine vision, I need your advice. This image of a window with 4 corners, colored blue, must be detected by the OAK.
Our central computer is a Jetson TX2; it has a weak CPU.
I want the OAK to detect this window and other obstacles.
To detect the window with the OAK, what would you suggest?
Do I need deep learning and training, or are there other ways?
Thanks in advance.
Yeah, there are multiple ways to approach it. If you just use the OAK as a camera and do the processing on the Jetson, you could implement something like this square detector.
On the other hand, a neural network should also be able to detect this, given that the shape and color are very distinct. In that case, you would need a relatively good and diverse dataset. I would say you'd need at least 100 images for the first iteration; then train the model. After training, you should inspect the results you get on "test" data (data not used for training). If the performance is not good enough, collect and annotate more images and repeat.
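As a rough illustration of the classical (non-NN) route, here is a minimal sketch of just the color-based step: find the "blue enough" pixels and take their centroid, which a drone controller could steer toward. It uses plain numpy and a crude RGB threshold; the function name and threshold values are made up for the example, and a real detector (like the OpenCV square detector linked above) would add proper contour and quadrilateral fitting:

```python
import numpy as np

def blue_window_center(rgb, min_pixels=50):
    """Return the (x, y) centroid of strongly blue pixels, or None.

    rgb: HxWx3 uint8 array. The threshold below is a crude stand-in
    for a proper HSV mask: blue must clearly dominate red and green.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (b > 100) & (b > r + 40) & (b > g + 40)
    if mask.sum() < min_pixels:
        return None  # not enough blue in view
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())
```

The centroid is only a first approximation of the window center; fitting the four corners would give the pass-through point more reliably.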
Hi and thank you for your response.
Matija, I have just added my custom model in the OAK GUI. Running the custom model, I got wrong or false detections.
The OAK showed some reactions, but they are not correct… the detections are not the blue window, as I mentioned before.
I am expanding my dataset to 250 images, and I have a question: do you think something else has gone wrong?
How can I make the OAK detect correctly?
Or is this false detection due to my dataset?
Besides increasing the dataset, what other factors can cause these false detections?
The image below shows the detections by the custom model.
And another question: I noticed in your custom model folder a Python file named handler… What is a handler, how does it work, and do I have to write one for my custom models?
Can you explain it more…
Thanks in advance.
The handler is where the bboxes are postprocessed and visualized. CC @jakaskerl to explain in more detail.
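To give a rough idea of what such a handler does (this is a toy stand-in, not the actual depthai-experiments handler API; the function name and the N x 6 layout are assumptions for the example): it takes the raw network output, keeps detections above a confidence threshold, and scales the normalized box coordinates to the frame size so they can be drawn:

```python
import numpy as np

def decode_dets(raw, frame_w, frame_h, conf_thr=0.5):
    """Toy post-processing step: filter and rescale detections.

    raw: N x 6 rows of [xmin, ymin, xmax, ymax, conf, class_id],
    with coordinates normalized to [0, 1].
    Returns pixel-coordinate boxes for detections above conf_thr.
    """
    raw = np.asarray(raw, dtype=float).reshape(-1, 6)
    keep = raw[raw[:, 4] >= conf_thr].copy()
    keep[:, [0, 2]] *= frame_w   # x coordinates to pixels
    keep[:, [1, 3]] *= frame_h   # y coordinates to pixels
    return keep
```

A real handler for a YOLO model would also apply non-maximum suppression and map class ids to label names before drawing.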
For training, how are you training your models? The easiest way to get it working would be to follow one of our tutorials (like YoloV6) here: https://colab.research.google.com/github/luxonis/depthai-ml-training/blob/master/colab-notebooks/YoloV6_training.ipynb. Then you can take the .pt weights and upload them to tools.luxonis.com.
In the Inference section, you can see the inference of the trained model. I'd recommend you check there first. If the model is working correctly, then try it on the OAK (use tools.luxonis.com to make an export). If the predictions are not OK, try to add more and higher-quality images and retrain the model.
I'll let @jakaskerl answer how to add a Yolo to the OAK GUI.