Hi
Thank you for your response.
No, we are not allowed to use ArUco markers.
I am wondering: is it possible for the OAK to get a non-processed frame in RGB,
just like a normal camera?
Hi niloofarsabouri
Not really sure what you mean by non-processing, but you can get frames straight from the device, without any manipulations done to the image.
Thanks,
Jaka
Hi
Let me explain more.
I mean using the OAK as a normal camera that just captures frames. Is there any way to do this?
Also, since you are an expert in machine vision, I need your advice. The image shows a window with 4 corners, colored blue, that must be detected by the OAK.
Our central computer is a Jetson TX2; it has a weak CPU.
I want the OAK to detect this window and other obstacles.
To detect the window with the OAK, what would you suggest?
Do I need deep learning and training, or are there other ways?
Thanks in advance.
Yeah, there are more ways to approach it. If you just use the OAK as a camera and do the processing on the Jetson, you could implement something like this square detector.
On the other hand, a neural network should also be able to detect this, given that the shape and color are very distinct. In that case you would need a relatively good and diverse dataset. I would say you'd need at least 100 images for the first iteration, then train the model. After training, inspect the results you get on "test" data (data not used for training). If performance is not good enough, collect and annotate more images and repeat.
Hi and thank you for your response.
Matija, I have just added my custom model to the OAK GUI. When running the custom model I got wrong or false detections.
The OAK shows some reactions, but they are not correct… the detections are not the blue window I mentioned before.
I am expanding my dataset to 250 images, and I have a question: do you think something else has gone wrong?
How can I make the OAK detect correctly?
Or is this false detection due to my dataset?
Apart from increasing the dataset, what other factors can cause these false detections?
The image below shows the detections by the custom model.
And another question: I noticed in your custom model folder a Python file named handler… what is the handler, how does it work, and do I have to write one for my custom models?
Can you explain it in more detail?
Thanks in advance.
The handler is how the bboxes are postprocessed and visualized. CC @jakaskerl to explain in more detail.
For training, how are you training your models? The easiest way to get it working would be to follow one of our tutorials (like YoloV6) here: https://colab.research.google.com/github/luxonis/depthai-ml-training/blob/master/colab-notebooks/YoloV6_training.ipynb. Then you can take the .pt weights and upload them to tools.luxonis.com.
In the Inference section, you can see the inference of the trained model. I'd recommend you check there first. If the model is working correctly, then try it on the OAK (use tools.luxonis.com to make an export). If the predictions are not OK, add more, higher-quality images and retrain the model.
I'll let @jakaskerl answer how to add a Yolo to the OAK GUI.
Hi,
My friend insists that the OAK only works with YOLOv3-tiny, and his reason is that the GUI does its detection with YOLOv3-tiny.
I have trained my model with YOLOv7. With my model I got nothing or false detections.
I am now expanding my dataset to 210 images.
You also mentioned the Inference section… can you explain it more?
Thanks a lot.
niloofarsabouri
Can we first confirm the model behaves as expected before adding handlers to the GUI app.
Could you try to modify the script for the RGB yolo preview? I assume you will be using the YoloDetectionNetwork node (since you are using YOLOv7), so you won't need additional on-host decoding.
Anchors and anchor masks are not needed for YOLOv7 (I think), so you can remove that part. Play around with the IoU threshold and confidence.
Thanks,
Jaka
niloofarsabouri
The RGB yolo example I sent above will help you find out whether the model works properly (pinpointing the issue to either the model or the way depthai_demo is handling the results).
Use the example as is, but change the blob path to your own model blob file. Change the preview size to match the YOLO input size. Remove
detectionNetwork.setCoordinateSize(4)
detectionNetwork.setAnchors([10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319])
detectionNetwork.setAnchorMasks({"side26": [1, 2, 3], "side13": [3, 4, 5]})
and change the other parameters to conform to your NN.
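Once the modified example runs, YoloDetectionNetwork returns detections with coordinates normalized to 0..1, so you map them onto the frame to draw the boxes. A small sketch of that helper (the name `frame_norm` is mine, modeled on the depthai examples):

```python
import numpy as np

def frame_norm(frame_shape, bbox):
    """Map a normalized [xmin, ymin, xmax, ymax] bbox (0..1) to pixel coords."""
    h, w = frame_shape[:2]
    scale = np.array([w, h, w, h], dtype=np.float32)
    # Clip first, since the NN can emit values slightly outside [0, 1].
    return (np.clip(np.array(bbox, dtype=np.float32), 0.0, 1.0) * scale).astype(int)

# Example: a detection covering the bottom-right quadrant of a 640x480 frame.
px = frame_norm((480, 640, 3), [0.25, 0.5, 0.75, 1.0])  # → [160, 240, 480, 480]
```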
Thanks,
Jaka
Hi again,
I divided my dataset into 3 categories: train, test and val.
As you can see in the pictures, I pointed to the folders, but something went wrong… I cannot understand it…
Thanks.
Hi
What is wrong with this file?
I placed the train, test and valid folders exactly as in the training notebook.
I am so confused…
Can you explain it in more detail?
I think my training is not running.
Hi @niloofarsabouri
Can't really tell what the issue is here. There seems to be a mistake in the folder structure and model config. Could you maybe share the .ipynb with outputs?
Thanks,
Jaka
The official guide has some mistakes in the folder structure; I ran into the same issue.
I modified the folder structure as follows, following suggestions from other resources:
dataset/Image/train
dataset/Image/val
dataset/Image/test
dataset/label/train
dataset/label/val
dataset/label/test
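For reference, a hypothetical YOLOv5/v7-style data.yaml for such a layout (the class name is an assumption). Note that the stock YOLOv5/v7 data loaders derive each label path by substituting `images` → `labels` in the image path, which is why the conventional folder names are `dataset/images/...` and `dataset/labels/...` rather than `Image`/`label`:

```yaml
# Hypothetical data config; adjust paths and class names to your dataset.
train: dataset/images/train
val: dataset/images/val
test: dataset/images/test

nc: 1                    # number of classes
names: ['blue_window']   # assumed class name
```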