jakaskerl

Matija

Hi,

Thanks for your help.

I have run into a new problem…

requests.exceptions.HTTPError: 400 Client Error: BAD REQUEST for url: https://blobconverter.luxonis.com/compile?version=2021.4&no_cache=False

Why is this happening?

I renamed my file to 2021.4, but I still get the same errors.

I am still trying to solve the problem, but I keep getting new errors…

As you mentioned in your repository:

python3 main_api.py -m <model_name> --config <config_json>

I put yolov7 in <model_name>; I also tried putting yolo, but got the same error…

and best.json for <config_json>, and I got this error when running:

python3 main_api.py -m <model_name> --config <config_json>

Can you explain more?

I appreciate your help.

    niloofarsabouri

    When you specify -m yolo, it tries to download the model from our cloud. You should pass the path to your .blob file there. So, in the .zip from tools.luxonis.com you will get the .json (which you are using correctly) and a .blob file. -m should be the path to that blob file.
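
    For example, assuming you run it from the folder containing the extracted .zip contents (the blob file name below is hypothetical; use the actual name from your .zip):

    python3 main_api.py -m best_openvino_2021.4_6shave.blob --config best.json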

      Matija

      Hi,

      Sorry, I am new to these commands.

      Would you mind explaining more… you are great.

      I want to write the command.

      If I am not mixing things up, I have to specify my own path from this folder??

      I use this command:

      python3 main.py --config best.json

      It gives me a 400 Bad Request error!!

        Matija

        Sorry for my questions…

        It finally works, thanks to your help.

        How can I make it more accurate?

        I used 114 images to train it.

        and my OAK cannot detect well.

        114 images is barely the minimum. You should try to use more. The quality of annotations will also affect how well the model works.

        3 months later

        @Matija

        Hello, and thank you for your time.

        I went to test my custom model on the OAK, but in reality the device (OAK) could not detect the object (false detections).

        I think all of you are expert machine vision engineers, so I would like to ask if you would mind helping me solve my problem.

        The question is:

        I want to detect a blue window: the drone should detect it and pass through the center of the window.

        I have searched and would like your opinion: a window has 4 corners, and these features are not enough for reliable detection… every object that has 4 corners gets detected and labeled blue-window.

        In order to detect the window, what should I do with the OAK?

        For a custom dataset, how many pictures do I need, and do I need images with backgrounds?

        If you were in my place, what would you do?

        Thanks in advance.

          Hi niloofarsabouri
          Do you absolutely need the NN to perform this task? Can you modify the windows in any way?

          We had a competition where the goal was to fly a drone through a series of hoops as quickly as possible. But each hoop had an ArUco marker glued to the pole on which the hoop was mounted. By using a marker, you would know the distance from the hoop center to the drone.
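
          A rough sketch of that marker approach with OpenCV (assuming OpenCV >= 4.7 with the aruco module; the camera matrix, distortion coefficients and marker size below are placeholders for your own calibration):

          import cv2
          import numpy as np

          # Sketch only: detect one ArUco marker and estimate its distance.
          dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
          detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

          frame = cv2.imread("hoop.jpg")  # hypothetical input frame
          corners, ids, _ = detector.detectMarkers(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

          if ids is not None:
              marker_size = 0.15  # printed marker side length in meters (placeholder)
              # Marker corners in its own frame: top-left, top-right, bottom-right, bottom-left.
              obj_points = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                                    dtype=np.float32) * marker_size / 2
              K = np.eye(3)       # placeholder: your calibrated camera matrix
              dist = np.zeros(5)  # placeholder: your distortion coefficients
              ok, rvec, tvec = cv2.solvePnP(obj_points, corners[0][0], K, dist)
              print("Distance to marker (m):", float(np.linalg.norm(tvec)))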

          Would this be possible in your case?

          Thanks,
          Jaka

            jakaskerl

            Hi

            Thank you for your response.

            No, we are not allowed to use ArUco markers.

            I am wondering: is it possible for the OAK to output an unprocessed RGB frame?

            Just like a normal camera.

              Hi niloofarsabouri
              Not really sure what you mean by non-processing, but you can get frames straight from the device, without any manipulations done to the image.
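
              A minimal sketch with the DepthAI v2 API (no NN in the pipeline, just raw preview frames on the host):

              import cv2
              import depthai as dai

              # Plain camera pipeline: color preview straight to the host, no NN.
              pipeline = dai.Pipeline()
              cam = pipeline.create(dai.node.ColorCamera)
              cam.setPreviewSize(640, 480)
              cam.setInterleaved(False)

              xout = pipeline.create(dai.node.XLinkOut)
              xout.setStreamName("rgb")
              cam.preview.link(xout.input)

              with dai.Device(pipeline) as device:
                  q = device.getOutputQueue("rgb", maxSize=4, blocking=False)
                  while True:
                      frame = q.get().getCvFrame()  # plain BGR numpy frame
                      cv2.imshow("rgb", frame)
                      if cv2.waitKey(1) == ord("q"):
                          break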

              Thanks,
              Jaka

                jakaskerl

                Hi

                Let me explain more.

                I mean the OAK operating as a normal camera, just grabbing frames. Is there any way to do this?

                And since I think you are experts in machine vision, I need your advice. This image shows a window with 4 corners, colored blue, that must be detected by the OAK.

                Our central computer is a Jetson TX2; it has a weak CPU.

                I want the OAK to detect this window and other obstacles.

                To detect the window with the OAK, what would you suggest?

                Do I need deep learning and training, or are there other ways?

                Thanks in advance.

                  niloofarsabouri

                  Yeah, there are multiple ways to approach it. If you just use the OAK as a camera and do the processing on the Jetson, you could implement something like this square detector.
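
                  As a rough illustration of that classical route running on the Jetson (the HSV range below is a guess; tune it for your window's actual blue):

                  import cv2
                  import numpy as np

                  # Sketch: threshold on blue, then look for a large 4-corner contour.
                  def find_blue_window(frame_bgr):
                      hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
                      mask = cv2.inRange(hsv, (100, 80, 50), (130, 255, 255))  # blue-ish, tune
                      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                                     cv2.CHAIN_APPROX_SIMPLE)
                      for c in sorted(contours, key=cv2.contourArea, reverse=True):
                          if cv2.contourArea(c) < 1000:  # ignore small blobs
                              break
                          approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
                          if len(approx) == 4:           # 4 corners -> window candidate
                              return approx.reshape(4, 2)
                      return None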

                  On the other hand, a neural network should also be able to detect this, given that the shape and color are very distinct. In that case you would need a relatively good and diverse dataset. I would say you'd need at least 100 images for the first iteration, then train the model. After training, you should inspect the results you get on "test" data (data not used for training). If performance is not good enough, you should collect and annotate more images and repeat.

                    Matija

                    Hi and thank you for your response.

                    Matija, I have just added my custom model to the OAK GUI. When running the custom model, I got wrong or false detections.

                    The OAK shows some reactions, but they are not correct… the detections are not the blue window, as I mentioned before.

                    I am expanding my dataset to 250 images, and I have a question: do you think something else has gone wrong?

                    How can I make the OAK detect correctly??

                    Or is this false detection due to my dataset?

                    Besides increasing the dataset, what other factors can cause this false detection?

                    The image below shows the detections from the custom model.

                    And another question: I have noticed in your custom model folder a Python file named handler… What is a handler, how does it work, and do I have to write one for my custom models?

                    Can you explain it more…

                    Thanks in advance.

                    The handler is how the bboxes are postprocessed and visualized. CC @jakaskerl to explain in more detail.
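
                    Roughly, it is a small Python file the demo loads next to your model; from memory it looks something like the skeleton below, so treat the exact names and signatures as assumptions and check the custom-model examples:

                    import numpy as np

                    # Rough handler.py skeleton (signatures from memory, not verified).
                    def decode(nn_manager, packet):
                        # Postprocess: turn the raw NN output into a list of detections.
                        raw = np.array(packet.getFirstLayerFp16())
                        detections = []  # parse `raw` according to your model's output layout
                        return detections

                    def draw(nn_manager, data, frames):
                        # Visualize: draw the decoded detections on each preview frame.
                        for name, frame in frames:
                            for det in data:
                                pass  # e.g. cv2.rectangle(frame, ...) per detection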

                    For training, how are you training your models? The easiest way to get it working would be to follow one of our tutorials (like YoloV6) here: https://colab.research.google.com/github/luxonis/depthai-ml-training/blob/master/colab-notebooks/YoloV6_training.ipynb. Then you can take the .pt weights and upload them to tools.luxonis.com.

                    In the Inference section, you can see the inference of the trained model. I'd recommend you check there first. If the model is working correctly, then try it on the OAK (use tools.luxonis.com to make an export). If the predictions are not OK, try to add more images and quality data, and retrain the model.

                    I'll let @jakaskerl answer how to add a Yolo to the OAK GUI.

                      Matija

                      Hi,

                      My friend insists that the OAK only works with tiny YOLOv3; his reason is that in the GUI we have detection with tiny YOLOv3.

                      I have trained my model with YOLOv7. With my model I got nothing or false detections.

                      I am now expanding my dataset to 210 images.

                      And you mentioned the Inference section… Can you explain it more?

                      Thanks a lot.

                        niloofarsabouri
                        Can we first confirm the model behaves as expected before adding handlers to the GUI app?
                        Could you try to modify the script for the RGB yolo preview? I assume you will be using the YoloDetectionNetwork node (since you are using YOLOv7), so you won't need additional on-host decoding.
                        Anchors and anchor masks are not needed for YOLOv7 (I think), so you can remove that part. Play around with the IoU threshold and confidence.

                        Thanks,
                        Jaka

                          niloofarsabouri
                          The RGB yolo example I sent above will help you find out whether the model works properly (pinpointing the issue to either the model or the way depthai_demo is handling the results).

                          Use the example as is, but change the blob path to point to your own model's blob file. Change the preview size to match the YOLO input size. Remove:

                          detectionNetwork.setCoordinateSize(4)
                          detectionNetwork.setAnchors([10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319])
                          detectionNetwork.setAnchorMasks({"side26": [1, 2, 3], "side13": [3, 4, 5]})

                          Change the other parameters to conform to your NN.
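
                          Put together, the relevant part might look like this (the blob file name and class count are placeholders for your own model):

                          import depthai as dai

                          pipeline = dai.Pipeline()

                          cam = pipeline.create(dai.node.ColorCamera)
                          cam.setPreviewSize(640, 640)  # match your YOLO input size
                          cam.setInterleaved(False)

                          detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
                          detectionNetwork.setBlobPath("best_openvino_2021.4_6shave.blob")  # placeholder
                          detectionNetwork.setNumClasses(1)             # e.g. blue-window only
                          detectionNetwork.setConfidenceThreshold(0.5)  # play with this
                          detectionNetwork.setIouThreshold(0.5)         # and this
                          # note: no setCoordinateSize/setAnchors/setAnchorMasks calls here

                          cam.preview.link(detectionNetwork.input)

                          xout = pipeline.create(dai.node.XLinkOut)
                          xout.setStreamName("det")
                          detectionNetwork.out.link(xout.input)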

                          Thanks,
                          Jaka

                            jakaskerl

                            Hi again,

                            I divided my dataset into 3 categories: train, test and val.

                            As you can see in the pictures, I pointed to the folders, but something went wrong… I cannot understand it…

                            Thanks.

                            jakaskerl

                            Hi

                            What is wrong with this file?

                            I placed the train, test and valid folders exactly where the training notebook expects them.

                            I am so confused…

                            Can you explain it in more detail?

                            I think my training is not working.