Hi niloofarsabouri

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar -O ./VOCdevkit/VOCtrainval_06-Nov-2007.tar

Let's break it down:

  1. wget - This is a command-line utility to download files from the internet. It supports downloading via HTTP, HTTPS, and FTP.
  2. http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar - The URL of the file you want to download, in this case, the PASCAL VOC 2007 training/validation data.
  3. -O - An option for wget that allows you to specify the output filename and path.
  4. ./VOCdevkit/VOCtrainval_06-Nov-2007.tar - Where you want the downloaded file to be saved. The . indicates the current directory, so it will save in the VOCdevkit subdirectory with the provided filename.
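One small note: wget with -O writes to the exact path given and will not create missing directories, so if the VOCdevkit folder does not exist yet, create it first:

mkdir -p ./VOCdevkit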

If you wish to substitute your own data, just place it in a folder on your machine and point to that path later in the tutorial, wherever the dataset path is used.

Thanks,
Jaka

    jakaskerl

    Hi,

    As mentioned on the site,

    we have to set two paths, one for train and one for test.

    My dataset is already in YOLO format, so I skipped the cells that deal with converting VOC to YOLO.

    But I got this error:

    I don't understand what to do…

    Can you explain and help me further?

    @jakaskerl

    Hi again,

    I just skipped that for now, and I think I'm facing a new error:

    I think the last line, which relates to a class, is not correct!

    My class has not been recognized, and these two files in exp5 do not exist!

    jakaskerl

    Sorry for replying so much…

    The main problem is:

    I don't know where the problem is!

    Thanks, everyone, for the help.

      niloofarsabouri

      If you've got it trained, the path to the weights should likely be runs/train/exp5/weights/best.pt. Notice the 5 in exp5 (not just exp); the number depends on which training run was successful. Assuming it was the last run, check which exp folder has the highest number in the runs/train folder.
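      For example, a quick way to check (assuming the default runs/train output layout):

      ls runs/train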

        Matija

        Hi,

        and thanks a lot!

        I got the results folder.

        I mean, I go to the Luxonis page and convert best.pt to the blob file.

        Now, when cloning the git repository, I got this error. I googled it, but it doesn't make sense to me.

        can you help me?

        Thanks

          jakaskerl

          Matija

          Hi,

          Thanks for your help

          I'm facing a new problem…

          requests.exceptions.HTTPError: 400 Client Error: BAD REQUEST for url: https://blobconverter.luxonis.com/compile?version=2021.4&no_cache=False

          Why is this happening?

          I renamed my file to 2021.4, but I still get the same errors.

          I am still trying to solve the problem, but I keep getting new error lines…

          As you mentioned in your repository:

          python3 main_api.py -m <model_name> --config <config_json>

          I put yolov7 for <model_name>; I also tried yolo, but got the same error…

          I used best.json for the config, and I got this error:

          python3 main_api.py -m <model_name> --config <config_json>

          Can you explain more?

          I appreciate your help.

            niloofarsabouri

            When you specify -m yolo, it tries to download the model from our cloud. You should pass the path to your .blob file there. So, in the .zip from tools.luxonis.com you will get the .json (which you are using correctly) and a .blob file. -m should be the path to that blob file.
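            For example, following the command from the repository (the angle brackets are placeholders; the actual .blob file comes from the tools.luxonis.com .zip next to the .json):

            python3 main_api.py -m <path/to/model.blob> --config <path/to/best.json>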

              Matija

              Hi,

              Sorry, I am new to these commands.

              Would you mind explaining more? You are great.

              I want to write the command.

              If I'm not mixing things up, I have to specify my own path from this folder?

              I used this command:

              python3 main.py --config best.json

              It gives me a 400 Bad Request error!

                Matija

                Sorry for all my questions…

                It finally works, thanks to your help.

                How can I make it more accurate?

                I used 114 images to train it,

                and my OAK cannot detect well.

                114 images is barely the minimum. You should try to use more. The quality of annotations will also affect how well the model works.

                3 months later

                @Matija

                Hello, and thank you for your time.

                I went to test my custom model with the OAK, but in practice the device could not detect the object (false detections).

                I think you are all expert machine vision engineers, so I'd like to ask if you wouldn't mind helping me solve my problem.

                The question is:

                I want to detect a blue window; the drone should detect it and pass through the center of the window.

                I have searched and would like your opinion: a window has 4 corners, and these features are not enough for reliable detection… every object with 4 corners gets detected and labeled blue-window.

                In order to detect the window, what should I do with the OAK?

                For a custom dataset, how many pictures do I need, and do I need images with background?

                If you were in my place, what would you do?

                Thanks in advance.

                  Hi niloofarsabouri
                  Do you absolutely need the NN to perform this task? Can you modify the windows in any way?

                  We had a competition where the goal was to fly a drone through a series of hoops as quickly as possible. Each hoop had an ArUco marker glued to the pole on which the hoop was mounted. By using a marker, you would know the distance from the hoop center to the drone.

                  Would this be possible in your case?
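                  For reference, detecting such a marker is straightforward with OpenCV. A minimal sketch, assuming opencv-contrib-python with the pre-4.7 aruco interface (newer versions use cv2.aruco.ArucoDetector instead; the dictionary and camera index are placeholders):

                  import cv2

                  # Placeholder dictionary; must match the markers you print.
                  dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
                  params = cv2.aruco.DetectorParameters_create()

                  cap = cv2.VideoCapture(0)  # any camera source works, including OAK frames
                  while True:
                      ok, frame = cap.read()
                      if not ok:
                          break
                      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                      corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
                      if ids is not None:
                          cv2.aruco.drawDetectedMarkers(frame, corners, ids)
                          # corners[i] holds the 4 image-space corners of marker i; with camera
                          # calibration and the physical marker size, estimatePoseSingleMarkers
                          # would also give you the distance to the marker.
                      cv2.imshow("aruco", frame)
                      if cv2.waitKey(1) == ord("q"):
                          break
                  cap.release()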

                  Thanks,
                  Jaka

                    jakaskerl

                    Hi

                    Thank you for your response.

                    No, We are not allowed to use Aruco markers.

                    I am wondering: is it possible for the OAK to output an unprocessed RGB frame,

                    just like a normal camera?

                      Hi niloofarsabouri
                      Not really sure what you mean by non-processing, but you can get frames straight from the device, without any manipulations done to the image.
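                      A minimal sketch of that with the DepthAI Python API (the preview size is just a placeholder):

                      import cv2
                      import depthai as dai

                      # Build a pipeline with only a color camera and an output stream;
                      # no neural network or other processing is attached.
                      pipeline = dai.Pipeline()
                      cam = pipeline.create(dai.node.ColorCamera)
                      cam.setPreviewSize(640, 400)
                      cam.setInterleaved(False)

                      xout = pipeline.create(dai.node.XLinkOut)
                      xout.setStreamName("rgb")
                      cam.preview.link(xout.input)

                      with dai.Device(pipeline) as device:
                          q = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
                          while True:
                              frame = q.get().getCvFrame()  # plain BGR frame, straight from the device
                              cv2.imshow("rgb", frame)
                              if cv2.waitKey(1) == ord("q"):
                                  break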

                      Thanks,
                      Jaka

                        jakaskerl

                        Hi

                        Let me explain more.

                        I mean the OAK operating as a normal camera, just grabbing frames. Is there any way to do this?

                        And since I think you are experts in machine vision, I need your advice. In this image, a window with 4 corners, colored blue, must be detected by the OAK.

                        Our central computer is a Jetson TX2; it has a weak CPU.

                        I want the OAK to detect this window and another obstacle.

                        To detect the window with the OAK, what would you suggest?

                        Do I need deep learning and training, or are there other ways?

                        Thanks in advance.

                          niloofarsabouri

                          Yeah, there are several ways to approach it. If you just use the OAK as a camera and do the processing on the Jetson, you could follow and implement something like this square detector.

                          On the other hand, neural network should also be able to detect this, given the shape and color is very distinct. In that case you would need a relatively good and diverse dataset. I would say you'd need at least 100 images for the first iteration, then train the model. After training, you should inspect the test results you get from the training on "test" data (data not used for training). If performance is not good enough, you should collect and annotate more images and repeat.