Hello,

I am in need of a little guidance as I am unsure exactly how to proceed with my project. 

I have a Jetson Nano B01 set up and running Yolov7 with CUDA support. What I'd like to do is use an OAK-D-Lite to generate the color frames and use Yolov7 on my Jetson for my object detection. To do this I need to get DepthAI installed into the same environment as Yolov7.

I looked up the dependencies and requirements for DepthAI posted on the GitHub pages and checked them against what I currently have installed.

In my Yolov7 build I have the following installed:

Ubuntu 18.04, JetPack 4.6.4, PyTorch 1.8.0, TorchVision 0.9.0, CUDA 10.2.3, TensorRT 8.2.1.8, pycuda 2022.1, future 0.18.3, Cython 3.0.4, Pillow 8.4.0, Python 3.6.9, NumPy 1.19.4, Matplotlib 3.3.4, OpenCV 4.5.1 w/ CUDA, and 5.9G of swap memory

I checked install_dependencies.sh and found that I am only missing a few packages:
libilmbase-dev, libopenexr-dev, libjasper-dev, libdc1394-dev, qt5-qmake, qtbase5-dev-tools, qml-module-qtquick-layouts, qml-module-qtquick-window2, qtbase5-dev, python3-pyqt5.qtquick, qml-module-qtquick-controls2, qml-module-qt-labs-platform, qtdeclarative5-dev

Then I checked the GitHub pages for dependencies and found:
CMake >= 3.10, a C/C++14 compiler, and OpenCV 4

I looked at install_requirements.py but got a little lost on what exactly is needed and what would just overwrite what I already have.

I think I have what it needs.

Can I modify install_dependencies.sh to install what I am missing, then just clone the repository, and then edit the .bash file? Where I get stuck is install_requirements.py, because I am unsure how to modify it so as not to overwrite what I already have working.

Am I on the right path or is there a better way to do this?

Thanks!

Hi @SeanWorcester
You can probably just do:
sudo apt install -y libilmbase-dev libopenexr-dev libjasper-dev libdc1394-dev qt5-qmake qtbase5-dev-tools qml-module-qtquick-layouts qml-module-qtquick-window2 qtbase5-dev python3-pyqt5.qtquick qml-module-qtquick-controls2 qml-module-qt-labs-platform qtdeclarative5-dev

since these get installed globally anyway.

Then, inside your env, run python3 -m pip install depthai -U, which will install depthai and its dependencies. Keep in mind that if depthai requires a higher version of numpy, it will install it; dependencies have a required version for a reason. This probably won't break your currently functioning environment anyway.
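Afterwards, a quick sanity check like this (package names taken from your list above) should confirm depthai imports and your existing stack still works:

python3 -c "import depthai as dai; print(dai.__version__)"
python3 -c "import numpy, cv2, torch; print(numpy.__version__, cv2.__version__, torch.__version__)"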

Thanks,
Jaka

    Hello jakaskerl

    So I did as you suggested and installed all the packages, but when I ran python rgb_preview.py I got the following error message:

    [2024-01-10 19:29:54.071] [depthai] [warning] Insufficient permissions to communicate with X_LINK_UNBOOTED device with name "1.2.2". Make sure udev rules are set
    [2024-01-10 19:29:55.095] [depthai] [warning] Insufficient permissions to communicate with X_LINK_UNBOOTED device with name "1.2.2". Make sure udev rules are set

    Traceback (most recent call last):
      File "rgb_preview.py", line 24, in <module>
        with dai.Device(pipeline) as device:
    RuntimeError: No available device

    It seems this means that the udev rules are not set on my Jetson, and on the troubleshooting page I found this:

    echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules
    sudo udevadm control --reload-rules && sudo udevadm trigger

    The first command seemed to work fine, as I got this:

    SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"

    But when I tried the second command I got this:

    udevadm: missing or unknown command

    So I checked whether udev was installed on my Jetson and found some instructions here to get udev working, where I tried running

    udevadm monitor

    but it doesn't seem to help when I try to run the udevadm command.
    Do you know what I am missing?

    thanks!

      Hi SeanWorcester
      Make sure to also go through all the packages inside install_dependencies.sh. It seems like you are missing more than you said above; udev seems to be one of them.

      You should have the /etc/udev/rules.d/ folder present on your system. Then you will be able to write the rules to a .rules file.

      udevadm is used to reload and apply the changes without the need to restart your PC.
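      If udevadm still misbehaves once udev is installed, a sequence like this should apply the rule (systemd-udevd is the usual service name on Ubuntu 18.04; a reboot works too):

      sudo udevadm control --reload-rules
      sudo udevadm trigger
      sudo systemctl restart systemd-udevd   # fallback if udevadm fails
      lsusb | grep 03e7                      # the OAK should show up under the Movidius vendor ID

      Then unplug and replug the camera.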

      Thanks,
      Jaka

        Hello jakaskerl

        I did go through every package in install_dependencies.sh; the only two I couldn't install were libdc1394-dev and libjasper-dev. I checked line by line, and everything else is accounted for and installed.

        I checked a working standalone DepthAI install on another SD card and saw that those same two packages were not installed there either.

        Could there be something else I am missing?

        Thanks,

        Hello jakaskerl

        I compared the /etc/udev/rules.d folders of a standard DepthAI install with the modified install that I put in my Yolov7 environment, and honestly I don't see a difference.

        Do you know what rule I need to add to get the OAK-D-Lite working, and where?

        Thank you,

          Hi SeanWorcester
          If you are concerned about your current environment, you can make a backup: pip freeze > backup.txt. That way you can run pip install -r backup.txt and recreate your env. I'd suggest running install_dependencies.sh, as it doesn't change your Python environment, and then running install_requirements.py as well. If it breaks your current working environment, you can just restore it from the backup.
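          Roughly, the whole flow (script names as used above, run from the depthai repo):

          python3 -m pip freeze > backup.txt      # snapshot the working env
          ./install_dependencies.sh               # system (apt) packages only
          python3 install_requirements.py         # depthai + its python dependencies
          python3 -m pip install -r backup.txt    # restore if anything breaks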

          I will check more on Monday.

          Thanks,
          Jaka

            Hello jakaskerl

            I followed your suggestion and it works!! I can run DepthAI and Yolov7 in the same environment and they don't seem to clash. I am able to run rgb_preview.py and the Yolov7 detect.py with no issues. Now I hope to use the OAK-D-Lite RGB camera for Yolov7 and reference my detections against the StereoDepth camera. Still need to figure this part out.
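            For the depth part, I'm thinking of something along these lines (an untested sketch based on the depthai-python spatial detection examples; variable names are my own):

            import depthai as dai

            pipeline = dai.Pipeline()
            spatialDetectionNetwork = pipeline.create(dai.node.YoloSpatialDetectionNetwork)

            # OAK-D-Lite mono cameras feed the stereo node
            monoLeft = pipeline.create(dai.node.MonoCamera)
            monoRight = pipeline.create(dai.node.MonoCamera)
            stereo = pipeline.create(dai.node.StereoDepth)
            monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
            monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
            monoLeft.out.link(stereo.left)
            monoRight.out.link(stereo.right)

            # depth into the spatial network so each detection gets an X/Y/Z position
            stereo.depth.link(spatialDetectionNetwork.inputDepth)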

            Thank you for all the help you have given me.

            a month later

            Hi @jakaskerl

            Thank you for the reply!
            I read through the documentation for making a custom Yolov7-tiny model, and I have a couple of questions.

            First, after I create the Yolov7-tiny model and convert it to the blob format, do I call my model like this in Python?

            self._spatialDetectionNetwork.setBlobPath(BLOB_PATH)

            Where BLOB_PATH is the location of the new model on my Jetson Nano.

            Second, do I need to change anything on the OAK-D-Lite camera itself to run this new model?

            Thank You,

            Hi @SeanWorcester
            That's how you specify the path so the node knows which model to use, yes. Since it's a v7 model, which iirc does not have anchors/masks, you typically don't need to change anything other than the input size (if different) and maybe the confidence threshold.
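            A minimal sketch of the relevant config (assuming a pipeline and a camRgb ColorCamera node as in the examples; threshold and class count are placeholders):

            detectionNetwork = pipeline.create(dai.node.YoloSpatialDetectionNetwork)
            detectionNetwork.setBlobPath(BLOB_PATH)       # your converted .blob on the Jetson
            detectionNetwork.setConfidenceThreshold(0.5)  # tune as needed
            detectionNetwork.setNumClasses(1)             # your custom class count
            camRgb.setPreviewSize(640, 640)               # must match the blob's input size
            camRgb.preview.link(detectionNetwork.input)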

            Thanks,
            Jaka

              5 days later

              Hello jakaskerl

              I created a Yolov7-tiny model and made sure it worked with Yolov7 before converting it to a blob file. Then I modified the spatial_tiny_yolo.py example script and called the new Yolov7-tiny blob file. I also changed self._camRgb.setPreviewSize() to 640x480.

              When I ran the modified example I got an error.

              I changed self._camRgb.setPreviewSize() from 640x480 to 640x640 and ran it again.

              My error changed.

              Have you seen this issue before or know what else I need to change?

              Thank you for your help!

                Hello Matija and jakaskerl

                Did I create the model wrong? I looked at the notebook above and mostly followed the steps except in a few areas.

                I resized all my images to 640x480 before I prepared all my training and validation data.

                When I ran train.py I used the following arguments:

                python train.py --batch 16 --cfg cfg/training/custom_yolov7-tiny.yaml --epochs 300 --data /content/yolov7/data/custom.yaml --weights 'yolov7-tiny.pt' --device 0

                I see this defaults the image size to 640x640.

                Could this be my problem? Do I need to resize my images to 640x640 before preparing all my data?

                Thank you for all the help!

                Hi @SeanWorcester
                I don't see an explicit flag to set the input size like it's done in the notebook:

                !python train.py --epochs 2 --workers 8 --device 0 --batch-size 32 --data data/voc.yaml --img 640 640 --cfg cfg/training/yolov7_voc-tiny.yaml --weights 'yolov7-tiny.pt' --hyp data/hyp.scratch.tiny.yaml

                Did you miss changing the size somewhere, including on https://tools.luxonis.com?
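                For example, your earlier command with the size made explicit (assuming you want the default 640x640) would be:

                python train.py --batch 16 --cfg cfg/training/custom_yolov7-tiny.yaml --epochs 300 --data /content/yolov7/data/custom.yaml --weights 'yolov7-tiny.pt' --device 0 --img 640 640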

                Thanks,
                Jaka

                  Hello jakaskerl

                  Correct, I missed the --img flag, but I see that the default is 640x640. I have a couple of questions before I build my model again.

                  Should I resize my images to 640x640 before generating labels, or should I leave them at 3000x4000 before running train.py?

                  When I entered the best.pt into the blob converter tool I specified the shape as 640. Should I do 640x640 this time?

                  Thank you so much for your help!

                    Hi @SeanWorcester
                    I'm not sure resizing is needed, but if 3000x4000 images don't give you any errors, it's likely OK.

                    SeanWorcester When I entered the best.pt into the blob converter tool I specified the shape as 640. Should I do 640x640 this time?

                    Either 640 or 640 640 is fine. But isn't the model 640 480?

                    Thanks,
                    Jaka

                      Hi @jakaskerl

                      I didn't set the --img flag, so I believe the default is 640x640. I used images that were sized to 640x480 during training.

                      I am wondering what would happen if I set the train.py --img to 640x480 and created a new model, then set setPreviewSize() to 640x480.

                      Do you think this would work?

                      Thanks!