AliChoudhry
Which guide did you follow?

AliChoudhry From my understanding I need to figure out the SSH, but it's not accepting the default password (raspberry) or my own password…

AliChoudhry OS seems to be working (I'm typing this from there!)

Not sure what those two mean. If you SSHed/VNCed/plugged a monitor into the RPi, then the DepthAI device should be recognized by your RPi, provided the dependencies are installed.
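
If it helps to check that, listing the available devices from Python is a quick way to see whether the host detects the OAK at all. A minimal sketch, assuming the depthai package is already installed in the environment:

import depthai as dai

# List every DepthAI/OAK device the host can currently see (USB or PoE).
# An empty list usually points to a connection, power or udev-rules issue
# rather than a problem with the example scripts themselves.
devices = dai.Device.getAllAvailableDevices()
if devices:
    for d in devices:
        print(f"Found device {d.getMxId()} (state: {d.state})")
else:
    print("No DepthAI device found")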

Thanks,
Jaka

Thank you for the prompt reply. I'm on my 'main' computer now so can type a bit better.

  1. I followed this guide: https://docs.luxonis.com/hardware/platform/deploy/to-rpi/

I used the Google Drive link to flash "OAK_CM4_POE_V10_64bit" onto a card. From my understanding this worked, because when I then inserted the card into the Pi and started it up I could view the desktop, browse the internet and so on. I did use the RPi Imager instead of balenaEtcher, as Etcher was giving me problems.

  2. I then tried to follow the instructions for this: https://docs.luxonis.com/hardware/platform/deploy/to-rpi/#SSH%20into%20the%20RPi

    I managed to get to the command line below (with the ASCII art!), but none of the passwords I tried worked. I've tried "raspberry" as indicated, and I've also tried resetting the password.

The authenticity of host 'raspberrypi.local (192.168.1.222)' can't be established.
ECDSA key fingerprint is SHA256:stb5mbRQeX6veOq8Wzg81rz9IHonxJR2++Q8bDYryTo.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
  3. For transparency, I'm not very tech-savvy, so please excuse my ignorance, but if I don't intend to use the Pi/OAK remotely, do I need to bother with SSH? The Pi is connected to Wi-Fi and I am running a monitor with it via HDMI. I did try to run one of the tests to see if the camera is attached (I think it was in the depthai folder?) and the camera test passed. The camera seems to be connected; however, I can't seem to run any NN/inference. For example, I couldn't run the sample "age-gender" or "tiny-yolo" programs from the examples.

My immediate goal is simply to try to run one of the example programs, such as "age-gender" or "tiny-yolo". I don't really need to access the Pi remotely from another computer (but I would like to, maybe in a year or so when this project has evolved a bit further).

Thank you.

    AliChoudhry I could view the desktop, browse the internet and so on. I did use the RPi Imager instead of balenaEtcher, as Etcher was giving me problems.

    Then you are already using the RPi. No further SSH connection is needed.

    The device should work now.

    AliChoudhry I did try to run one of the tests to see if the camera is attached (I think it was in the deptha folder?) and the camera test passed.

    Good.

    AliChoudhry The camera seems to be connected; however I can't seem to run any NN/inference. For example I couldn't run the sample "age-gender" program in the examples, nor "tiny-yolo" in the examples.

    Is there any particular error you are experiencing? If the ColorCamera/rgb_preview.py example runs, all the others should as well.
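
    For reference, rgb_preview.py boils down to roughly the sketch below: build a pipeline with a ColorCamera node, stream its preview to the host over XLink, and show the frames. If this runs, the device, the USB link and the display are all fine, and the YOLO examples only add a neural-network node on top (this is a rough sketch of the stock example, not a replacement for it):

    import cv2
    import depthai as dai

    # Minimal colour-preview pipeline: ColorCamera -> XLinkOut -> host
    pipeline = dai.Pipeline()

    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)
    cam.setInterleaved(False)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("rgb")
    cam.preview.link(xout.input)

    with dai.Device(pipeline) as device:
        qRgb = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
        while True:
            # get() blocks until a frame arrives over the 'rgb' XLink stream
            cv2.imshow("rgb", qRgb.get().getCvFrame())
            if cv2.waitKey(1) == ord('q'):
                break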

    Thanks,
    Jaka

    I retested rgb_preview.py and it works. However, tiny_yolo.py gave this:

    (depthai-python) ali@luxonis:~/depthai-python/examples/Yolo $ python tiny_yolo.py

    Using Tiny YoloV4 model. If you wish to use Tiny YOLOv3, call 'tiny_yolo.py yolo3'

    [19443010711A702700] [1.1] [1726551235.306] [host] [warning] Device crashed, but no crash dump could be extracted.

    Traceback (most recent call last):

    File "/home/ali/depthai-python/examples/Yolo/tiny_yolo.py", line 121, in <module>

    inRgb = qRgb.get()
    
            ^^^^^^^^^^

    RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'rgb' (X_LINK_ERROR)'

    If I can get depthai-sdk or depthai-experiments to work I'd be golden! (Sorry, typing on the Pi again so the typing is bad.)

    Another error:

    (depthai-env) ali@luxonis:~/depthai/depthai-experiments/gen2-emotion-recognition $ python3 main.py

    [2024-09-17 15:39:01] INFO [root.__exit__:328] Closing OAK camera

      AliChoudhry

      Hi Ali! Please make sure that you are using the original USB Type-C cable from Luxonis… You could also unplug the camera from the Pi, reconnect it, and run the program again; that might help as well…

        I'm using the original cable that came with it. It was my hunch that it may be a power issue as well. I bought a powered USB hub but might need to fiddle with how it's arranged a bit more. Hopefully that does the trick. Thanks everyone.

        11 days later

        Alright, so, an update: I bought a Y-splitter and that seems to have done the trick. I managed to run gen2-age-gender and gen2-emotion-recognition (separately) via depthai-experiments. Yay!

        My question, and any help is appreciated, is: what is the easiest way to run a simple object detection model?
        I've trained the model on Ultralytics and have also managed to upload a copy to Roboflow.
        The model works on the Ultralytics app (on my phone) and also works for still images on the Roboflow website.

          4 days later

          So I got an error about a stereo camera component. I thought I could set spatial=False in main.py and just rerun it, but then it just kills itself?

          (depthai-env) ali@luxonis:~/depthai/depthai-experiments/gen2-yolo/device-decoding $ python3 main.py --config model/Ali3.json

          config: model/Ali3.json

          [2024-10-06 10:42:03] INFO [root.__exit__:328] Closing OAK camera

          Traceback (most recent call last):

          File "/home/ali/depthai/depthai-experiments/gen2-yolo/device-decoding/main.py", line 10, in <module>

          color = oak.create_camera('color')

          ^^^^^^^^^^^^^^^^^^^^^^^^^^

          File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/oak_camera.py", line 133, in create_camera

          comp = CameraComponent(self._oak.device,

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

          File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/components/camera_component.py", line 142, in __init__

          res = getClosesResolution(sensor, sensor_type, width=1300)

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

          File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/components/camera_helper.py", line 197, in getClosesResolution

          for (res, size) in resolutions:

          ^^^^^^^^^^^

          TypeError: cannot unpack non-iterable NoneType object

          Sentry is attempting to send 2 pending error messages

          Waiting up to 2 seconds

          Press Ctrl-C to quit

          And this is the code that came with main.py in gen2-yolo/device-decoding:

          from depthai_sdk import OakCamera, ArgsParser
          import argparse

          # parse arguments
          parser = argparse.ArgumentParser()
          parser.add_argument("-conf", "--config", help="Trained YOLO json config path", default='model/yolo.json', type=str)
          args = ArgsParser.parseArgs(parser)

          with OakCamera(args=args) as oak:
              color = oak.create_camera('color')
              nn = oak.create_nn(args['config'], color, nn_type='yolo', spatial=True)
              oak.visualize(nn, fps=True, scale=2/3)
              oak.visualize(nn.out.passthrough, fps=True)
              oak.start(blocking=True)

            AliChoudhry
            Set a resolution manually to fix:

            for (res, size) in resolutions:

            Thanks,
            Jaka

            So that would look like this?

            from depthai_sdk import OakCamera, ArgsParser
            import argparse

            # parse arguments
            parser = argparse.ArgumentParser()
            parser.add_argument("-conf", "--config", help="Trained YOLO json config path", default='model/yolo.json', type=str)
            args = ArgsParser.parseArgs(parser)

            with OakCamera(args=args) as oak:
                color = oak.create_camera('color', resolution=(640, 480))
                nn = oak.create_nn(args['config'], color, nn_type='yolo', spatial=True)
                oak.visualize(nn, fps=True, scale=2/3)
                oak.visualize(nn.out.passthrough, fps=True)
                oak.start(blocking=True)

              AliChoudhry
              Yep. Make sure the depthai-sdk version is 1.15 and spatial=False, since you are using an OAK-1.

              Thanks,
              Jaka
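
              In case it's useful later, a quick way to confirm which depthai-sdk version a virtualenv is actually using is sketched below (importlib.metadata is standard library in Python 3.8+; pinning the version itself would typically be done with pip, e.g. pip install depthai-sdk==1.15):

              from importlib.metadata import version

              # Print the installed versions of the SDK and the core library.
              print("depthai-sdk:", version("depthai-sdk"))
              print("depthai:", version("depthai"))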

              a month later

              How do I change the version?

              (depthai-env) ali@luxonis:~/depthai/depthai-experiments/gen2-yolo/device-decoding $ python main.py

              config: model/yolo.json

              Traceback (most recent call last):

              File "/home/ali/depthai/depthai-experiments/gen2-yolo/device-decoding/main.py", line 11, in <module>

              nn = oak.create_nn(args['config'], color, nn_type='yolo', spatial=False)

              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

              File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/oak_camera.py", line 323, in create_nn

              comp = NNComponent(self.device,

              ^^^^^^^^^^^^^^^^^^^^^^^^

              File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/components/nn_component.py", line 108, in __init__

              self._parse_model(model)

              File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/components/nn_component.py", line 240, in _parse_model

              self._parse_config(model)

              File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/components/nn_component.py", line 286, in _parse_config

              with model_config.open() as f:

              ^^^^^^^^^^^^^^^^^^^

              File "/usr/lib/python3.11/pathlib.py", line 1045, in open

              return io.open(self, mode, buffering, encoding, errors, newline)

              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

              FileNotFoundError: [Errno 2] No such file or directory: 'model/yolo.json'

              Sentry is attempting to send 2 pending error messages

              Waiting up to 2 seconds
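
              For what it's worth, this last FileNotFoundError just means the script fell back to its default 'model/yolo.json' path because no --config argument was passed this time, whereas the earlier run used python3 main.py --config model/Ali3.json. One option, sketched below under that assumption, is to point the argparse default at the custom JSON so a plain python main.py picks it up:

              import argparse
              from depthai_sdk import ArgsParser

              # Same parser as in main.py, but with the default pointed at the custom
              # model config used earlier in this thread (model/Ali3.json).
              parser = argparse.ArgumentParser()
              parser.add_argument("-conf", "--config", help="Trained YOLO json config path",
                                  default='model/Ali3.json', type=str)
              args = ArgsParser.parseArgs(parser)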