AliChoudhry

  • Nov 12, 2024
  • How do I change the version?

    (depthai-env) ali@luxonis:~/depthai/depthai-experiments/gen2-yolo/device-decoding $ python main.py
    config: model/yolo.json
    Traceback (most recent call last):
      File "/home/ali/depthai/depthai-experiments/gen2-yolo/device-decoding/main.py", line 11, in <module>
        nn = oak.create_nn(args['config'], color, nn_type='yolo', spatial=False)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/oak_camera.py", line 323, in create_nn
        comp = NNComponent(self.device,
        ^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/components/nn_component.py", line 108, in __init__
        self._parse_model(model)
      File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/components/nn_component.py", line 240, in _parse_model
        self._parse_config(model)
      File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/components/nn_component.py", line 286, in _parse_config
        with model_config.open() as f:
        ^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3.11/pathlib.py", line 1045, in open
        return io.open(self, mode, buffering, encoding, errors, newline)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    FileNotFoundError: [Errno 2] No such file or directory: 'model/yolo.json'
    Sentry is attempting to send 2 pending error messages
    Waiting up to 2 seconds
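
    (For what it's worth, the FileNotFoundError above just means that the default 'model/yolo.json' does not exist relative to the directory main.py is run from. So the fix is either to run it from device-decoding with a json actually saved at model/yolo.json, or to pass an explicit path, e.g. python3 main.py --config /full/path/to/your_model.json, where that path is only a placeholder for wherever the json really is.)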

    • So that would look like this?

      from depthai_sdk import OakCamera, ArgsParser
      import argparse
      # parse arguments

      parser = argparse.ArgumentParser()
      parser.add_argument("-conf", "--config", help="Trained YOLO json config path", default='model/yolo.json', type=str)
      args = ArgsParser.parseArgs(parser)

      with OakCamera(args=args) as oak:
          color = oak.create_camera('color', resolution=(640, 480))
          nn = oak.create_nn(args['config'], color, nn_type='yolo', spatial=True)
          oak.visualize(nn, fps=True, scale=2/3)
          oak.visualize(nn.out.passthrough, fps=True)
          oak.start(blocking=True)

      • And this is the code that came with main.py in gen2-yolo device decoding:

        from depthai_sdk import OakCamera, ArgsParser
        import argparse

        # parse arguments
        parser = argparse.ArgumentParser()
        parser.add_argument("-conf", "--config", help="Trained YOLO json config path", default='model/yolo.json', type=str)
        args = ArgsParser.parseArgs(parser)

        with OakCamera(args=args) as oak:
            color = oak.create_camera('color')
            nn = oak.create_nn(args['config'], color, nn_type='yolo', spatial=True)
            oak.visualize(nn, fps=True, scale=2/3)
            oak.visualize(nn.out.passthrough, fps=True)
            oak.start(blocking=True)
        • (depthai-env) ali@luxonis:~/depthai/depthai-experiments/gen2-yolo/device-decoding $ python3 main.py --config model/Ali3.json

          config: model/Ali3.json
          [2024-10-06 10:42:03] INFO [root.__exit__:328] Closing OAK camera
          Traceback (most recent call last):
            File "/home/ali/depthai/depthai-experiments/gen2-yolo/device-decoding/main.py", line 10, in <module>
              color = oak.create_camera('color')
              ^^^^^^^^^^^^^^^^^^^^^^^^^^
            File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/oak_camera.py", line 133, in create_camera
              comp = CameraComponent(self._oak.device,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
            File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/components/camera_component.py", line 142, in __init__
              res = getClosesResolution(sensor, sensor_type, width=1300)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
            File "/home/ali/depthai/depthai-env/lib/python3.11/site-packages/depthai_sdk/components/camera_helper.py", line 197, in getClosesResolution
              for (res, size) in resolutions:
              ^^^^^^^^^^^
          TypeError: cannot unpack non-iterable NoneType object
          Sentry is attempting to send 2 pending error messages
          Waiting up to 2 seconds
          Press Ctrl-C to quit

        • So I got an error for a stereo camera component. I thought I could set spatial=False in main.py and just rerun it, but then it just kills itself?

        • Alright, so update. I bought a Y-Splitter and that seems to have done the trick. I managed to run gen2-age-gender and gen2-emotion-recognition (separately) via depthai-experiments. Yay!

          My question, and any help is appreciated, is what is the easiest way to run a simple object detection model?
          I've trained the model on Ultralytics and have also managed to upload a copy to Roboflow.
          The model works on the Ultralytics app (on my phone) and also works for still images on the Roboflow website.

          • I'm using the original cable that came with it. It was my hunch that it may be a power issue as well. I bought a powered USB hub but might need to fiddle with how it's arranged a bit more. Hopefully that does the trick. Thanks everyone.

          • Another error: (depthai-env) ali@luxonis:~/depthai/depthai-experiments/gen2-emotion-recognition $ python3 main.py

            [2024-09-17 15:39:01] INFO [root.__exit__:328] Closing OAK camera

            • I retested rgb_preview.py and it works. However, the YOLO example gave this: (depthai-python) ali@luxonis:~/depthai-python/examples/Yolo $ python tiny_yolo.py

              Using Tiny YoloV4 model. If you wish to use Tiny YOLOv3, call 'tiny_yolo.py yolo3'
              [19443010711A702700] [1.1] [1726551235.306] [host] [warning] Device crashed, but no crash dump could be extracted.
              Traceback (most recent call last):
                File "/home/ali/depthai-python/examples/Yolo/tiny_yolo.py", line 121, in <module>
                  inRgb = qRgb.get()
                          ^^^^^^^^^^
              RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'rgb' (X_LINK_ERROR)'

              If I can get the DepthAI SDK or depthai-experiments to work I'd be golden! (Sorry, typing on the Pi again so the typing is bad.)

            • Thank you for the prompt reply. I'm on my 'main' computer now so I can type a bit better.

              1. I followed this guide: https://docs.luxonis.com/hardware/platform/deploy/to-rpi/

              I used the Google Drive link to flash "OAK_CM4_POE_V10_64bit" onto a card. From my understanding this worked, because when I then inserted the card into the Pi and started it up I could view the desktop, browse the internet, and so on. I did use the Raspberry Pi Imager instead of balenaEtcher, as the Etcher was giving me problems.

              2. I then tried to follow the instructions for this: https://docs.luxonis.com/hardware/platform/deploy/to-rpi/#SSH%20into%20the%20RPi

                I managed to get to the command line below (with the ASCII art!), but none of the passwords I tried worked. I've tried "raspberry" as indicated, and I've also tried resetting the password.

              The authenticity of host 'raspberrypi.local (192.168.1.222)' can't be established.
              ECDSA key fingerprint is SHA256:stb5mbRQeX6veOq8Wzg81rz9IHonxJR2++Q8bDYryTo.
              Are you sure you want to continue connecting (yes/no/[fingerprint])?
              3. For transparency, I'm not very tech savvy, so please excuse my ignorance, but if I don't intend to use the Pi/OAK remotely, do I need to bother with SSH? The Pi is connected to WiFi and I am running a monitor with it via HDMI. I did try to run one of the tests to see if the camera is attached (I think it was in the depthai folder?) and the camera test passed. The camera seems to be connected; however, I can't seem to run any NN/inference. For example, I couldn't run the sample "age-gender" program from the examples, nor "tiny-yolo".

              My immediate goal is simply to run one of the example programs, such as "age gender" or "tiny yolo". I don't really need to access the Pi remotely from another computer (but would like to, maybe in a year or so once this project has evolved a bit further).
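
              (For reference, a rough sequence I'd expect to work directly on the Pi's desktop, without SSH, assuming the depthai-python repo from the tiny_yolo.py post is checked out under the home directory:)

              cd ~/depthai-python/examples
              python3 install_requirements.py      # installs depthai plus the example dependencies
              python3 ColorCamera/rgb_preview.py   # quick camera check
              python3 Yolo/tiny_yolo.py            # on-device YOLO example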

              Thank you.

              • I have the Raspberry Pi on which I'm trying to run the OAK-1.

                I have managed to flash depthai onto a drive and the OS seems to be working (I'm typing this from there!)

                But I can't seem to get the camera to run. From my understanding I need to figure out the SSH, but it's not accepting the default password (raspberry) or my own password…

                I've also got a USB hub with its own power (which I thought might have been an issue), but the OAK won't be recognized when plugged into that.

                • I'm seeking clarity on workflow.

                  For context, I'm working with an object detection model.

                  To begin, I annotated my dataset in Roboflow.

                  This I was then able to train successfully in Ultralytics HUB on YOLOv8.

                  From here, I uploaded my .pt file to https://tools.luxonis.com/, which let me download a zip file containing bin, xml, blob, and onnx files.

                  What is the easiest way to deploy these onto the OAK?
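
                  (My current guess at that step, as a minimal sketch rather than anything confirmed: it's essentially the same pattern as main.py above, just pointed at the files from the tools.luxonis.com zip. The name 'result.json' is only a placeholder for whatever json the zip actually contains, and it's assumed to sit next to its .blob in a model/ folder.)

                  from depthai_sdk import OakCamera

                  with OakCamera() as oak:
                      color = oak.create_camera('color')
                      # 'model/result.json' is a placeholder; the json references the .blob beside it
                      nn = oak.create_nn('model/result.json', color, nn_type='yolo', spatial=False)
                      oak.visualize(nn, fps=True)
                      oak.start(blocking=True)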

                  • Alternatively, I'm not sure if this is something Luxonis offers, but is there any way to troubleshoot this one-on-one with someone? Keen to know if there is a payment structure in place for that, or how much it'd cost.

                    • I'm a bit unsure if I'm meant to use https://tools.luxonis.com/ or http://blobconverter.luxonis.com/.

                      Advice on proper workflow would be great.

                      When I put the best.pt I get from training into tools.luxonis.com, I get the following:

                      Alternatively, if I upload my onnx to blobconverter, I do get a blob. But I'm unsure what parameters I'm meant to put in, and how to get a json file?

                    • I'm very new to computer vision, so I'm just figuring out the workflow; any help would be appreciated. While I'm still learning, I'm only making a very simple model that can detect lemons.

                      I uploaded my images to Roboflow, generated the train/test/val dataset from these, and downloaded the zip in the YOLOv8 format.

                      From here, I followed the steps here: https://docs.ultralytics.com/usage/python/

                      The code I ran, in order, was as follows.

                      from ultralytics import YOLO
                      model = YOLO('yolov8n.yaml')

                      model.train(data='path/to/data.yaml', epochs=100)
                      model.val() # It'll automatically evaluate the data you trained.

                      From here, I ran prediction on the best.pt and was happy enough with the results.

                      from ultralytics import YOLO

                      # Load a model
                      model = YOLO('path/to/best.pt')  # pretrained YOLOv8n model

                      # Run batched inference on a list of images
                      results = model(['path/to/images.jpeg',], save=True)  # returns a generator of Results objects

                      # Process results generator
                      for result in results:
                          boxes = result.boxes  # Boxes object for bbox outputs

                      Next, I converted the file to an ONNX format with the following code:

                      from ultralytics import YOLO
                      model = YOLO('path/to/best.pt')
                      model.export(format='onnx')

                      My understanding is that I can simply upload an onnx to http://blobconverter.luxonis.com/, but this wasn't working, so I'm assuming I did something wrong? I then tried this script, which wasn't any better:

                      import blobconverter

                      blob_path = blobconverter.from_onnx(
                          model="path/to/best.onnx",
                          data_type="FP16",
                          shaves=5,
                      )

                      Any help on what to do next would be appreciated!

                      • Hello! New to the world of computing/programming so please excuse me if any information is missing.

                        So I started with a set of images generated on Roboflow as a train/val/test dataset of annotated images.
                        These I trained on Ultralytics and then evaluated.
                        From here, I was able to output to a variety of file formats including .pt, .onnx, and .xml/.bin

                        I found the .onnx output was the easiest to work with, using this code:

                        `import blobconverter

                        blob_path = blobconverter.from_onnx(
                            model="/path/to/model.onnx",
                            data_type="FP16",
                            shaves=5,
                        )`

                        The model I'm trying to make is a simple object detection model, but I'm a little stuck now. What do I do with the .blob file to run it on the Luxonis OAK-1?

                        Any help would be appreciated.
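
                        (My rough understanding of the next step, written down as a minimal sketch rather than a known-good script. It uses the plain depthai API with a YoloDetectionNetwork node; the blob path, the 416x416 input size, and the single "lemon" class are all assumptions that have to match whatever the model was actually exported with, and older anchor-based YOLOs would additionally need setAnchors()/setAnchorMasks().)

                        import depthai as dai

                        pipeline = dai.Pipeline()

                        # Colour camera feeding the network; preview size must match the NN input size
                        cam = pipeline.create(dai.node.ColorCamera)
                        cam.setPreviewSize(416, 416)
                        cam.setInterleaved(False)

                        # On-device YOLO decoding of the converted .blob
                        nn = pipeline.create(dai.node.YoloDetectionNetwork)
                        nn.setBlobPath("path/to/model.blob")   # placeholder path to the converted blob
                        nn.setNumClasses(1)                    # e.g. just "lemon"
                        nn.setCoordinateSize(4)
                        nn.setConfidenceThreshold(0.5)
                        nn.setIouThreshold(0.5)
                        cam.preview.link(nn.input)

                        # Send detections back to the host
                        xout = pipeline.create(dai.node.XLinkOut)
                        xout.setStreamName("det")
                        nn.out.link(xout.input)

                        with dai.Device(pipeline) as device:
                            q = device.getOutputQueue("det")
                            while True:
                                for det in q.get().detections:
                                    print(det.label, det.confidence, det.xmin, det.ymin, det.xmax, det.ymax)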