Babacar

  • Sep 25, 2023
  • Joined Jun 24, 2023
  • 0 best answers
  • Hi jakaskerl

    Do you have any idea how to use it with this code: https://github.com/luxonis/depthai-experiments/blob/master/gen2-deepsort-tracking/main.py?

    I tried replacing the yolov6 model with my blob file, but I received this error:

    File "/home/pi/Tracking/gen3-deepsort-tracking/main.py", line 43, in <module>
    oak.start(blocking=True)
    File "/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/oak_camera.py", line 347, in start
    self.build()
    File "/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/oak_camera.py", line 465, in build
    xouts = out.setup(self.pipeline, self.oak.device, names)
    File "/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/classes/output_config.py", line 54, in setup
    xoutbase: XoutBase = self.output(pipeline, device)
    File "/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/components/nn_component.py", line 629, in main
    out = XoutTwoStage(det_nn=self.
    comp.input,
    File "/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/oak_outputs/xout/xout_nn.py", line 289, in init
    self.whitelist_labels: Optional[List[int]] = second_nn.
    multi_stage_nn.whitelist_labels
    AttributeError: 'NoneType' object has no attribute 'whitelist_labels'
    Sentry is attempting to send 2 pending error messages
    Waiting up to 2 seconds
    Press Ctrl-C to quit~
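
For reference, this is roughly the change I made to the demo (a sketch from memory, not my exact diff; the original model name is approximate, and cb is the demo's callback):

from depthai_sdk import OakCamera

with OakCamera() as oak:
    color = oak.create_camera('color')
    # Original demo line (model name approximate):
    # yolo = oak.create_nn('yolov6nr3_coco_640x352', color)
    # My replacement, pointing at the converted blob:
    yolo = oak.create_nn('my_model.blob', color)
    embedder = oak.create_nn('mobilenetv2_imagenet_embedder_224x224', input=yolo)
    oak.visualize(embedder, fps=True, callback=cb)
    oak.start(blocking=True)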

    I appreciate your help.

    • Hi erik

      I've trained my model and deployed it on Roboflow. Following the tutorial, I modified the main.py code:

      import cv2
      from depthai_sdk import OakCamera
      from depthai_sdk.classes.packets import TwoStagePacket
      from depthai_sdk.visualize.configs import TextPosition
      from deep_sort_realtime.deepsort_tracker import DeepSort
      
      tracker = DeepSort(max_age=1000, nn_budget=None, embedder=None, nms_max_overlap=1.0, max_cosine_distance=0.2)
      
      def cb(packet: TwoStagePacket):
          detections = packet.img_detections.detections
          vis = packet.visualizer
          # Update the tracker
          object_tracks = tracker.iter(detections, packet.nnData, (640, 640))
      
          for track in object_tracks:
              if not track.is_confirmed() or \
                  track.time_since_update > 1 or \
                  track.detection_id >= len(detections) or \
                  track.detection_id < 0:
                  continue
      
              det = packet.detections[track.detection_id]
              vis.add_text(f'ID: {track.track_id}',
                              bbox=(*det.top_left, *det.bottom_right),
                              position=TextPosition.MID)
          frame = vis.draw(packet.frame)
          cv2.imshow('DeepSort tracker', frame)
      
      
      with OakCamera() as oak:
          color = oak.create_camera('color')
          model_config = {
                  'source': 'roboflow', 
                  'model':'usv-7kkhf/4',
                  'key':'zzzzzzzzzzzzzzz' # FAKE Private API key
          }
          yolo = oak.create_nn(model_config,color)
          embedder = oak.create_nn('mobilenetv2_imagenet_embedder_224x224', input=yolo)
      
          oak.visualize(embedder, fps=True, callback=cb)
          # oak.show_graph()
          oak.start(blocking=True)

      However, I'm encountering an error stating that it can't find my trained model:

      Exception: {'message': 'No trained model was found.', 'type': 'GraphMethodException', 'hint': 'You must train a model on this version with Roboflow Train before you can use inference.', 'e': ['Model not found, looking for filename 4JiY9CSQUUctWZgCzw210yo9qcw2/heRJlafm8KwTDQrTn8dI/4/roboflow.zip']}

      Sentry is attempting to send 2 pending error messages

So, I saved my weights file as best.pt and then used the model converter. I'd like to know how to implement the converted model into the code.
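
In case it helps, this is how I am guessing the converted model should be loaded (the 'model/best.json' path is a placeholder for the files the converter produced):

from depthai_sdk import OakCamera

with OakCamera() as oak:
    color = oak.create_camera('color')
    # Guess: point create_nn at the JSON config the converter exported
    # next to the .blob file; 'model/best.json' is a placeholder path.
    yolo = oak.create_nn('model/best.json', color)
    embedder = oak.create_nn('mobilenetv2_imagenet_embedder_224x224', input=yolo)
    oak.visualize(embedder, fps=True, callback=cb)  # cb as defined above
    oak.start(blocking=True)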

      Thanks for your assistance.

      • Hi erik

Thank you. I am currently training my model; afterwards, I will try the Deep SORT demo.

      • Hi Jaka,

        I think there might have been a misunderstanding in our last exchange. I intend to train my YOLOv8 model using this code: https://github.com/luxonis/depthai-ml-training/blob/master/colab-notebooks/YoloV8_training.ipynb, and then import it in JSON format, as indicated in the tutorial.

I plan on using this specific DeepSORT repository from Luxonis, and instead of launching it with yolov6.json, I would like to launch it with a YOLOv8 model that I have trained on my own dataset.

        Furthermore, I'm not quite sure about the "deepsort/deepsort/detection" directory you mentioned. I don't see the yolov4.cfg and yolov4.weights files.

        Could you provide more clarification on this?

        Best regards,

        Babacar

        • jakaskerl

Following your advice, I've made some further modifications to my code and have also removed the video-writing part. The changes have resulted in considerable improvements in performance. However, the time taken per iteration now varies widely. Here's a subset of the results:

          Elapsed time for iteration: 2.3365020751953125e-05 seconds

          Elapsed time for iteration: 2.3603439331054688e-05 seconds

          ...

          ...

          Elapsed time for iteration: 2.6702880859375e-05 seconds

          Elapsed time for iteration: 2.3603439331054688e-05 seconds

          Elapsed time for iteration: 3.361701965332031e-05 seconds

          Elapsed time for iteration: 2.4080276489257812e-05 seconds

          ...

          ...

          Elapsed time for iteration: 0.00014281272888183594 seconds

          Elapsed time for iteration: 0.06066274642944336 seconds

          Elapsed time for iteration: 0.05930662155151367 seconds

          Elapsed time for iteration: 0.05977463722229004 seconds

          Elapsed time for iteration: 0.06491947174072266 seconds
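
To make these numbers comparable, I am considering timing only the iterations that actually process a synced frame, something like this (a sketch, using the same queues / sync / tracker names as in my code):

import time

while True:
    for name, q in queues.items():
        if q.has():
            sync.add_msg(q.get(), name)

    msgs = sync.get_msgs()
    if msgs is None:
        continue  # empty spin of the loop; skipping it keeps the timing honest

    start_time = time.time()
    # ... detection, tracking and drawing exactly as before ...
    print(f"Elapsed time per processed frame: {time.time() - start_time} seconds")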

        • Hi jakaskerl

          Here's the code that I implemented:

import time

# ...

while True:
    # Begin timing
    start_time = time.time()

    for name, q in queues.items():
        # Add all msgs (color frames, object detections and recognitions) to the Sync class.
        if q.has():
            sync.add_msg(q.get(), name)

    msgs = sync.get_msgs()
    if msgs is not None:
        frame = msgs["color"].getCvFrame()
        detections = msgs["detection"].detections
        embeddings = msgs["embedding"]

        # Write raw frame to the raw_output video
        raw_out.write(frame)

        # Update the tracker
        object_tracks = tracker_iter(detections, embeddings, tracker, frame)

        # For each tracked object
        for track in object_tracks:
            # ... all existing code ...
            pass

        # Write the frame with annotations to the output video
        out.write(frame)

    # End timing and print elapsed time
    end_time = time.time()
    elapsed_time = end_time - start_time
    print(f"Elapsed time for iteration: {elapsed_time} seconds")

raw_out.release()
out.release()

          These are the results I got:

          Elapsed time for iteration: 0.13381719589233398 seconds

          Elapsed time for iteration: 0.1333160400390625 seconds

          Elapsed time for iteration: 0.13191676139831543 seconds

          ...

          ...

          Elapsed time for iteration: 0.13199663162231445 seconds

Thanks, Jaka, for your input so far. I would appreciate any further suggestions you might have to fix this issue.

        • jakaskerl

          Hi Jaka,

          Apologies for the delay in response. I want to confirm whether this is the correct modification to the code that you requested:

import depthai as dai
import numpy as np
import time

# Create pipeline
pipeline = dai.Pipeline()
pipeline.setXLinkChunkSize(0)

# Define source and output
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setFps(60)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("out")
camRgb.isp.link(xout.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    print(device.getUsbSpeed())
    q = device.getOutputQueue(name="out")
    diffs = np.array([])

    while True:
        start_time = time.time()  # Record start time of the loop

        imgFrame = q.get()
        latencyMs = (dai.Clock.now() - imgFrame.getTimestamp()).total_seconds() * 1000
        diffs = np.append(diffs, latencyMs)
        print('Latency: {:.2f} ms, Average latency: {:.2f} ms, Std: {:.2f}'.format(latencyMs, np.average(diffs), np.std(diffs)))

        end_time = time.time()  # Record end time of the loop
        loop_time = (end_time - start_time) * 1000  # Calculate loop time in ms
        print('Loop time: {:.2f} ms'.format(loop_time))

          Please let me know if this is correct, or if there are any further changes that I should make.

          Thanks,
          Babacar

          • Hi jakaskerl

            Thank you for your previous insights. I want to clarify that the latency measurements I shared with you earlier were taken without showing the preview (I had commented out `cv2.imshow('frame', imgFrame.getCvFrame())`).
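
For clarity, the loop with the preview enabled looks roughly like this (same pipeline and queue as before; only the display calls are new):

import cv2
import depthai as dai

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="out")
    while True:
        imgFrame = q.get()
        latencyMs = (dai.Clock.now() - imgFrame.getTimestamp()).total_seconds() * 1000
        print('Latency: {:.2f} ms'.format(latencyMs))
        # The preview display that I had previously commented out:
        cv2.imshow('frame', imgFrame.getCvFrame())
        if cv2.waitKey(1) == ord('q'):
            break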

            After including the preview display in the computation, here are the new values I obtained:

            Latency: 481.45 ms, Average latency: 527.23 ms, Std: 36.31

            Latency: 492.62 ms, Average latency: 527.20 ms, Std: 36.31

            Latency: 488.96 ms, Average latency: 527.18 ms, Std: 36.31

            Latency: 486.37 ms, Average latency: 527.15 ms, Std: 36.31

            Latency: 496.27 ms, Average latency: 527.13 ms, Std: 36.31

            Latency: 492.14 ms, Average latency: 527.10 ms, Std: 36.31

            Latency: 503.84 ms, Average latency: 527.09 ms, Std: 36.30

            Latency: 515.83 ms, Average latency: 527.08 ms, Std: 36.29

            Latency: 507.38 ms, Average latency: 527.07 ms, Std: 36.28

            Latency: 507.18 ms, Average latency: 527.05 ms, Std: 36.27

            Latency: 498.62 ms, Average latency: 527.03 ms, Std: 36.27

            Latency: 515.08 ms, Average latency: 527.02 ms, Std: 36.25

            As you can see, adding the preview display significantly increases the latency.
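
To see how much of that increase comes from the display itself, I may time the display step on its own, something like this (a sketch; `imgFrame` comes from the loop above):

import time

t0 = time.time()
cv2.imshow('frame', imgFrame.getCvFrame())
cv2.waitKey(1)
print('Display step: {:.2f} ms'.format((time.time() - t0) * 1000))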

            Best,

            Babacar

            • jakaskerl

              Here are the results I obtained by running the code on my Raspberry Pi via an SSH connection:

              • Latency: 108.63 ms, Average latency: 103.88 ms, Standard deviation: 10.33

              • Latency: 112.14 ms, Average latency: 103.90 ms, Standard deviation: 10.32

              • Latency: 115.76 ms, Average latency: 103.92 ms, Standard deviation: 10.33

              • Latency: 99.22 ms, Average latency: 103.91 ms, Standard deviation: 10.32

              • Hi jakaskerl

But how can I launch the program without connecting via SSH? By opening the camera and plugging an HDMI cable into the Raspberry Pi? I tried that, but I'm not receiving any image on my screen.
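
One thing I plan to try (my own assumption, not something from the docs): pointing the program at the Pi's local display before OpenCV creates its window.

import os

# Assumption: a desktop session is running on the Pi's HDMI output as display :0.
os.environ.setdefault('DISPLAY', ':0')

import cv2  # imported after setting DISPLAY so the GUI backend picks it up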

              • jakaskerl

I am using an OAK-D PoE CM4 camera, so all the processing is done directly on the Raspberry Pi board. To measure the latency, I view the camera feed on the Raspberry Pi by connecting via SSH from my computer.

                • Hi,

I'm currently working on a project using the OAK-D PoE CM4 camera, and I'm encountering latency issues when using the DeepSORT algorithm in real time. The latency is around 9 seconds, which is far too high for my project's needs.

I have an NVIDIA Jetson TX2 platform that I'd like to use to speed up the processing of the algorithm. However, I'm unsure how to integrate it, given that the OAK-D PoE CM4 camera is integrated with a Raspberry Pi CM4 board.

                  Do you have any suggestions on how I could share processing tasks between the Raspberry Pi and the Jetson TX2, or any general recommendations on how I might reduce the latency of my system?
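
One direction I am considering (purely a sketch with placeholder names, not from any Luxonis docs): encode frames on the Pi and stream them to the Jetson over a plain socket, then run DeepSORT there.

import socket
import struct

import cv2

# Sender side, running on the Raspberry Pi. 'jetson.local' and port 5555 are
# placeholders, and get_frame_from_oak() is a hypothetical helper wrapping my
# OAK output queue.
sock = socket.create_connection(('jetson.local', 5555))
while True:
    frame = get_frame_from_oak()
    ok, buf = cv2.imencode('.jpg', frame)
    if not ok:
        continue
    data = buf.tobytes()
    sock.sendall(struct.pack('>I', len(data)) + data)  # length-prefixed JPEG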

                  Thanks in advance for your help, and I look forward to your guidance.

                  Best Regards,

                  Babacar

                  • Hi jakaskerl

Yes, I have enabled X11 forwarding using the command `ssh pi@luxonis.local -X`. It was working fine on my MacBook with XQuartz. However, since I switched to Windows, I've been having difficulty resolving the issue, and I can't use XQuartz on Windows.

                    Do you have any other suggestions for resolving this issue with initializing the GTK backend with OpenCV on Windows?

                    Thank you,

                    Babacar

                  • Hello everyone,

                    I'm trying to run a Python program that uses OpenCV to display real-time video output. However, I'm encountering an error during the initialization of the GTK backend with OpenCV. Here's the complete error message I'm receiving:

```
cv2.error: OpenCV(4.6.0) /tmp/pip-wheel-u79916uk/opencv-python_ea2489746b3a43bfb3f2b5331b7ab47a/opencv/modules/highgui/src/window_gtk.cpp:635: error: (-2:Unspecified error) Can't initialize GTK backend in function 'cvInitSystem'
```

                    I've tried several solutions, including exporting the `DISPLAY=:0.0` environment variable and allowing X access with the `xhost +` command. Unfortunately, none of these measures have resolved the issue.

                    I'm using a Windows PC and connecting to the Raspberry Pi of the OAK-D CM4 camera via SSH.
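
As a fallback while this is unresolved, I could record the frames to a video file instead of opening a window, so no GTK/X backend is needed (a sketch; resolution, FPS, and the frame source are placeholders):

import cv2

writer = cv2.VideoWriter('preview.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 30, (1920, 1080))
while True:
    frame = get_next_frame()  # hypothetical: however my frames arrive
    if frame is None:
        break
    writer.write(frame)
writer.release()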

                    If anyone has encountered this problem before or has any suggestions to resolve it, I would greatly appreciate any help you can provide.

                    Thank you in advance!

                    Babacar

                    • erik

                      Thank you very much for pointing me to the DeepSORT demo. This will be immensely helpful for my project.

                      Best regards!

                    • Hello everyone,

                      I have developed an object detection and tracking application on my PC, utilizing YOLOv8 for detection and Deep SORT for tracking. My application draws its own bounding boxes and works perfectly with my webcam. Now, I am looking to integrate it with my OAK-D PoE CM4 camera.

                      For detection, I've seen how to convert my YOLOv8 model using your tools, and I plan to use this converted model for object detection with the DepthAI SDK.

                      However, for tracking, I'm a bit uncertain. I've seen your object tracking example (https://github.com/luxonis/depthai/blob/main/depthai_sdk/examples/NNComponent/object_tracking.py), but it does not utilize Deep SORT. I want to use my own Deep SORT tracking algorithm, but I'm unsure of the best way to integrate it into my application's image processing pipeline.

                      1. Is it possible to use Deep SORT as a tracking algorithm with the DepthAI SDK? If so, are there any examples or guides available on how to do this?

                      2. If using Deep SORT is not feasible, what would be the best tracking algorithm to use with the DepthAI SDK to accurately obtain the position of tracked objects?

3. Is it possible to combine the use of the DepthAI SDK for object detection with Deep SORT for tracking? For instance, could I use the DepthAI SDK to detect objects and then pass these detected objects to Deep SORT for tracking? (A rough sketch of what I have in mind follows below.)
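
To make question 3 concrete, here is the kind of glue code I have in mind (only a sketch; the model path is a placeholder, and I am not certain these are the right SDK calls):

import cv2
from depthai_sdk import OakCamera
from depthai_sdk.classes.packets import DetectionPacket
from deep_sort_realtime.deepsort_tracker import DeepSort

tracker = DeepSort(max_age=30)

def cb(packet: DetectionPacket):
    frame = packet.frame
    h, w = frame.shape[:2]
    # Convert SDK detections into the ([left, top, width, height], confidence, class)
    # tuples that deep_sort_realtime expects.
    raw = []
    for det in packet.img_detections.detections:
        x1, y1 = int(det.xmin * w), int(det.ymin * h)
        x2, y2 = int(det.xmax * w), int(det.ymax * h)
        raw.append(([x1, y1, x2 - x1, y2 - y1], det.confidence, det.label))

    tracks = tracker.update_tracks(raw, frame=frame)
    for track in tracks:
        if not track.is_confirmed():
            continue
        l, t, r, b = map(int, track.to_ltrb())
        cv2.rectangle(frame, (l, t), (r, b), (0, 255, 0), 2)
        cv2.putText(frame, f'ID {track.track_id}', (l, t - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow('DeepSORT', frame)

with OakCamera() as oak:
    color = oak.create_camera('color')
    yolo = oak.create_nn('model/best.json', color)  # placeholder: my converted YOLOv8
    oak.callback(yolo, callback=cb)
    oak.start(blocking=True)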

                      I would greatly appreciate any help or guidance you can provide.

                      Best regards,

                      Babacar

                      • erik

I initially wasn't using a preconfigured RPi image, but after reinstalling with one, it now works perfectly. Thank you!