leeor

  • May 11, 2024
  • Joined Jan 30, 2024
  • 0 best answers
  • jakaskerl
    I'll try the trace command you gave, thanks!
    I also removed all the logic from my code and just tested the loop (with displaying the image), and the FPS was still about 12-13 😐

    What is the recommended input size for v8? Is it 320x320?
    I want to show the image at the end, and when I tried a 320x320 size (the FPS was about 19-20), it was too small to see anything.
    How can I run the detections on a smaller size but then scale the output (with bounding boxes, etc.) back up to at least 640, and still keep good quality? I saw this link, but it looks like it does the opposite (from a larger image to a smaller one?). Or which of the options is more relevant?

    @jakaskerl Last question: since I do transfer learning, and on the Luxonis site for converting weights to blobs I saw only YOLO versions, I've limited myself to those. Is there an option to use different models? (I'd need to train them and somehow convert to .blob.)
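
    For reference, DepthAI detection coordinates are normalized to 0..1, so they can be drawn on a display frame of any size regardless of the NN input size. A minimal sketch, where small_frame and detections are placeholders for the NN passthrough frame and the parsed results:

    import cv2
    import numpy as np

    # Scale a normalized (0..1) bbox to pixel coordinates of a given frame.
    def frame_norm(frame, bbox):
        norm = np.full(len(bbox), frame.shape[0])
        norm[::2] = frame.shape[1]
        return (np.clip(np.array(bbox), 0, 1) * norm).astype(int)

    big = cv2.resize(small_frame, (640, 640))  # upscale the 320x320 frame for display
    for det in detections:
        x1, y1, x2, y2 = frame_norm(big, (det.xmin, det.ymin, det.xmax, det.ymax))
        cv2.rectangle(big, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", big)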

    • Trying to understand the FPS limit using Yolo v8 nano to do counting/tracking of objects.

      I'm running a script that basically counts objects once they cross a line.
      I'm using a custom, single-class YOLOv8 nano. I'm only able to get about 12-13 FPS, and I want to know how much I can hope to improve it.
      Technical:
      I'm using an OAK-1 PoE connected to a Raspberry Pi (model 4B).

      In addition, the Python script runs several other things (including writing to a file once in a while, etc.)

      I would appreciate any suggestions on how to improve it or what I can expect as the FPS limit.
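
      A quick way to see where the ceiling sits is to time the NN output queue on the host. A minimal sketch, assuming the pipeline has an XLinkOut stream named "nn" and that 'pipeline' is the already-built pipeline from the script:

      import time
      import depthai as dai

      with dai.Device(pipeline) as device:
          q = device.getOutputQueue(name="nn", maxSize=4, blocking=False)
          count, t0 = 0, time.monotonic()
          while True:
              if q.tryGet() is not None:  # one NN result per inference
                  count += 1
              dt = time.monotonic() - t0
              if dt >= 1.0:
                  print(f"NN FPS: {count / dt:.1f}")
                  count, t0 = 0, time.monotonic()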

    • AnhTuNguyen Hi!
      Can you send images/explain how you trained the model with a mono cam?

      I've collected data using the mono cam, but I don't know how to train the model (YOLOv8) or how to deploy it back to the camera afterwards.
      Did you use tools.ultralytics to convert it to .blob?

      Thanks!
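
      For context, the usual Ultralytics training flow looks roughly like this (the dataset path is hypothetical; mono-cam frames saved as ordinary image files work, since they are loaded as 3-channel images):

      from ultralytics import YOLO  # pip install ultralytics

      # "dataset.yaml" is a placeholder for your own data config.
      model = YOLO("yolov8n.pt")  # start from the pretrained nano weights
      model.train(data="dataset.yaml", epochs=100, imgsz=416)
      # The resulting best.pt can then be uploaded to https://tools.luxonis.com
      # to produce the .blob for the camera.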

    • Thanks, Jaka.
      So, should the input frames be linked to both the detection and the tracking nodes?

      Other than that, do you see any other issues?

    • Hello,

      I'm trying to run object detection and tracking on a video using a custom-trained YOLO architecture.

      I've followed several tutorials I found online, but I can't make it work. I believe I'm missing something with the linking. I've simplified the code posted here to focus on what I believe is most important, but I can share more as needed.
      I'm using the OAK-1 PoE.


      import cv2
      import depthai as dai

      # Create pipeline
      pipeline = dai.Pipeline()
      objectTracker = pipeline.create(dai.node.ObjectTracker)
      detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
      
      xinFrame = pipeline.create(dai.node.XLinkIn)
      trackerOut = pipeline.create(dai.node.XLinkOut)
      xlinkOut = pipeline.create(dai.node.XLinkOut)
      nnOut = pipeline.create(dai.node.XLinkOut)
      
      xinFrame.setStreamName("inFrame")
      xlinkOut.setStreamName("trackerFrame")
      trackerOut.setStreamName("tracklets")
      nnOut.setStreamName("nn")
      
      # Network specific settings
      detectionNetwork.setBlobPath(nnPath)
      # … other network stuff here
      
      
      objectTracker.setDetectionLabelsToTrack([0]) # track only class 0
      objectTracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
      objectTracker.setTrackerIdAssignmentPolicy(dai.TrackerIdAssignmentPolicy.UNIQUE_ID)
      objectTracker.setTrackerThreshold(0.5)
      
      # Linking
      xinFrame.out.link(objectTracker.inputTrackerFrame) # frames go to tracker object
      detectionNetwork.out.link(nnOut.input) # detected info goes to output NN Q
      detectionNetwork.out.link(objectTracker.inputDetections) # detected info goes to tracker object input port
      detectionNetwork.passthrough.link(objectTracker.inputDetectionFrame) # (img) frames pass through the detection node to the tracker node
      objectTracker.out.link(trackerOut.input) # tracked info goes to tracker out Q
      objectTracker.passthroughTrackerFrame.link(xlinkOut.input) # frames pass through the tracker to go to xlink out Q
      
      def isValid(x): return "None" if not x else "Valid"
      
      with dai.Device(pipeline) as device: 
        print('Connected cameras: ', device.getConnectedCameras())
        qIn = device.getInputQueue(name="inFrame") # input queue for video frames
        trackerFrameQ = device.getOutputQueue(name="trackerFrame", maxSize=4) # passthrough frames from the tracker
        tracklets = device.getOutputQueue(name="tracklets", maxSize=4) # tracklet (tracking) info
        qDet = device.getOutputQueue(name="nn", maxSize=4) # detection info
        detections = [] 
        frame = None 
        video_path = "./plain_video.mp4"
        cap = cv2.VideoCapture(video_path) 
        while cap.isOpened(): 
          valid, frame = cap.read()
          if not valid:
            print("failed to read frame from video")
            break
         
          img = dai.ImgFrame() 
          img.setData(frame) 
          img.setType(dai.ImgFrame.Type.BGR888p) 
          img.setHeight(640) 
          img.setWidth(640)
      
          trackFrame = trackerFrameQ.tryGet() 
          inDet = qDet.tryGet() 
          track = tracklets.tryGet() 
          if trackFrame is None or track is None or inDet is None: continue
      
          print("got something!") # I never get to this point here!

      I never get to the "got something" print. If I change tryGet to get, it just hangs there forever.
      I assume the queues never receive the required images, which is why it waits, but I couldn't figure out where my linking is wrong. I've tried so many different options. I would appreciate any help.
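
      For what it's worth, two things the snippet above never does are feeding the detection network and actually sending frames to the device. A possible sketch (names match the snippet, but this is an assumption about the intended wiring, not a confirmed fix):

      # Frames from the host must also reach the detection network:
      xinFrame.out.link(detectionNetwork.input)

      # Inside the capture loop: BGR888p is planar, so the interleaved
      # OpenCV frame needs resizing and transposing before being sent.
      frame = cv2.resize(frame, (640, 640))
      img = dai.ImgFrame()
      img.setType(dai.ImgFrame.Type.BGR888p)
      img.setWidth(640)
      img.setHeight(640)
      img.setData(frame.transpose(2, 0, 1).flatten())
      qIn.send(img)  # push the frame to the device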

      DepthAI
      #oak

    • Interesting, I'll check into it.
      Thanks!

    • Thanks @jakaskerl and @Priyam26
      I tried 5 SHAVEs; it didn't impact the performance.

      Another question if you might know.
      I have 2 cameras in two different places, but otherwise they are pretty much identical. One gets 14-15 FPS, the second 8-9. My connection to the second host is also very slow: when I scp a file there, it takes much longer than with the other camera. Running the bandwidth test, the camera with the 15 FPS returned about 990, while the one with the bad connection managed only about 90.

      Is there any connection between these facts and the lower FPS? All the processing is done on the camera, so I'm not sure if there is a connection or just a coincidence.

      • Priyam26 Hi!
        I appreciate the feedback!

        Currently it's connected over a PoE cable (which I learned last night isn't ideal). However, I use the OAK-1 PoE, so I'm not sure there is any other option for me.

        Also, it's using 6 SHAVEs; I'll look into that as well, thanks. I heard from a friend that he got an error when he changed the SHAVE value from 6, but I'll try.
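
        For reference, the SHAVE count is baked into the blob at compile time, so changing it means recompiling rather than editing the pipeline. A sketch with the blobconverter package (the ONNX filename is a placeholder):

        import blobconverter  # pip install blobconverter

        # "yolov8n_custom.onnx" is hypothetical - point it at your own export.
        blob_path = blobconverter.from_onnx(
            model="yolov8n_custom.onnx",
            data_type="FP16",
            shaves=5,  # number of SHAVE cores compiled into the blob
        )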

      • jakaskerl Thank you for the feedback.
        Is there a way that you know of to train a model for this edge device (such as tiny yolo, mobilenet, or anything else)?

        I will try to use the v6 today, but if you are aware of other models that can be trained on a custom dataset and used on the OAK, I'd be happy to know which ones!

        • Hello,

          I'm looking for a better solution to do object detection/tracking on a custom dataset.

          I'm currently running YOLOv8 nano on the OAK camera and get about 13-14 FPS. The camera is connected to a Pi (model 4B).

          I checked other examples from the depthai-python/examples repo, mainly MobileNet-SSD and Tiny YOLOv4, and they run at 30+ FPS. However, both models seem to be deprecated/old, and I couldn't find a way to convert their weights files to the required .blob (https://tools.luxonis.com/ seems to support only YOLOv5 and newer).

          Are there other supported object detection models?
          Is there any up-to-date resource for getting the correct weights files from the older models? (Both Tiny YOLO and MobileNet used TF 1.x; I train everything in Colab, which has dropped support for TF 1.x.)
          Lastly, and less preferred: would changing the Pi to something else, such as a Jetson, help in any way?

          DepthAI Machine Learning #object-detection #fps
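
          For comparison, MobileNet-SSD is still straightforward to run through the dedicated DepthAI node, with a pre-converted blob pulled from the Luxonis model zoo. A minimal pipeline sketch (zoo model name as published by Luxonis):

          import blobconverter  # pip install blobconverter
          import depthai as dai

          pipeline = dai.Pipeline()

          cam = pipeline.create(dai.node.ColorCamera)
          cam.setPreviewSize(300, 300)  # MobileNet-SSD expects 300x300 input
          cam.setInterleaved(False)

          nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
          nn.setConfidenceThreshold(0.5)
          nn.setBlobPath(blobconverter.from_zoo(name="mobilenet-ssd", shaves=6))
          cam.preview.link(nn.input)

          xout = pipeline.create(dai.node.XLinkOut)
          xout.setStreamName("det")
          nn.out.link(xout.input)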