Hi community,

I am working with depthai-experiments/gen2-cumulative-object-counting at master · luxonis/depthai-experiments · GitHub and my custom model. Detection and counting work perfectly when the input image comes from the camera. The problem occurs when I want to use a video from the host: detection no longer works and I get random objects.
I read the following discussion, where you solved the problem using a passthrough frame: Detection problem with Mobile Net on video from host - Luxonis Forum.
I tried to apply the same concept, but it does not work, since the cumulative object counting example already uses the passthrough to feed the object tracker. How can I solve this issue and make detection work well with the host video?

Thank you very much

    Hi martin181818
    I think the discussion you are linking to solves the problem of syncing the results. Is this the only problem you are experiencing? As I understand it, the NN is returning totally wrong results, not just delayed ones. Can you confirm?

    Thanks,
    Jaka

      jakaskerl

      Yes, exactly, it is returning wrong results. I tried to use the passthrough as shown in that discussion, but it didn't work, since there is already another passthrough linked to the ObjectTracker. (When I use only the detection it works fine, but with detection + object tracker it no longer works.)
      Thank you

      jakaskerl

      It is still not working.
      This is the linking I am using:

      nnOut = pipeline.create(dai.node.XLinkOut)
      nnOut.setStreamName("nn")
      nn.out.link(nnOut.input)

      nnPass = pipeline.create(dai.node.XLinkOut)
      nnPass.setStreamName("pass")
      nn.passthrough.link(nnPass.input)

      objectTracker = pipeline.create(dai.node.ObjectTracker)
      objectTracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
      objectTracker.setTrackerIdAssignmentPolicy(dai.TrackerIdAssignmentPolicy.SMALLEST_ID)

      nn.passthrough.link(objectTracker.inputTrackerFrame)
      nn.passthrough.link(objectTracker.inputDetectionFrame)
      nn.out.link(objectTracker.inputDetections)

      trackerOut = pipeline.create(dai.node.XLinkOut)
      trackerOut.setStreamName("tracklets")
      objectTracker.out.link(trackerOut.input)

      then I created the queues

      qIn = device.getInputQueue(name="inFrame")
      qDet = device.getOutputQueue(name="nn", maxSize=6, blocking=True)
      qPass = device.getOutputQueue("pass")
      tracklets = device.getOutputQueue("tracklets", 4, False)

      and finally to get the frame:

      frame = qPass.get().getCvFrame()

      It is still not working when I try to use the tracker. How can I solve it? Thank you for all your support.
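      For reference, the host-side loop that feeds the "inFrame" queue and reads back the passthrough looks roughly like this. This is a sketch only: the video path, the 300x300 NN input size, and the ImgFrame setup are assumptions based on the experiment's main.py, and `device` is the dai.Device created from the pipeline above. A mismatched frame size or type at this step is a typical cause of garbage detections.

```python
import cv2
import depthai as dai
import numpy as np

NN_W, NN_H = 300, 300  # assumption: the NN input size from the experiment

def to_planar(arr: np.ndarray, shape: tuple) -> np.ndarray:
    # Resize to the NN input size, then convert interleaved HWC to planar CHW
    return cv2.resize(arr, shape).transpose(2, 0, 1).flatten()

# 'device' is the dai.Device created from the pipeline above
qIn = device.getInputQueue(name="inFrame")
qPass = device.getOutputQueue("pass")

cap = cv2.VideoCapture("video.mp4")  # hypothetical video path
while cap.isOpened():
    ok, host_frame = cap.read()
    if not ok:
        break
    img = dai.ImgFrame()
    img.setType(dai.ImgFrame.Type.BGR888p)  # planar BGR, matching to_planar output
    img.setData(to_planar(host_frame, (NN_W, NN_H)))
    img.setWidth(NN_W)
    img.setHeight(NN_H)
    qIn.send(img)

    # Display the exact frame the NN consumed (the passthrough)
    frame = qPass.get().getCvFrame()
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) == ord("q"):
        break
```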

        Hi martin181818
        I don't see the link from the video to the NN node. Could you please add an MRE (maybe swap your model with one that is already available in depthai-experiments)?

        Thanks,
        Jaka

          jakaskerl

          Yes, sorry, my bad, I missed it when copying the code, but I am using it:
          xinFrame = pipeline.create(dai.node.XLinkIn)
          xinFrame.setStreamName("inFrame")
          xinFrame.out.link(nn.input)

          I am using exactly the same code as main.py from the cumulative counting project, with the new passthrough added. The code works perfectly when the image comes from a camera node, and the model I am using works. The problem occurs only with the video: if I add the second passthrough, detection works fine but tracking does not work well.
          What modifications should I make to the original main.py, since everything works well with the camera capture?

            Hi martin181818
            It's very difficult to say without an MRE. I tested the experiment and it works fine with both capture and video. If you can confirm that the stock model from the experiment works on both the camera and the video feed, then there must be something wrong with your model. There is no reason why it wouldn't work, besides perhaps bandwidth issues, which are very unlikely.

            Also make sure you are correctly parsing the video file so it works with the NN node, mainly:

            def to_planar(arr: np.ndarray, shape: tuple) -> np.ndarray:
                # Resize to the NN input size, then convert interleaved HWC to planar CHW
                return cv2.resize(arr, shape).transpose(2, 0, 1).flatten()

            Hope this helps,
            Jaka

              jakaskerl erik

              Thank you very much. I am already using the to_planar function.
              At the moment I am trying to run the same code without any modification. The model works fine when I use the camera, so there is nothing wrong with the model.
              Thank you for the SDK suggestion, but is there any solution using XLinkIn or the passthrough to solve the issue in the main code?

              thank you very much for your support
