Hello, I have a question about improving the FPS of YOLOv7. It works, but the FPS is very low: 3 FPS. I am using an OAK-D W PoE camera with depthai 2.21.2.0. I have created the following pipeline:

which is the same as this code example.

I trained YOLOv7 on a custom dataset, then converted it to a .blob file using https://tools.luxonis.com/ with 5 SHAVEs selected. I set the color camera resolution to 1080p and also adjusted the FPS. I tried setting the FPS and the Isp3aFps to the same value to prevent 3A from running on every frame. I'm not sure whether this is the correct approach, or whether I need to set the MonoCamera FPS to the same value as well.
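As a sketch of the settings described above (the FPS value is a placeholder, not the poster's exact configuration):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera: 1080p, with 3A throttled to the same rate as capture
cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam.setFps(25)          # placeholder value
cam.setIsp3aFps(25)     # run auto-exposure/white-balance at the same rate

# Mono cameras feeding StereoDepth; matching their FPS to the color
# camera keeps depth and detections in step
for socket in (dai.CameraBoardSocket.LEFT, dai.CameraBoardSocket.RIGHT):
    mono = pipeline.create(dai.node.MonoCamera)
    mono.setBoardSocket(socket)
    mono.setFps(25)
```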
I have enabled the INFO log level, and the CMX memory usage is high. However, it didn't print the usage of SHAVEs and CMX slices as shown in the documentation:

    Based on your suggestion, I added a VideoEncoder node, which increased the FPS to 9 (still too low!) and reduced the LeonOS CPU usage to 77%. However, a new issue has emerged with YOLO's output bounding boxes: they are now inaccurate and misaligned with the detected objects. This discrepancy arises because the input image for YOLO comes from the ColorCamera, while the output image comes from the VideoEncoder.
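A likely cause of the misalignment: by default the square NN preview is an aspect-ratio-preserving center crop of the 16:9 frame, so detections normalized on the preview must be shifted by the crop offset before being drawn on the encoded full frame. A minimal sketch of that mapping (the helper name is mine, and it assumes a square center crop):

```python
def map_bbox_to_full_frame(bbox, frame_w, frame_h):
    """Map a bbox normalized (0..1) on a square center-crop of a
    16:9 frame back to pixel coordinates on the full frame."""
    crop = frame_h                  # square crop side equals frame height
    x_off = (frame_w - crop) // 2   # horizontal offset of the crop
    xmin, ymin, xmax, ymax = bbox
    return (int(x_off + xmin * crop), int(ymin * crop),
            int(x_off + xmax * crop), int(ymax * crop))

# A detection covering the whole preview maps to the central square
# of a 1920x1080 frame:
print(map_bbox_to_full_frame((0.0, 0.0, 1.0, 1.0), 1920, 1080))
# → (420, 0, 1500, 1080)
```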

    Furthermore, during my troubleshooting I observed that the PoE link reported a speed of 1000 Mbps, with a downlink of 181.9 Mbps and an uplink of 153.9 Mbps. I believe this could be relevant to understanding the overall system performance.

    Now, addressing your questions:

    1. How can I feed the image from the VideoEncoder into the image input of YOLO? I attempted this, but the image sizes did not match, even after using an ImageManip node for resizing.

    2. How can I further increase the FPS?

      9 days later

      Hello @erik,

      Based on your suggestions, here is an update:

      1. I have successfully aligned the input image of the YOLO node with the VideoEncoder output by using the "stretch" technique for inference.
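The "stretch" approach presumably corresponds to disabling the aspect-ratio-preserving crop on the preview, so the square NN input is a stretched version of the full FOV and normalized boxes line up with the encoded frame directly. A minimal sketch (resolution and preview size are placeholders):

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam.setPreviewSize(416, 416)            # NN input size
cam.setPreviewKeepAspectRatio(False)    # "stretch": no center crop
```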

      2. I have not encountered any bandwidth limitations.

      However, the FPS is still only 11.

      On another note, I attempted to add a Script node to send a SpatialLocationCalculatorConfig message to the SpatialLocationCalculator node for configuration purposes. However, I ran into a problem importing modules inside the setScript section.

      So how can I integrate the required modules? And is it possible to establish an SSH connection to the camera?


        Hi souhamseibi,
        You can't run custom libraries in the Script node; more info here:

        You also can't SSH into an RVC2-based camera, as it's an embedded system (RVC3 and beyond run Linux, and you can SSH into those cameras).
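For illustration, one way to build such a config without any imports: the Script node exposes DepthAI message classes as built-ins, so (assuming SpatialLocationCalculatorConfig and the geometry types are among them; the stream name here is hypothetical) a config could be constructed and sent like this:

```python
import depthai as dai

pipeline = dai.Pipeline()

script = pipeline.create(dai.node.Script)
script.setScript("""
# Inside the Script node: no external imports are possible, but the
# DepthAI message types are available as built-ins (assumption).
cfg = SpatialLocationCalculatorConfig()
data = SpatialLocationCalculatorConfigData()
data.roi = Rect(Point2f(0.4, 0.4), Point2f(0.6, 0.6))
cfg.addROI(data)
node.io['spatial_cfg'].send(cfg)
""")

calc = pipeline.create(dai.node.SpatialLocationCalculator)
script.outputs['spatial_cfg'].link(calc.inputConfig)
```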

        Regarding the FPS limit, please submit an MRE: https://docs.luxonis.com/en/latest/pages/support/#depthai-issue

        Thanks, Erik

        12 days later

        Hi @erik,
        Thank you very much for your feedback. I have been trying continuously to resolve the FPS limitation, but unfortunately have been unable to identify the root cause of the problem. To assist you in resolving the issue, I have provided an MRE. I hope this helps in finding a solution.

          souhamseibi I have refactored the demo so it uses the SDK, PR here. Subpixel is a big problem, as it requires quite some resources, especially the 3 CMX slices (StereoDepth will use 1 without subpixel), which means that if you compile NNs for 6 SHAVEs, only 1 thread will be able to run at the same time. So you'd need to compile the NN for 5 SHAVEs. More info on HW resources here: https://docs.luxonis.com/projects/hardware/en/latest/pages/rvc/rvc2.html#hardware-blocks-and-accelerators
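The trade-off Erik describes could be expressed roughly as follows (the model name is a placeholder, and the poster converted the blob via tools.luxonis.com rather than blobconverter):

```python
import blobconverter
import depthai as dai

# Compile the NN for 5 SHAVEs so two NN threads can still run while
# subpixel StereoDepth holds 3 CMX slices (placeholder model name).
blob_path = blobconverter.from_zoo(name="yolov7tiny_coco_416x416",
                                   zoo_type="depthai", shaves=5)

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setSubpixel(False)   # alternatively, free 2 CMX slices by
                            # disabling subpixel when it isn't needed
```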

          4 months later

          souhamseibi

          Hi,

          I am wondering… can you explain how you trained your dataset for the OAK?

          I ran into some errors, and I am new to machine learning…

          Thanks…