• DepthAI
  • Impact of Latency on Practical Applications

Our AI model runs inference on the OAK-D and outputs the desired result (60 FPS).
However, the actual output lags behind the expected output, as shown in the video.
The yellow dots in the video are the OAK-D output and the blue dots are the expected output.

This means that when the OAK-D performs AI model inference, the output does not reflect the current state (latency).

nn.input.setBlocking(False)
nn.input.setQueueSize(1)

I applied these settings according to the official recommendations, but the problem remains.
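
For reference, here is a minimal sketch of the pipeline these settings live in (the model path is illustrative; the 128x128 preview matches our model's input):

import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(128, 128)   # NN input size
cam.setFps(60)

nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("model.blob")   # illustrative path

# Recommended settings: drop stale frames instead of queueing them
nn.input.setBlocking(False)
nn.input.setQueueSize(1)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("nn")
nn.out.link(xout.input)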

Hope someone can help me!!

    Hi YaoHui ,
    NN results will always arrive at the host after the frame itself, so some lag between the two (above 100 ms) is expected. To overcome this delay, I would suggest syncing frames and NN results. Thoughts?
    Thanks, Erik
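
    A rough sketch of host-side syncing by sequence number (the 'q_rgb' / 'q_nn' output queues are illustrative):

    # Match each NN result to the frame it was computed on.
    frames = {}  # sequence number -> ImgFrame

    while True:
        img = q_rgb.tryGet()
        if img is not None:
            frames[img.getSequenceNum()] = img

        det = q_nn.tryGet()
        if det is not None:
            seq = det.getSequenceNum()
            frame = frames.pop(seq, None)
            if frame is not None:
                pass  # 'frame' and 'det' are now in sync
            # Discard older frames that will never get a matching result
            frames = {s: f for s, f in frames.items() if s > seq}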

      Hi erik ,
      I don't need to output the frame to the host, just the X, Y, Z coordinates.
      Even so, do I still need to sync frames and NN results?

      Blue dot: Other Webcams
      Yellow dot: OAK-D
      As shown in the video, the yellow dot is always a small distance behind the blue dot...

        YaoHui from my understanding - the blue dot is obtained by getting frames from a webcam and performing CV operations (e.g. using opencv), whereas the OAK-D was using a NN and object detection to find the circle - is that the case? If so, that's to be expected, as NN inferencing takes about 100-200 ms, whereas traditional CV (which works perfectly for such trivial "find circle" tasks) takes a few ms. Or are you performing CV and just getting frames from the OAK? If so, see the latency docs here.
        Thanks, Erik

          Hi erik ,

          The blue dots use the NN model with inference on the host (images are taken from another webcam), and the yellow dots are the X, Y, and Z world coordinates output directly by the OAK-D (images are not output).

          The NN inference time for the blue dots is greater than 16.6 ms.
          The NN for the yellow dots is optimized so that inference takes less than 14 ms.
          (Inference times verified with OpenVINO.)

          In theory the two should be about the same, but from the results, the yellow dot appears to be more than 300 ms behind.
          According to the docs, it should only be about 105 ms behind.

          Is there a workaround for this part?

            Hi YaoHui ,

            and the yellow dots are the X, Y, and Z world coordinates output directly by the OAK-D (images are not output).

            How are you projecting XYZ (in meters) onto the XY (in pixels) image plane? To measure the latency between the frame and the NN result, or the latency of either arriving at the host computer, you can compare timestamps via message.getTimestamp().
            Thanks, Erik
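
            For example (a sketch; 'msg' here is any message taken from an output queue):

            import depthai as dai

            # Device timestamps are synced to the host monotonic clock,
            # so host-side latency is "now" minus the message timestamp.
            latency_ms = (dai.Clock.now() - msg.getTimestamp()).total_seconds() * 1000
            print(f"Latency: {latency_ms:.1f} ms")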

              Hi erik,

              How are you projecting XYZ (in meters) onto the XY (in pixels) image plane?

              In practice we need world coordinates. To compare accuracy, we convert the world coordinates back to image coordinates on the host side and measure the difference between the two.
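
              The conversion is a standard pinhole projection (a sketch; the intrinsics fx, fy, cx, cy come from the device calibration):

              def world_to_pixel(X, Y, Z, fx, fy, cx, cy):
                  # Project a 3D point in the camera frame onto the image plane
                  u = fx * X / Z + cx
                  v = fy * Y / Z + cy
                  return u, v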

              In current tests there is a latency of 170 ms.
              Is there any way to reduce it?

                Hi YaoHui , Is that the latency of the NN inferencing?

                  Hi erik ,
                  I don't think so, because I have measured the execution time of the NN inference (14 ms).

                    YaoHui how did you measure that latency? Also which latency does the 170ms correspond to?

                      Hi erik ,
                      170 ms is the time interval between the yellow and blue dots.
                      The latency is estimated by displaying both points on the image and counting the number of frames between them.
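
                      Roughly (the frame count below is illustrative):

                      fps = 60
                      frames_behind = 10                       # frames counted between the two dots
                      latency_ms = frames_behind * 1000 / fps  # ~167 ms at 60 FPS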

                        Hi YaoHui ,

                        The latency is estimated by displaying both points on the image and counting the number of frames between them.

                        How did you arrive at exactly 14 ms? Did you have the camera FPS set to 71 (1 s / 71 FPS = 14 ms), with detections exactly one frame behind the original frame?
                        If not the NN, what would be the reason for the latency?

                          Hi erik ,

                          How did you arrive at exactly 14 ms? Did you have the camera FPS set to 71 (1 s / 71 FPS = 14 ms), with detections exactly one frame behind the original frame?
                          If not the NN, what would be the reason for the latency?

                          I used the OpenVINO benchmark_app.exe to verify the inference time of the NN.
                          In fact, the inference time of the NN is 12.38 ms.
                          I set the RGB and stereo cameras to 60 FPS, so the OAK-D should output results every 16 ms (and it does).
                          But the current problem is that the output arrives with a latency of 170 ms.

                            Hi YaoHui , The inference time of the NN is not the same as the latency from inferencing to the host. The openvino benchmark also isn't the same as running the model on the OAK camera with depthai. And 170 ms is about what I would expect from object detection models.

                              Hi erik ,
                              So the conclusion is that the 170 ms latency occurs because the NN is running on the OAK camera with depthai, i.e. it is the latency of the overall program.
                              Do I understand this right?

                                Hi YaoHui ,
                                Another question: was the latency (12 ms) measured on the actual camera (i.e. the Movidius MyriadX VPU), or on a CPU/GPU? Could you share the results? DepthAI and transferring over USB definitely add some latency (some docs here), but likely below 10 ms. So my main guess would be that the inference wasn't done on the actual OAK camera.

                                  Hi erik ,

                                  Inference is on the Movidius MyriadX VPU.
                                  Below is the output of Benchmark_app.exe.

                                  Loading the model to the device
                                  Load network took 1771.79 ms
                                  Setting optimal runtime parameters
                                  Device: MYRIAD
                                  { NETWORK_NAME , torch-jit-export }
                                  { OPTIMAL_NUMBER_OF_INFER_REQUESTS , 4 }
                                  { DEVICE_THERMAL , 34.6395 }
                                  Creating infer requests and preparing input blobs with data
                                  No input files were given: all inputs will be filled with random values!
                                  Test Config 0
                                  image  ([N,C,H,W], u8, {1, 3, 128, 128}, static):      random (image is expected)
                                  Measuring performance (Start inference asynchronously, 4 inference requests, limits: 60000 ms duration)
                                  BENCHMARK IS IN INFERENCE ONLY MODE.
                                  Input blobs will be filled once before performance measurements.
                                  First inference took 12.21 ms
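
                                  (For reference, a typical invocation looks like this; the model path is illustrative:)

                                  benchmark_app.exe -m model.xml -d MYRIAD -t 60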

                                  I have read the relevant docs (some docs here); since I do not send images to the host, is that document of little relevance to me?
                                  To verify whether the NN really runs on the OAK camera, we ran some tests on this point.
                                  For example, executing the program on a Raspberry Pi or a notebook: in both cases the data is output every 16 ms.

                                  10 days later

                                  Hi erik ,

                                  Can I conclude that this is just how the overall OAK-D pipeline operates, and that the overall latency is determined by which DepthAI functions are used (Script, ImageManip, SpatialLocationCalculatorAlgorithm)?

                                    Hi YaoHui ,
                                    I haven't been able to repro this issue - because OpenVINO is a PITA to work with - and I haven't come to any conclusion. But your depthai tests look like what I would expect as well, and the results from openvino seem a bit far-fetched, at least as a send-to-receive result (i.e. the time from when you send an image to the device until the NN results are returned to the host machine).
                                    Thanks, Erik

                                      Hi erik ,

                                      Since we need to calculate world coordinates, we wanted to make sure that performing depth estimation on the OAK-D does not affect the overall speed.
                                      We simulated this by fixing the depth distance (Z = 600 mm) and calculating the world coordinates on the host side (without using the stereo cameras).
                                      From this experiment, we found that when the depth estimation on the OAK-D is cancelled, the overall delay is effectively reduced.
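
                                      The host-side calculation is just the inverse pinhole projection with a fixed depth (a sketch; the intrinsics come from the calibration):

                                      def pixel_to_world(u, v, fx, fy, cx, cy, Z=600.0):
                                          # Back-project a pixel to camera-frame coordinates at a fixed depth (mm)
                                          X = (u - cx) * Z / fx
                                          Y = (v - cy) * Z / fy
                                          return X, Y, Z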
