Our AI model runs inference on the OAK-D and outputs the desired result at 60 FPS.
However, at the moment the actual output appears to lag behind the expected output, as shown in the video.
The yellow dots in the video are the OAK-D output and the blue dots are the expected output.
That means that when the OAK-D performs AI model inference, the output does not reflect the current state (there is latency).
nn.input.setBlocking(False)
nn.input.setQueueSize(1)
I applied these settings following the official recommendations, but the problem persists.
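For context, here is a minimal sketch of roughly how my pipeline is set up (assuming the DepthAI Gen2 Python API, a MobileNet-style detection network, and a hypothetical blob path; the host-side output queue is also set to drop stale packets):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Camera producing preview frames for the NN
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)   # match the NN input size
cam.setInterleaved(False)
cam.setFps(60)

# Detection network (hypothetical blob path for illustration)
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("model.blob")
nn.setConfidenceThreshold(0.5)

# Recommended settings: do not let frames queue up in front of the NN
nn.input.setBlocking(False)
nn.input.setQueueSize(1)

cam.preview.link(nn.input)

# Send NN results back to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("nn")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    # Host-side queue also drops stale packets so only the newest result is read
    q_nn = device.getOutputQueue(name="nn", maxSize=1, blocking=False)
    while True:
        detections = q_nn.get()  # latest inference result
        # ... compare detections against the expected (blue-dot) positions ...
```

My understanding is that both the device-side NN input and the host-side output queue need to be non-blocking with size 1 so stale results are discarded; is there anything else in this setup that could add latency?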
Hope someone can help me!!