Hi Thor
The setNumInferenceThreads parameter on the NeuralNetwork node determines how many threads the node uses to run the network. The optimal value depends on your specific setup, for example the number of SHAVEs you compiled your model for.
The parameter accepts 0, 1, or 2. A value of 0 means AUTO, which lets the system determine the optimal number of threads on its own.
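For reference, here is a minimal sketch of where this is set in a DepthAI pipeline (the blob path is a placeholder for your own compiled model):

```python
import depthai as dai

pipeline = dai.Pipeline()

nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("model.blob")   # placeholder: path to your compiled blob
nn.setNumInferenceThreads(2)   # 0 = AUTO, or explicitly 1 or 2
```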
In your case, you mentioned that you are not seeing a significant difference in frames per second (FPS) when changing the value of setNumInferenceThreads. This can happen for several reasons: inference may be bottlenecked by something other than thread count, such as the complexity of the model, the input size, or the hardware capabilities.
https://discuss.luxonis.com/d/2354-using-setispscale-results-in-number-of-shaves-warning/2
It's also worth noting that the YOLOv8n model you're using might simply not achieve a higher FPS due to its complexity. You could try a simpler model, or reduce the frame rate of the camera feeding the network so it matches the speed the inference itself can sustain.
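If you go the frame-rate route, capping the camera FPS is a one-liner. A sketch, assuming a ColorCamera feeds the network (the FPS value and blob path are placeholders):

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setFps(15)  # placeholder: set this to the FPS your NN actually achieves

nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("model.blob")  # placeholder path
cam.preview.link(nn.input)    # camera frames go straight into the NN
```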
Another approach to improve the FPS is to use short-term tracking, which lets you track objects between frames and thereby reduces the need to run object detection on every frame. This works well with NN models that can't achieve 30 FPS on their own.
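The idea behind this can be sketched in plain Python (this is just an illustration of the pattern, not the DepthAI API; `detect` and `track` are hypothetical stand-ins for your NN and a lightweight tracker):

```python
def process_stream(frames, detect, track, detect_every=5):
    """Run the expensive detector only on every N-th frame;
    in between, a cheap short-term tracker carries the boxes forward."""
    boxes = []
    results = []
    for i, frame in enumerate(frames):
        if i % detect_every == 0:
            boxes = detect(frame)        # expensive NN inference
        else:
            boxes = track(frame, boxes)  # cheap short-term update
        results.append(boxes)
    return results
```

On OAK devices the same pattern runs on-device, so the host only sees the tracked results.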
Thanks,
Jaka