Hello, I was recently testing the latency of different models for my project using Luxonis cameras. I was wondering whether there is a way to have the model run its predictions only every user-defined n frames, instead of having it predict on every frame and introduce larger latencies into the system. Is there something I can do in the pipeline to make this happen? Thanks.

Hi @stevex0
I'm not sure this would decrease latency, but you can use a Script node to send a frame for inference only every n frames, and otherwise send the frame straight out or discard it.

Here is a similar example.
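For illustration, a minimal sketch of that idea, assuming a depthai v2 pipeline; the skip interval `N`, the stream names, and the MobileNet blob path are placeholders you would replace with your own:

```python
import depthai as dai

N = 5  # hypothetical: run inference on every 5th frame

pipeline = dai.Pipeline()

# Camera producing preview frames sized for the model input
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Script node acting as a frame gate: forwards only every Nth frame
script = pipeline.create(dai.node.Script)
cam.preview.link(script.inputs['frames'])
script.setScript(f"""
counter = 0
while True:
    frame = node.io['frames'].get()
    counter += 1
    if counter % {N} == 0:
        node.io['to_nn'].send(frame)
    # other frames are simply dropped here; they could also be
    # forwarded to a separate output for display
""")

# Detection network; the blob path is a placeholder for your model
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath('mobilenet-ssd.blob')
script.outputs['to_nn'].link(nn.input)

# Send NN results to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName('nn')
nn.out.link(xout.input)
```

Note the NN node never sees the skipped frames at all, so the device only spends inference time on every Nth frame.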

Thanks,
Jaka

Just to confirm something tangentially related: if one does not call .get() or tryGet() on the model's output queue, the model still runs its predictions regardless, right?

Hi @stevex0
If you pass a frame to the NN node, that frame will be processed by the model; this is independent of the .get() call, which happens on the host. But if you never call .get() while the NN is running inference and sending output to the host side, the processed outputs will pile up in the output queue and the pipeline will crash.
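As a hedged host-side sketch (continuing the pipeline above, with the XLinkOut stream named 'nn'): making the output queue non-blocking with a small maxSize drops the oldest messages instead of letting them pile up, which avoids the back-pressure described above.

```python
import depthai as dai

# Assumes `pipeline` from the sketch above, with an XLinkOut stream named 'nn'
with dai.Device(pipeline) as device:
    # A blocking queue that is never drained fills up and backs up the pipeline.
    # blocking=False tells DepthAI to drop the oldest message when the queue
    # is full, so the device side keeps running even if the host falls behind.
    q_nn = device.getOutputQueue(name='nn', maxSize=4, blocking=False)

    while True:
        msg = q_nn.tryGet()  # non-blocking; returns None if nothing is ready
        if msg is not None:
            pass  # process the detections here
```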

Thanks,
Jaka