Predictions every n-frames
Hello, I was recently testing the latency of different models for my project using Luxonis cameras. I was wondering whether there is a way to have the model run its predictions every user-defined n frames, instead of having it predict on every frame and introducing larger latencies into the system. Is there something I can do in the pipeline to make this happen? Thanks.
Just to confirm something tangentially related: if one does not call .get() or tryGet() on the model's output in the pipeline, the model still predicts regardless, right?
Hi @stevex0
If you pass a frame to the NN node, that frame will be processed by the model. This is independent of the .get() call, which happens on the host. But if you don't call .get() while the NN is running inference and sending output to the host side, the processed outputs will pile up and the pipeline will crash.
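One way to get the every-n-frames behavior is to decimate the frame stream before it reaches the NN node, for example with an on-device Script node that only forwards every Nth frame. Below is a minimal sketch assuming the DepthAI v2 API; the stream names (`frames_in`/`frames_out`), the value of N, and the blob path are placeholders I chose for illustration, not anything from this thread:

```python
# Sketch: run NN inference only on every Nth frame by decimating the
# frame stream on-device before it reaches the NN node.
# N, the stream names, and the blob path are assumptions.

N = 5  # user-defined: run inference on every 5th frame

def should_infer(frame_index, n=N):
    """Decimation rule mirrored by the Script node body below."""
    return frame_index % n == 0

# Body of the on-device Script node: forward every Nth frame, drop the rest.
SCRIPT_BODY = f"""
counter = 0
while True:
    frame = node.io['frames_in'].get()
    if counter % {N} == 0:
        node.io['frames_out'].send(frame)
    counter += 1
"""

def build_pipeline():
    # Requires the depthai package; running it needs a connected OAK device.
    import depthai as dai

    pipeline = dai.Pipeline()

    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)
    cam.setInterleaved(False)

    # Script node that decimates the frame stream
    script = pipeline.create(dai.node.Script)
    script.setScript(SCRIPT_BODY)

    nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
    nn.setBlobPath("model.blob")  # placeholder path

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("nn")

    cam.preview.link(script.inputs['frames_in'])
    script.outputs['frames_out'].link(nn.input)
    nn.out.link(xout.input)
    return pipeline
```

Since only every Nth frame reaches the NN node, the NN only produces output for those frames, so the host-side queue fills N times more slowly; you should still drain it regularly (e.g. with tryGet() in your host loop) so outputs don't pile up.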
Thanks,
Jaka