Hey guys,
I've got an OAK-1-Lite. Normally it can run "small" models like YOLOv5n or YOLOv8n on frames resized to 416x416 or 640x640.
Question: Can I just make the code run inference only once every 60 seconds, so that I could use larger models that take longer per inference?
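For what it's worth, here is a minimal sketch of what I mean by throttling inference to once per minute. `grab_frame`, `run_inference`, and `show` are hypothetical placeholders for the capture/model/display calls, not real DepthAI API:

```python
import time

def should_infer(last_infer_ts, now, interval=60.0):
    """Return True once `interval` seconds have passed since the last inference."""
    return (now - last_infer_ts) >= interval

def loop(grab_frame, run_inference, show):
    """Hypothetical capture loop: display every frame, infer only once a minute."""
    last_ts = float("-inf")   # so the very first frame is inferred
    while True:
        frame = grab_frame()
        if should_infer(last_ts, time.monotonic()):
            run_inference(frame)          # the slow, large model
            last_ts = time.monotonic()
        show(frame)                       # preview keeps updating regardless
```

The idea being that the display loop stays smooth while the expensive model only fires on a timer.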
Moreover, can I use asynchronous code (`async`)?
Could I hand the inference off to the VPU using async I/O, so the CPU keeps capturing and displaying frames until the inference coroutine finishes?
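To make the question concrete, here's a pure-asyncio sketch of that pattern. The `infer` coroutine just uses `asyncio.sleep` to simulate a slow VPU call (DepthAI's API isn't asyncio-native, so in practice this would wrap a non-blocking queue read), and the integer "frames" stand in for camera captures:

```python
import asyncio

async def infer(frame):
    # Simulated slow inference; a real version would poll the device queue.
    await asyncio.sleep(0.05)
    return f"detections-for-{frame}"

async def main():
    task = None
    results = []
    captured = 0
    for frame in range(10):              # stands in for the capture loop
        if task is None:
            task = asyncio.create_task(infer(frame))   # hand a frame to "the VPU"
        elif task.done():
            results.append(task.result())              # collect finished detections
            task = asyncio.create_task(infer(frame))   # start on a fresh frame
        captured += 1                    # capture/display continues every iteration
        await asyncio.sleep(0.01)        # frame period
    if task is not None:
        results.append(await task)       # drain the last in-flight inference
    return captured, results

captured, results = asyncio.run(main())
```

All 10 frames get "captured and displayed" even though each inference takes five frame periods, which is the decoupling I'm after.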
thanks!