Hi!
I was curious to learn some best practices for setting up a streaming pipeline with 30 FPS Full HD image processing. In particular, I want to run a neural network on powerful host hardware (RTX 4090 GPU, i9 processor, …).
I need the video stream together with the corresponding IMU data, so I started from this example:
https://github.com/luxonis/depthai-experiments/blob/master/gen2-syncing/imu-frames-timestamp-sync.py
However, when I introduce several image manipulation operations on the host side, the FPS drops dramatically (to around 5 FPS). When I run the exact same logic with a ZED 2 camera, I have no problem maintaining 30 FPS.
I wonder how I'd need to set up the OAK to achieve the same. Do I need to use threading or something similar?
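For reference, this is roughly the kind of decoupling I had in mind, but I'm not sure it's the right approach: one loop that only drains the device output queue, and a worker thread that does the heavy host-side processing and simply drops frames when it falls behind. The pipeline below is stripped down (no IMU/sync, and the processing step is just a placeholder), so please treat it as a sketch rather than my actual code:

```python
import queue
import threading

import cv2
import depthai as dai

# Minimal pipeline: 1080p color stream at 30 FPS (IMU and syncing omitted here)
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam.setFps(30)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("video")
cam.video.link(xout.input)

frames = queue.Queue(maxsize=2)  # small buffer between reader and worker

def worker():
    # Heavy host-side work lives here, decoupled from frame acquisition
    while True:
        frame = frames.get()
        if frame is None:
            break
        # placeholder for my actual image manipulation / NN inference
        processed = cv2.GaussianBlur(frame, (5, 5), 0)

threading.Thread(target=worker, daemon=True).start()

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="video", maxSize=4, blocking=False)
    while True:
        frame = q.get().getCvFrame()
        try:
            frames.put_nowait(frame)  # drop the frame if the worker is behind
        except queue.Full:
            pass
```

Is this the intended way to keep the acquisition loop at 30 FPS, or is there a better pattern (e.g. doing more of the manipulation on-device)?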
I'd be thankful for any advice!
Thanks!