We measured the processing latency on the device side with https://github.com/luxonis/depthai-python/blob/main/examples/04_rgb_encoding.py
- default 4K resolution, FPS lowered to 29 with cam.setFps(29), because VENC takes longer than the frame time to process at 30 FPS, which caused latency to build up to 8x the frame time. We'll need to improve this, both the VENC processing (if possible) and properly skipping source frames.
0. MIPI readout time (2lane): 23.8ms
1. ISP: 14.3ms
2. PPENC: 10.0ms
3. VidENC: 34.2ms
=== total latency after readout completed (1+2+3): 58.5ms
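The build-up mechanism can be sketched with the numbers above: at 30 FPS the frame period (~33.3ms) is shorter than the 34.2ms VidENC time, so a backlog forms until an internal queue fills. This is a simplified model, not the actual firmware; the 8-frame queue depth is an assumption inferred from the observed "up to 8x frame-time" latency.

```python
# Sketch of why encoding slower than the frame interval causes latency
# build-up at 4K. VENC_MS is the measured VidENC time from above; the
# 8-frame backlog cap is an assumed internal queue depth, not a known value.
VENC_MS = 34.2  # measured 4K VidENC time per frame, in ms

def backlog_ms(fps, max_frames=8):
    """Steady-state extra latency when encode time exceeds the frame period."""
    period_ms = 1000.0 / fps
    if VENC_MS <= period_ms:
        return 0.0  # encoder keeps up with the source, no build-up
    # Each frame slips a little further behind; the backlog grows until the
    # queue is full, after which frames wait roughly max_frames periods.
    return max_frames * period_ms

print(backlog_ms(30))  # 1000/30 ~= 33.3ms < 34.2ms: ~8x frame-time build-up
print(backlog_ms(29))  # 1000/29 ~= 34.5ms > 34.2ms: encoder keeps up
```

This is why dropping just one frame per second (30 to 29 FPS) is enough: it pushes the frame period back above the encode time, so the queue never fills.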
- resolution changed to 1080p, 30 FPS:
With a bitrate of around 20Mbps, sending the stream over Ethernet shouldn't take too much time.
0. MIPI readout time (2lane): 8.5ms
1. ISP: 3.8ms
2. PPENC: 4.7ms
3. VidENC: 8.8ms
=== total latency after readout completed (1+2+3): 17.3ms
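As a quick sanity check, the reported totals are just the sum of the post-readout stages; adding the MIPI readout time gives the device-side latency from end of exposure (a minimal check of the measurements above, nothing device-specific):

```python
# Per-stage measurements from above, in ms.
stages_4k    = {"readout": 23.8, "isp": 14.3, "ppenc": 10.0, "venc": 34.2}
stages_1080p = {"readout": 8.5,  "isp": 3.8,  "ppenc": 4.7,  "venc": 8.8}

def post_readout_ms(stages):
    """Latency after MIPI readout completes: ISP + PPENC + VidENC."""
    return stages["isp"] + stages["ppenc"] + stages["venc"]

print(round(post_readout_ms(stages_4k), 1))     # matches the 58.5ms 4K total
print(round(post_readout_ms(stages_1080p), 1))  # matches the 17.3ms 1080p total
print(round(stages_1080p["readout"] + post_readout_ms(stages_1080p), 1))
# 1080p device-side latency including readout: ~25.8ms
```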
I think we should be able to get the Ethernet-incurred latency to less than 100ms with a normal MTU, and less than 40ms with 9K jumbo frames.
Then it would depend how much the decoding takes on the host (HW accelerators, etc).
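Rough arithmetic behind the MTU comparison, under stated assumptions: a ~20Mbps stream at 30 FPS, per-packet handling (not raw wire time) dominating the Ethernet cost, and IPv4+UDP headers; the 28-byte overhead and the framing are illustrative, not a description of the actual transport.

```python
# Back-of-envelope packet counts per encoded frame at ~20Mbps / 30 FPS.
# Assumes IPv4 + UDP (28 bytes of headers per packet); purely illustrative.
BITRATE_BPS = 20_000_000
FPS = 30

frame_bytes = BITRATE_BPS / 8 / FPS  # ~83KB per encoded frame on average

def packets_per_frame(mtu, header_overhead=28):
    payload = mtu - header_overhead   # usable bytes per packet
    return int(-(-frame_bytes // payload))  # ceiling division

print(packets_per_frame(1500))  # standard 1500-byte MTU
print(packets_per_frame(9000))  # 9K jumbo frames: roughly 6x fewer packets
```

Fewer, larger packets means less per-packet processing on both ends, which is where the jumbo-frame latency win would come from.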
So the takeaway for anyone wanting to lower latency on 4K video encoding and decoding: for now (until we improve this), reduce the frame rate of the 4K stream from 30 FPS to 29 FPS, which avoids the latency build-up and results in up to an 8x reduction in latency!