jakaskerl
Thank you for your suggestion about using FFmpeg for lossless recording! I really appreciate your response.
I'd like to follow up with a more specific question about my project.
I'm setting up 20 OAK 4S cameras for parallel streaming and recording. My current pipeline uses NV12 format:
import depthai as dai

# VIDEO_W, VIDEO_H, FPS and IP_ADDR are defined elsewhere in my script

def build_pipeline(p: dai.Pipeline):
    # Request NV12 output from CAM_A at the target resolution and frame rate
    cam = p.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
    out = cam.requestOutput((VIDEO_W, VIDEO_H), dai.ImgFrame.Type.NV12, fps=FPS)
    q = out.createOutputQueue(maxSize=6, blocking=True)
    return q

def main():
    # Connect to a single camera by IP and start its pipeline
    info = dai.DeviceInfo(IP_ADDR)
    with dai.Pipeline(dai.Device(info)) as p:
        video_q = build_pipeline(p)
        p.start()
Frame Format: I need raw frames for lossless recording, but for streaming I convert to MJPEG. Should I stick with NV12, or would a different format (RGB888i, BGR888i, RAW8) perform better with 20 cameras? (A rough sketch of my current host-side handling is below.)
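To show what I mean by converting on the host, this is roughly how I handle each frame (OpenCV does the JPEG encode; handle_frame, raw_file and the quality value are placeholders, not my exact code):

import cv2

def handle_frame(img_frame, raw_file, jpeg_quality=85):
    # Lossless path: append the raw NV12 bytes to a file (wrapped into a container later with FFmpeg)
    raw_file.write(img_frame.getData().tobytes())

    # Streaming path: convert NV12 to BGR on the host and encode a JPEG for the MJPEG stream
    bgr = img_frame.getCvFrame()
    ok, jpeg = cv2.imencode(".jpg", bgr, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    return jpeg.tobytes() if ok else None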
Performance: With 20 parallel streams, how can I prevent frame drops? Any specific optimizations for handling this scale?
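For context, this is the kind of per-camera reader I'm considering to limit drops: one thread per device that drains the output queue immediately, feeding a small host-side queue that discards the oldest frame when consumers fall behind (camera_reader and the queue size are just a sketch):

import queue
import threading

def camera_reader(video_q, host_q: queue.Queue, stop: threading.Event):
    # Drain the DepthAI output queue as fast as possible so frames don't back up on the device side
    while not stop.is_set():
        frame = video_q.get()  # blocking get from the camera's output queue
        try:
            host_q.put_nowait(frame)
        except queue.Full:
            try:
                host_q.get_nowait()  # drop the oldest frame rather than stalling the reader
            except queue.Empty:
                pass
            host_q.put_nowait(frame)

# One reader thread plus one small host queue per camera, e.g.:
# host_q = queue.Queue(maxsize=4)
# threading.Thread(target=camera_reader, args=(video_q, host_q, stop_event), daemon=True).start()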
Setup Advice: Any suggestions for my architecture? The cameras are used only for video capture (no onboard ML), with a FastAPI backend handling streaming and recording.
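On the backend side, this is roughly how I plan to expose each stream from FastAPI as a multipart MJPEG response (get_latest_jpeg is a placeholder for whatever returns the newest encoded frame for a given camera):

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

def mjpeg_generator(cam_id: int):
    # Yields an endless multipart stream of the latest JPEG for one camera
    while True:
        jpeg = get_latest_jpeg(cam_id)  # placeholder: returns the newest encoded frame as bytes
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg + b"\r\n")

@app.get("/stream/{cam_id}")
def stream(cam_id: int):
    return StreamingResponse(mjpeg_generator(cam_id),
                             media_type="multipart/x-mixed-replace; boundary=frame")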
Thanks for your help!