Hello,
I have a pipeline that freezes on some runs but works fine on others. I tried to debug it but couldn't pinpoint the cause.
I have a generalized pipeline that can run on both RVC2 and RVC4 devices. Depending on the use case, any node in the pipeline can be left out at creation time, except for a Sync node, which is always created because the app is designed to process MessageGroup outputs. So when I run the pipeline on an OAK-1-MAX, the Sync node has only one input. I know this is redundant and adds overhead, but I'm exploring ways to fix the issue without changing the structure of the pipeline.
When using the OAK-1-MAX, I need to get the most out of the IMX582 sensor. With the CameraControl settings I need, the pipeline freezes in some cases. In the MRE below, it freezes after processing 7-8 frames, but in many runs it works perfectly fine. This might be hard to debug because I don't know how to reproduce it reliably. I'm not sure I'm on the right track, but this is what I tried:
- Configured cam node pool sizes
- Configured device SIPP buffer sizes
- Set the inputs to non-blocking (including XLinkOut)
- Enabled pipeline debugging to get the nodes' states. I couldn't attach the log file here, so I uploaded it to Drive. The pipeline state was logged at 5-second intervals after the pipeline froze.
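For context, this is roughly how the debug output was enabled (using depthai's standard DEPTHAI_LEVEL environment variable; set it before depthai is imported so the library picks it up):

```python
import os

# Enable verbose depthai logging; must be set before `import depthai`
# so the library reads it at initialization.
os.environ["DEPTHAI_LEVEL"] = "debug"
```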
When I removed the Sync node from the MRE, the pipeline never froze (I tested ~20 times). In the actual app, I also tried removing the Sync node and building a MessageGroup output containing the ImgFrame from the Camera node myself, which simulates the presence of a Sync node. With that change, the pipeline and app worked without issues. Is this the only way to resolve the problem? I appreciate any help and insights.
Thank you!
import depthai as dai
import datetime as dt
import time
import cv2
INPUT_NAME = "color_img"
DISPLAY = False
pipeline = dai.Pipeline()
# Camera on CAM_A (the IMX582 on the OAK-1-MAX)
cam = pipeline.create(dai.node.Camera).build(boardSocket=dai.CameraBoardSocket.CAM_A)
cam.setIspNumFramesPool(3)
cam.setOutputsNumFramesPool(3)
cam.setMaxSizePoolIsp(200 * 1024 * 1024)
cam.setOutputsMaxSizePool(200 * 1024 * 1024)
cam.initialControl.setSharpness(0)
cam.initialControl.setLumaDenoise(1)
cam.initialControl.setChromaDenoise(1)
cam.initialControl.setBrightness(1)
cam.initialControl.setAutoFocusMode(dai.CameraControl.AutoFocusMode.OFF)
cam.initialControl.setManualFocus(160)
cam.initialControl.setAutoExposureEnable()
cam.initialControl.setAutoExposureLimit(5000)
cam.initialControl.setAutoExposureCompensation(2)
cam.initialControl.setAutoExposureRegion(2480, 1140, 2240, 4825)
cam.initialControl.setHdr(True)
cam.initialControl.setMisc("hdr-exposure-ratio", 4)
cam.initialControl.setMisc("hdr-local-tone-weight", 0.75)
cam.initialControl.setMisc("hdr-exposure-base", "long")
# Sync node (only one input in this MRE)
sync = pipeline.create(dai.node.Sync)
time_delta = dt.timedelta(milliseconds=10)
sync.setSyncThreshold(time_delta)
sync.setSyncAttempts(0)
sync.setRunOnHost(False)
sync.inputs[INPUT_NAME].setBlocking(False)
sync.inputs[INPUT_NAME].setMaxSize(8)
# Full-resolution 5312x6000 output at 10 FPS
colorOut = cam.requestOutput(size=(5312, 6000), fps=10)
colorOut.link(sync.inputs[INPUT_NAME])
videoQueue = sync.out.createOutputQueue(maxSize=8, blocking=False)
pipeline.start()
try:
    while pipeline.isRunning():
        time.sleep(0.005)
        msg_group = videoQueue.tryGet()
        if msg_group is None:
            continue
        for name, payload in msg_group:
            latencyMs = (dai.Clock.now() - payload.getTimestamp()).total_seconds() * 1000
            print(f"Frame_id: [{payload.getSequenceNum()}] | Latency: [{latencyMs:.2f}] ms | Img_type: [{payload.getType()}]")
            if DISPLAY and name == INPUT_NAME:
                cv2.namedWindow(INPUT_NAME, cv2.WINDOW_NORMAL)
                cv2.resizeWindow(INPUT_NAME, 850, 900)
                cv2.imshow(INPUT_NAME, payload.getCvFrame())
                cv2.waitKey(1)
except KeyboardInterrupt:
    print("Exiting...")
    pipeline.stop()
except Exception as e:
    print(str(e))
    print("Exiting...")
    pipeline.stop()