Hi everyone!
I'm trying to change my pipeline's ColorCamera resolution from 1080P to 4K to get bigger images. I also need to keep the full FOV of the sensor, so I opted to use an ImageManip node with setResizeThumbnail. While this works properly with 1080P images, with 4K images the code crashes with these logs:
[system] [critical] Fatal error. Please report to developers. Log: 'ResourceLocker' '358'
[host] [warning] Device crashed, but no crash dump could be extracted.
I've noticed that everything works if I resize to a larger output; for instance, resize.initialConfig.setResizeThumbnail(800, 800, 0, 0, 0)
does not cause any problem (but that size is then too big for my NN).
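To be clear about the sizes involved, here is my back-of-envelope arithmetic (plain Python; assuming BGR, 3 bytes per pixel). The failing 384x384 output is much smaller than the working 800x800 one, so this doesn't look like a simple output-buffer overflow on my side:

```python
# Back-of-envelope frame sizes for the configurations above (BGR, 3 bytes/pixel)
def frame_bytes(w, h, bpp=3):
    return w * h * bpp

input_4k = frame_bytes(3840, 2160)  # the 4K preview fed into ImageManip
out_384 = frame_bytes(384, 384)     # the thumbnail size that crashes
out_800 = frame_bytes(800, 800)     # the thumbnail size that works

print(input_4k)  # 24883200
print(out_384)   # 442368 -- smaller than the working 800x800 output
print(out_800)   # 1920000
```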
Can you provide any help please?
Here is the MRE, using an OpenVINO neural network. I'm using DepthAI 2.25.0 and an OAK-1 device.
import depthai as dai
import blobconverter
import cv2
OUTPUT_SIZE = 384
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
cam.setInterleaved(False)
cam.setPreviewSize(3840,2160) #4K
resize = pipeline.create(dai.node.ImageManip)
resize.initialConfig.setResizeThumbnail(OUTPUT_SIZE, OUTPUT_SIZE, 0, 0, 0)  # <- This causes the crash; with OUTPUT_SIZE = 800 it works (but that size doesn't fit the NN)
resize.initialConfig.setKeepAspectRatio(False)
resize.setMaxOutputFrameSize(OUTPUT_SIZE * OUTPUT_SIZE * 3)
cam.preview.link(resize.inputImage)
x_out = pipeline.create(dai.node.XLinkOut)
x_out.setStreamName("image")
resize.out.link(x_out.input)
detection_nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
detection_nn.setConfidenceThreshold(0.7)
detection_nn.setBlobPath(blobconverter.from_zoo(name="person-detection-0201"))
resize.out.link(detection_nn.input)
x_out1 = pipeline.create(dai.node.XLinkOut)
x_out1.setStreamName("det")
detection_nn.out.link(x_out1.input)
with dai.Device(pipeline) as device:
    qImg = device.getOutputQueue(name="image", maxSize=3, blocking=False)
    yoloQ = device.getOutputQueue(name="det", maxSize=3, blocking=False)
    cv2.namedWindow("Image", cv2.WINDOW_NORMAL)
    while True:
        img = qImg.get()
        gs = img.getCvFrame()
        dets = yoloQ.get().detections
        cv2.imshow("Image", gs)
        if cv2.waitKey(1) == ord('q'):  # Exit when 'q' is pressed
            break
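For completeness, one workaround I'm considering (untested, and the 1/3 scale factor is my own guess): downscaling on the ISP with setIspScale before ImageManip, so the node never has to shrink a full 4K frame down to 384x384 in a single step while still keeping the full FOV:

```python
import depthai as dai

OUTPUT_SIZE = 384

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
cam.setInterleaved(False)
# Downscale 3840x2160 -> 1280x720 on the ISP, keeping the full FOV,
# so ImageManip only has to thumbnail from 720p instead of 4K
cam.setIspScale(1, 3)
cam.setPreviewSize(1280, 720)

resize = pipeline.create(dai.node.ImageManip)
resize.initialConfig.setResizeThumbnail(OUTPUT_SIZE, OUTPUT_SIZE, 0, 0, 0)
resize.setMaxOutputFrameSize(OUTPUT_SIZE * OUTPUT_SIZE * 3)
cam.preview.link(resize.inputImage)
```

I don't know yet whether this avoids the 'ResourceLocker' crash, but it should give the same letterboxed full-FOV output for the NN.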
Thank you for your help!
Best regards
Simone