Hello,
I am trying to perform a 180-degree rotation on 1920x1080 RGB frames from an OAK-D PoE camera using the ImageManip node, but the rotation introduces a very large amount of delay. The plain 1080p feed originally had about 250 ms of latency; after adding the rotation I am measuring approximately 2 seconds.
Any advice on how to perform the rotation while keeping the delay as low as possible?
PS: I am also looking into using VideoEncoder: perform the rotation, encode the frames on the device, send the bitstream to the host, and decode it there. However, I am not sure this approach is feasible given some compatibility issues I am running into.
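To make the PS more concrete, below is a minimal sketch of the kind of pipeline I have in mind. This is only a sketch: as far as I know VideoEncoder itself cannot rotate, so in this version the encoding happens on the device (MJPEG so single frames can be decoded with cv2.imdecode) and the 180-degree rotation is done on the host with cv2.rotate after decoding. The stream name "mjpeg" is just a placeholder, and I have not verified that this actually keeps the delay low:

#!/usr/bin/env python3
import cv2
import depthai as dai

pipeline = dai.Pipeline()

# Camera produces NV12 frames on its 'video' output, which VideoEncoder accepts directly
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setBoardSocket(dai.CameraBoardSocket.RGB)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setVideoSize(1920, 1080)

# Encode to MJPEG so each packet can be decoded independently on the host
videoEncoder = pipeline.create(dai.node.VideoEncoder)
videoEncoder.setDefaultProfilePreset(camRgb.getFps(), dai.VideoEncoderProperties.Profile.MJPEG)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("mjpeg")

camRgb.video.link(videoEncoder.input)
videoEncoder.bitstream.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="mjpeg", maxSize=4, blocking=False)
    while True:
        pkt = q.get()
        # Decode the MJPEG bitstream, then rotate 180 degrees on the host
        frame = cv2.imdecode(pkt.getData(), cv2.IMREAD_COLOR)
        frame = cv2.rotate(frame, cv2.ROTATE_180)
        cv2.imshow("rotated", frame)
        if cv2.waitKey(1) == ord('q'):
            break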
The following is the script I am testing with:
#!/usr/bin/env python3
import cv2
import depthai as dai
from datetime import datetime
# Create pipeline
pipeline = dai.Pipeline()
# Define sources and inputs
camRgb = pipeline.create(dai.node.ColorCamera)
videoEncoder = pipeline.create(dai.node.VideoEncoder)
manipRgb = pipeline.create(dai.node.ImageManip)
# Define outputs and assign stream names to them
videoMjpegOut = pipeline.create(dai.node.XLinkOut)
previewOut = pipeline.create(dai.node.XLinkOut)
videoMjpegOut.setStreamName("video")
previewOut.setStreamName("preview")
# Properties
camRgb.setBoardSocket(dai.CameraBoardSocket.RGB)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setVideoSize(1920, 1080)
videoEncoder.setDefaultProfilePreset(camRgb.getFps(), dai.VideoEncoderProperties.Profile.H265_MAIN)
videoMjpegOut.input.setBlocking(False)
videoMjpegOut.input.setQueueSize(1)
rgbRr = dai.RotatedRect()
rgbRr.center.x, rgbRr.center.y = 1920 // 2, 1080 // 2
rgbRr.size.width, rgbRr.size.height = 1920, 1080
rgbRr.angle = 180
manipRgb.initialConfig.setCropRotatedRect(rgbRr, False)
manipRgb.setMaxOutputFrameSize(6220800)
# Linking
# Link output 'preview' frames (RGB) of ColorCamera to input of ImageManip
# ImageManip does not support NV12 yet.
camRgb.preview.link(manipRgb.inputImage)
# Link output of ImageManip to input of VideoEncoder
manipRgb.out.link(videoEncoder.input)
#camRgb.video.link(videoEncoder.input)
camRgb.preview.link(previewOut.input)
videoEncoder.bitstream.link(videoMjpegOut.input)
# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    #video = device.getOutputQueue(name="video", maxSize=1, blocking=False)
    previewQueue = device.getOutputQueue('preview')
    videoQueue = device.getOutputQueue('video')

    while True:
        previewFrames = previewQueue.tryGetAll()
        for previewFrame in previewFrames:
            cv2.imshow('preview', previewFrame.getData().reshape(previewFrame.getHeight(), previewFrame.getWidth(), 3))

        videoFrames = videoQueue.tryGetAll()
        for videoFrame in videoFrames:
            frame = cv2.imdecode(videoFrame.getData(), cv2.IMREAD_UNCHANGED)
            cv2.imshow('video', frame)

        if cv2.waitKey(1) == ord('q'):
            break
When I run it, I get this output:
[356.613] [ImageManip(2)] [error] Processing failed, potentially unsupported config
[356.613] [VideoEncoder(1)] [warning] Arrived frame type (10) is not either NV12 or YUV400p (8-bit Gray)
I would appreciate some help with this.
Thanks!