Hello,

def create_pipeline(window="video", fps=30):
    print(window)
    dict = {
        '1080': dai.ColorCameraProperties.SensorResolution.THE_1080_P,
        '12mp': dai.ColorCameraProperties.SensorResolution.THE_12_MP,
        '4k': dai.ColorCameraProperties.SensorResolution.THE_4_K,
    }
    pipeline = dai.Pipeline()

    pipeline.setXLinkChunkSize(0)           # might decrease the latency
    camRgb = pipeline.create(dai.node.ColorCamera)
    camRgb.setInterleaved(False)            # False: planar output (this controls interleaved vs. planar layout, not color order)
    camRgb.setFps(fps)
    camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)

    rgbOut = pipeline.create(dai.node.XLinkOut) # XLinkOut node sends data from the device to the host via XLink
    rgbOut.setStreamName("rgb")
    rgbOut.input.setBlocking(False)
    rgbOut.input.setQueueSize(1)

    sensorRes = '4k'
    args.res = sensorRes
    camRgb.setResolution(dict[sensorRes])    # select the sensor resolution
    
    return pipeline

args.height = camRgb.getVideoHeight()

I set the camera sensor resolution to '4k' here, so the frame height should be 2160 pixels. However, the frame I obtain from my rgb message has a height of 1080:

    pipeline = create_pipeline(window='video', fps=FPS)      
    # Connect to device and start pipeline
    with dai.Device() as device:    # DepthAI's API is synchronous: calls block the main thread until they complete
        device.startPipeline(pipeline)
        qRgb = device.getOutputQueue('rgb', maxSize=4, blocking=False)
        while True:
            msg = qRgb.tryGet()
            if msg is not None:
                # The timestamp is taken at mid-exposure: with a rolling shutter this best represents when the frame was captured
                ts = msg.getTimestamp(dai.CameraExposureOffset.MIDDLE).total_seconds()
                frameRgb = msg.getCvFrame()
                cv2.imshow("video", frameRgb)
                frame_height = frameRgb.shape[0]
                if cv2.waitKey(1) == ord('q'):
                    break
            else:
                logging.debug(f"[LOOP] [RGB] FRAME MSG is: {msg}. No RGB frame available yet.")

Then, this print statement:

    print(f"[MATH] FRAME HEIGHT FROM MSG OF CAMERA: {frame_height} | DEFINED FRAME HEIGHT: {args.height} -> SHOULD BE THE SAME")

gives me:
[MATH] FRAME HEIGHT FROM MSG OF CAMERA: 1080 | DEFINED FRAME HEIGHT: 2160 -> SHOULD BE THE SAME

What might the problem be here?

Thanks
Uce

    Uce
    You don't link the camera's video output (camRgb.video) to rgbOut.input, so .getVideoHeight() doesn't describe the frames you actually receive.

    Also, rename the dictionary: dict is a built-in type in Python, and shadowing it breaks any later dict() call in that scope.
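    To see why the shadowing matters, a minimal plain-Python sketch (no DepthAI needed; `resolutions` is just an illustrative replacement name):

```python
# A descriptive name leaves the builtin untouched.
resolutions = {'1080p': 1080, '4k': 2160}
print(dict(a=1))  # the builtin still works: {'a': 1}

dict = {'4k': 2160}  # shadow the builtin, as in the original snippet
try:
    dict(a=1)  # now fails: the name refers to our dictionary, not the type
except TypeError as err:
    print("shadowed:", err)
```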

    Thanks,
    Jaka
