• help with frame output when using the h264 or h265 codec

I need to output the H.264 or H.265 video stream from the camera using cv2 or PIL. This is necessary to transfer a video stream from one computer to another. The problem is that I do not know how to display frames when using one of these codecs.
This is how I output frames with the MJPEG codec, but when I use H.264/H.265, cv2.imdecode(frame, cv2.IMREAD_COLOR) returns None:

import cv2
import depthai as dai

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
xoutVideo = pipeline.create(dai.node.XLinkOut)

xoutVideo.setStreamName("video")

camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setVideoSize(1920, 1080)
camRgb.setFps(60)
videoEnc = pipeline.create(dai.node.VideoEncoder)
videoEnc.setDefaultProfilePreset(60, dai.VideoEncoderProperties.Profile.MJPEG)

camRgb.video.link(videoEnc.input)
videoEnc.bitstream.link(xoutVideo.input)

with dai.Device(pipeline) as device:

    video = device.getOutputQueue(name="video", maxSize=1, blocking=False)

    while True:
        videoIn = video.get()

        frame = videoIn.getData()

        frame = cv2.imdecode(frame, cv2.IMREAD_COLOR)
        cv2.imshow("video", frame)

        if cv2.waitKey(1) == ord('q'):
            break

    Hi houzd
    The issue is that when you use the H.264 or H.265 codec, the bitstream data is different from that of MJPEG. The MJPEG codec compresses each frame separately as a JPEG, so you can simply decode each frame with cv2.imdecode(). However, H.264 and H.265 are inter-frame codecs: most frames are encoded as differences from previous frames, so an individual data chunk usually cannot be decoded into an image on its own — which is why cv2.imdecode() returns None.
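    If you do want to decode the H.26x stream live on the host, you need a streaming decoder that can consume the elementary stream incrementally — PyAV can do this. A minimal sketch (untested, assumes `pip install av`; the `decode_chunk` helper is just an illustration, not a DepthAI API):

    ```python
    import av

    # Incremental decoder for a raw H.26x elementary stream.
    # Use "h264" instead of "hevc" if you pick the H.264 profile.
    codec = av.CodecContext.create("hevc", "r")

    def decode_chunk(chunk: bytes):
        """Yield decoded BGR frames from one chunk of the bitstream.

        parse() buffers bytes until complete packets are available, so a
        single call may yield nothing (a packet can span several chunks).
        """
        for packet in codec.parse(chunk):
            for frame in codec.decode(packet):
                yield frame.to_ndarray(format="bgr24")
    ```

    In your loop you would then replace the cv2.imdecode() call with something like `for img in decode_chunk(videoIn.getData().tobytes()): cv2.imshow("video", img)`.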

    I'm not sure why you are decoding the frames on the host side if you want to send the data to another computer.
    I would recommend using something like: https://github.com/luxonis/depthai-experiments/tree/master/gen2-mjpeg-streaming
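    Since the goal is to get the stream to another machine, the usual pattern is to send the encoded packets over a socket and decode them on the receiving side. A minimal sketch of length-prefixed TCP framing (untested; `send_packet`/`recv_packet` are my own helpers, not a DepthAI API):

    ```python
    import socket
    import struct

    def send_packet(sock: socket.socket, payload: bytes) -> None:
        # 4-byte big-endian length prefix, then one encoded bitstream chunk
        sock.sendall(struct.pack(">I", len(payload)) + payload)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        # Keep reading until exactly n bytes have arrived
        buf = b""
        while len(buf) < n:
            part = sock.recv(n - len(buf))
            if not part:
                raise ConnectionError("socket closed mid-packet")
            buf += part
        return buf

    def recv_packet(sock: socket.socket) -> bytes:
        (length,) = struct.unpack(">I", recv_exact(sock, 4))
        return recv_exact(sock, length)
    ```

    On the sender you would call `send_packet(conn, videoIn.getData().tobytes())` inside the queue loop; on the receiver, feed each `recv_packet()` result to a decoder or append it to a file.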

    Is there any specific reason you need H26x over MJPEG?
    Here is GPT-aided code for decoding H26x:

    import cv2
    import depthai as dai
    import tempfile
    
    pipeline = dai.Pipeline()
    
    camRgb = pipeline.create(dai.node.ColorCamera)
    xoutVideo = pipeline.create(dai.node.XLinkOut)
    
    xoutVideo.setStreamName("video")
    
    camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
    camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
    camRgb.setVideoSize(1920, 1080)
    camRgb.setFps(60)
    videoEnc = pipeline.create(dai.node.VideoEncoder)
    # The preset FPS should match camRgb.setFps() above
    videoEnc.setDefaultProfilePreset(60, dai.VideoEncoderProperties.Profile.H265)  # or Profile.H264
    
    camRgb.video.link(videoEnc.input)
    videoEnc.bitstream.link(xoutVideo.input)
    
    # Create a temporary file to write the encoded bitstream to
    # (delete=False so the file persists after the handle is garbage-collected)
    temp_file = tempfile.NamedTemporaryFile(suffix='.h265', delete=False).name  # or '.h264' if you're using H.264
    
    with dai.Device(pipeline) as device:
    
        video = device.getOutputQueue(name="video", maxSize=1, blocking=False)
    
        # Write the bitstream to the temporary file.
        # cv2.waitKey() only works while a HighGUI window is open, so stop
        # after a fixed number of frames instead (~5 s at 60 fps).
        with open(temp_file, 'wb') as f:
            for _ in range(300):
                videoIn = video.get()
                f.write(videoIn.getData().tobytes())
    
    # Now that the video data is written, read and display the video
    cap = cv2.VideoCapture(temp_file)
    
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow('video', frame)
        if cv2.waitKey(1) == ord('q'):
            break
    
    cap.release()
    cv2.destroyAllWindows()

    I haven't tested it but I believe it should work.

    Thanks,
    Jaka