• DepthAI-v2
  • Mixing resolutions / sensor models in VideoEncoder leads to errors / artefacts

I realise this might be a bug in the depthai-core API; I'm not sure whether it is fixable.

I encounter two scenarios in which VideoEncoder does not work as expected:

  • First case: I create two VideoEncoders, one for 1200P and one for 800P. The 1200P image shows a large green-patch artefact.

  • Second case: I create two VideoEncoders, one for 400P and one for 800P. The following error shows up:

  • [3.2] [3.552] [system] [critical] Fatal error. Please report to developers. Log: 'Fatal error on MSS CPU: trap: 07, address: 8007D9BC' '0'

    Traceback (most recent call last):

The reproduction code is below:

#!/usr/bin/env python3

import depthai as dai
import cv2
import numpy as np

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and output
camD = pipeline.create(dai.node.MonoCamera)
camC = pipeline.create(dai.node.MonoCamera)
videoEncD = pipeline.create(dai.node.VideoEncoder)
videoEncC = pipeline.create(dai.node.VideoEncoder)
xoutC = pipeline.create(dai.node.XLinkOut)
xoutD = pipeline.create(dai.node.XLinkOut)

xoutC.setStreamName('mjpegc')
xoutD.setStreamName('mjpegd')

# Properties
camD.setBoardSocket(dai.CameraBoardSocket.CAM_D)
camD.setResolution(dai.MonoCameraProperties.SensorResolution.THE_1200_P)
videoEncD.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.MJPEG)

camC.setBoardSocket(dai.CameraBoardSocket.CAM_C)
camC.setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)
videoEncC.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.MJPEG)

# Linking
camD.out.link(videoEncD.input)
videoEncD.bitstream.link(xoutD.input)

camC.out.link(videoEncC.input)
videoEncC.bitstream.link(xoutC.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:

    # Output queue will be used to get the encoded data from the output defined above
    qd = device.getOutputQueue(name="mjpegd", maxSize=30, blocking=True)
    qc = device.getOutputQueue(name="mjpegc", maxSize=30, blocking=True)

    # The .mjpeg file is a raw stream file (not playable yet)
    with open('video.mjpeg', 'wb') as videoFile:
        print("Press Ctrl+C to stop encoding...")
        try:

            while True:
                mjpegPacketd = qd.get()  # Blocking call, will wait until new data arrives
                mjpegPacketc = qc.get()  # Blocking call, will wait until new data arrives

                mat_jpegc = np.frombuffer(mjpegPacketc.getData(), dtype=np.uint8)
                matc = cv2.imdecode(mat_jpegc, cv2.IMREAD_UNCHANGED)

                mat_jpegd = np.frombuffer(mjpegPacketd.getData(), dtype=np.uint8)
                matd = cv2.imdecode(mat_jpegd, cv2.IMREAD_UNCHANGED)

                cv2.imshow("camc", matc)
                cv2.imshow("camd", matd)
                cv2.waitKey(10)


                # mjpegPacket.getData().tofile(videoFile)  # Appends the packet data to the opened file
                # print(len(mjpegPacket.getData()))
        except KeyboardInterrupt:
            # Keyboard interrupt (Ctrl + C) detected
            pass

    print("To view the encoded data, convert the stream file (.mjpeg) into a video file (.mp4) using a command below:")
    print("ffmpeg -framerate 30 -i video.mjpeg -c copy video.mp4")

The above mixes 1200P and 800P; the artefact appears on the decoded 1200P stream.

Changing camD.setResolution(dai.MonoCameraProperties.SensorResolution.THE_1200_P) to 400P yields the fatal error instead.
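For reference, the crashing variant differs from the script above only in the CAM_D resolution setting; everything else is unchanged:

```python
# Crashing variant: 400P on CAM_D, 800P on CAM_C (rest of pipeline unchanged)
camD.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
```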

Hi @Huimin
Thanks for reporting, will try to repro it tomorrow at the office.

Hi @Huimin
I think this is not a VideoEncoder issue but a mono-camera one, since the cameras are connected through FSYNC. The right camera drives the left, so setting a higher resolution on the master camera results in weird flickering (setting the slave camera to the higher resolution works fine), and that flickering in turn breaks the VideoEncoder.
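If the FSYNC coupling is indeed the trigger, one thing worth trying (untested here, and assuming a recent depthai release that exposes CameraControl.FrameSyncMode) is to explicitly turn hardware frame sync off on both mono cameras so neither drives the other:

```python
# Hypothetical workaround sketch: disable hardware frame sync on both
# mono cameras via initialControl, so neither acts as master/slave.
camC.initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.OFF)
camD.initialControl.setFrameSyncMode(dai.CameraControl.FrameSyncMode.OFF)
```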

I have reported this to the devs.

Thanks,
Jaka

@jakaskerl Thanks for your investigation. I think that might not be the root cause on my side, though.

To elaborate a bit more on my setup: I am using FFC-like hardware, with cam_b, cam_c, and cam_d running independently on separate I2C lines. I did not turn on triggering in the Python code, so the cameras are running freely.

Would it be possible for you to test on FFC hardware? I still suspect the cause of the error is running VideoEncoder on multiple cameras set to different SensorResolution values.

    Hi Huimin
    Cam_b and cam_c on FFC boards are linked together by default. This means they share the same 3A settings as well as the same FSYNC signal:

    On OAK-FFC-4P, we have 4 camera ports; A (rgb), B (left), C (right), and D (cam_d). A & D are 4-lane MIPI, and B & C are 2-lane MIPI. Each pair (A&D and B&C) share an I2C bus, and the B&C bus is configured for HW syncing left+right cameras by default. (https://docs.luxonis.com/projects/hardware/en/latest/pages/guides/sync_frames/#oak-ffc-hardware-syncing)

    Thanks,
    Jaka

      @jakaskerl Ah, sorry for the confusion. Correct, we are not using the exact FFC. We have custom hardware with four independent I2C buses, and custom firmware that controls all four cameras independently. Maybe this would be better to check internally on Slack.