• DepthAI
  • Fatal error. Please report to developers. Log: 'ImageManipHelper' '61'

I am trying to test OAK-D-Lite pipelines by sending a recorded .mp4 file back through the device, basically following the Video & MobilenetSSD example code. I started by simplifying the code to feed the .mp4 file into an XLinkIn node, send the frames straight back out through an XLinkOut node, and view the output with cv2.imshow(), and that all worked fine.
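
The stripped-down passthrough version looked roughly like this (a sketch of the idea, not the exact file):

import depthai as dai

# Minimal XLinkIn -> XLinkOut passthrough: frames sent from the host come straight back out
pipeline = dai.Pipeline()
xin = pipeline.createXLinkIn()
xin.setStreamName("inFrame")
xout = pipeline.createXLinkOut()
xout.setStreamName("outFrame")
xin.out.link(xout.input)
# Host side: send dai.ImgFrame messages to "inFrame" and read them back from "outFrame"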

However, when I added an ImageManip node between the XLinkIn and XLinkOut nodes, I got the fatal error below. I have used an ImageManip node with the same parameters in another pipeline and it worked fine. Did I do something wrong, or did I come across a bug of some sort?

The following is the code I am running:

import cv2
import depthai as dai
import numpy as np
from time import monotonic

# Define Frame
FRAME_SIZE = (640, 400)
DET_INPUT_SIZE = (300,300)

# Define input file and capture source
fileName = "Test_Videos/test640x400.mp4"

# Start defining a pipeline
pipeline = dai.Pipeline()

# Define an input stream node
xinFrame_in = pipeline.createXLinkIn()
xinFrame_in.setStreamName("inFrame")

# Create ImageManip node
manip = pipeline.createImageManip()                                  # create the imageManip node
manip.initialConfig.setResize(DET_INPUT_SIZE[0], DET_INPUT_SIZE[1])  # scale image to detection NN need
manip.initialConfig.setKeepAspectRatio(False)

# Create a output stream node
x_manip_out = pipeline.createXLinkOut()
x_manip_out.setStreamName("outFrame")

# Link input stream to manip to output stream
xinFrame_in.out.link(manip.inputImage)
manip.out.link(x_manip_out.input)

# Start pipeline
with dai.Device(pipeline) as device:

    # Input queue will be used to send video frames from the file to the device.
    q_inFrame = device.getInputQueue(name="inFrame")

    # Output queue to be used to view what is sent to the nn.
    q_outFrame = device.getOutputQueue(name="outFrame", maxSize=1, blocking=False)

    frame = None

    def to_planar(arr: np.ndarray, shape: tuple) -> np.ndarray:
        return cv2.resize(arr, shape).transpose(2, 0, 1).flatten()

    cap = cv2.VideoCapture(fileName)

    while cap.isOpened():

        # Get frame from file and send to xLink input
        ret, frame = cap.read()
        img = dai.ImgFrame()
        img.setData(to_planar(frame, (FRAME_SIZE[0], FRAME_SIZE[1])))
        img.setTimestamp(monotonic())
        img.setWidth(FRAME_SIZE[0])
        img.setHeight(FRAME_SIZE[1])
        q_inFrame.send(img)

        out_manip = q_outFrame.get()
        manip_frame = out_manip.getCvFrame()

        # Capture the key pressed
        key_pressed = cv2.waitKey(1) & 0xff

        # Stop the program if Esc key was pressed
        if key_pressed == 27:
            break

        # Display the video input frame and the manip output
        cv2.imshow("Direct video from file", frame)
        cv2.imshow("manip output", manip_frame)

cap.release()
cv2.destroyAllWindows()

And here is the error I got:

francis@raspberrypi:~/Desktop/learningOAK-D-Lite $ /bin/python /home/francis/Desktop/learningOAK-D-Lite/pipeline_file_in_imageManip.py
[18443010F1AECE1200] [247.678] [system] [critical] Fatal error. Please report to developers. Log: 'ImageManipHelper' '61'
Traceback (most recent call last):
  File "/home/francis/Desktop/learningOAK-D-Lite/pipeline_file_in_imageManip.py", line 71, in <module>
    manip_frame = out_manip.getCvFrame()
AttributeError: 'NoneType' object has no attribute 'getCvFrame'
Stack trace (most recent call last):
#14   Object "/bin/python", at 0x587533, in 
#13   Object "/lib/aarch64-linux-gnu/libc.so.6", at 0x7f8bdb2217, in __libc_start_main
#12   Object "/bin/python", at 0x587637, in Py_BytesMain
#11   Object "/bin/python", at 0x5b79eb, in Py_RunMain
#10   Object "/bin/python", at 0x5c958f, in Py_FinalizeEx
#9    Object "/bin/python", at 0x5cdde3, in 
#8    Object "/bin/python", at 0x5ce40f, in _PyGC_CollectNoFail
#7    Object "/bin/python", at 0x485b1b, in 
#6    Object "/bin/python", at 0x5bdabf, in 
#5    Object "/bin/python", at 0x525723, in PyDict_Clear
#4    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f796b006f, in 
#3    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f7976fa77, in 
#2    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f798e6a03, in dai::DataOutputQueue::~DataOutputQueue()
#1    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f798e3c97, in 
#0    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f799e29a4, in 
Segmentation fault (Invalid permissions for mapped object [0x7f796c8b68])
Segmentation fault

    Hi FrancisTse,
    What's the depthai version you are using?
    Thanks, Erik

    Hello Erik, I built my depthai install in January of 2022, so it is probably rather old. How do I find out what version I have, and how would I upgrade it to the latest? I am running depthai on a Raspberry Pi 4 board with Raspberry Pi OS:

    francis@raspberrypi:~ $ cat /etc/os-release
    PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
    NAME="Debian GNU/Linux"
    VERSION_ID="11"
    VERSION="11 (bullseye)"
    VERSION_CODENAME=bullseye
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"

    BTW, are depthai and depthai-sdk two different installations? Do I have to update each separately?

    Thanks,
    Francis.
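
    For reference, a quick way to check the installed depthai version from Python, plus the usual pip upgrade path (a generic sketch assuming a standard pip install, not confirmed in this thread):

    # Print the version of the depthai package currently importable from Python
    import depthai as dai
    print(dai.__version__)

    # Upgrading is normally done from the shell with pip, e.g.:
    #   python3 -m pip install --upgrade depthai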


      Hello Erik, it turned out that I had depthai version 2.15.0.0, and it is now updated to 2.19.1.0. The critical error went away, but I now get a different error, and the pipeline seems to get stuck and never finishes:

      francis@raspberrypi:~/Desktop/learningOAK-D-Lite $ /bin/python /home/francis/Desktop/learningOAK-D-Lite/pipeline_file_in_imageManip.py
      [18443010F1AECE1200] [1.1.1] [1.319] [ImageManip(1)] [error] Not possible to create warp params. Error: WARP_SWCH_ERR_UNSUPORTED_IMAGE_FORMAT 
      
      [18443010F1AECE1200] [1.1.1] [1.319] [ImageManip(1)] [error] Invalid configuration or input image - skipping frame

      Any ideas? BTW, I tested the direct XLinkIn-to-XLinkOut code with the updated depthai and it still works as before: the video is read from the file and displayed with cv2.imshow().

      Thanks,
      Francis.
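
      One detail in that error message worth flagging (an assumption on my part, not something confirmed here): the ImgFrame built on the host in the script above never has its pixel format set, so ImageManip may not know how to interpret the incoming buffer. Declaring the type explicitly would change the frame-sending part of the loop to roughly this:

      # Hypothetical tweak to the loop above: declare the buffer layout so ImageManip
      # knows the input format (BGR888p matches the planar data produced by to_planar()).
      img = dai.ImgFrame()
      img.setType(dai.ImgFrame.Type.BGR888p)  # assumption: planar BGR
      img.setData(to_planar(frame, (FRAME_SIZE[0], FRAME_SIZE[1])))
      img.setTimestamp(monotonic())
      img.setWidth(FRAME_SIZE[0])
      img.setHeight(FRAME_SIZE[1])
      q_inFrame.send(img)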


        Hi FrancisTse,
        Please zip all needed files and I can take a look at this.
        Thanks, Erik

        Hello Erik, how can I attach the zipped folder? I have tried many times using the "Press or paste to upload" button on the bottom left, but without success.


          Hi FrancisTse,
          Perhaps in a GitHub issue, or on Google Drive...
          Thanks, Erik