• DepthAIHardware
  • Out of Memory Issue with 4K Resolution and Point Cloud in DepthAI Pipeline

Hello everyone,

I've been working on a pipeline that uses a 4K resolution camera combined with point clouds, but I'm running into memory issues. Below is the pipeline code I've written, but I get an "Out of memory" error when creating the pool for point cloud frames.

Pipeline Code:

def create_pipeline(self):
    pipeline = dai.Pipeline()

    # Camera nodes
    camRgb = pipeline.create(dai.node.ColorCamera)
    monoLeft = pipeline.create(dai.node.MonoCamera)
    monoRight = pipeline.create(dai.node.MonoCamera)
    depth = pipeline.create(dai.node.StereoDepth)
    pointcloud = pipeline.create(dai.node.PointCloud)

    # Output streams
    rgbOut = pipeline.create(dai.node.XLinkOut)
    depthOut = pipeline.create(dai.node.XLinkOut)
    pclOut = pipeline.create(dai.node.XLinkOut)
    rgbOut.setStreamName("rgb")
    depthOut.setStreamName("depth")
    pclOut.setStreamName("pcl")

    # Camera configurations
    camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
    camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
    camRgb.setIspScale(1, 1)
    camRgb.setFps(30)

    monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
    monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
    monoLeft.setFps(30)

    monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
    monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
    monoRight.setFps(30)

    # Depth configuration
    depth.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_ACCURACY)
    depth.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_3x3)  # smaller kernel, lower load
    depth.setLeftRightCheck(True)
    depth.setExtendedDisparity(True)  # improves close-range depth (adds compute)
    depth.setSubpixel(False)  # reduces complexity and memory consumption
    depth.setDepthAlign(dai.CameraBoardSocket.CAM_A)

    # Link mono cameras to depth, and depth to point cloud
    monoLeft.out.link(depth.left)
    monoRight.out.link(depth.right)
    depth.depth.link(pointcloud.inputDepth)

    # Outputs
    camRgb.isp.link(rgbOut.input)
    pointcloud.outputPointCloud.link(pclOut.input)
    depth.depth.link(depthOut.input)

    # Reduce buffer sizes for the XLinkOut nodes
    pclOut.input.setBlocking(False)
    pclOut.input.setQueueSize(1)  # limit the point cloud buffer to 1 frame
    rgbOut.input.setBlocking(False)
    rgbOut.input.setQueueSize(1)  # limit the RGB buffer to 1 frame
    depthOut.input.setBlocking(False)
    depthOut.input.setQueueSize(1)  # limit the depth buffer to 1 frame

    return pipeline

The Error I Get:

===Connected to 1844301091399F1200

MXID: 1844301091399F1200

Num of cameras: 3

USB speed: UsbSpeed.SUPER

Board name: OAK-D-LITE

[1844301091399F1200] [3.1.1] [1.219] [PointCloud(4)] [error] Out of memory while creating pool for 'point cloud' frames. Number of frames: 4 each with size: 99532800B

===Connected to 19443010018FF31200

MXID: 19443010018FF31200

Num of cameras: 3

USB speed: UsbSpeed.SUPER

Board name: OAK-D-LITE

[19443010018FF31200] [3.1.2] [1.192] [PointCloud(4)] [error] Out of memory while creating pool for 'point cloud' frames. Number of frames: 4 each with size: 99532800B

Does anyone have suggestions on how to fix this memory issue while still maintaining 4K resolution and point cloud processing? Any advice or tips would be greatly appreciated!

Thanks in advance!
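(For context, the size in the error log is consistent with a full-resolution point cloud if we assume one (x, y, z) triplet of 32-bit floats per depth pixel; that layout is an assumption on my part, but it reproduces the logged number exactly. A quick sanity check in plain Python, no DepthAI needed:)

```python
# One point-cloud frame at 4K: assume an (x, y, z) float32 triplet per pixel
width, height = 3840, 2160
bytes_per_point = 3 * 4                 # three float32 coordinates
frame_size = width * height * bytes_per_point
print(frame_size)                       # 99532800 -- matches the error log
pool_frames = 4                         # pool size reported in the log
print(frame_size * pool_frames / 2**20) # 379.6875 MiB for the pool alone
```

Since RVC2 devices like the OAK-D-Lite have roughly 512 MiB of RAM shared by the whole pipeline, a ~380 MiB pool for the point cloud alone cannot fit.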

    jakaskerl
    Hey, thank you for the quick response! I made the changes you suggested. Now I'm getting the following error:
    [1844301091399F1200] [1.6.1.4.4] [1.048] [PointCloud(4)] [error] Depth frame with 3840 width is not yet supported in PointCloud.

      David_12_RE
      Well, you can't run depth (and therefore point clouds) at max resolution, since RVC2 only supports depth frames up to 1280 pixels wide.

      Thanks,
      Jaka
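
      Under the same 12-bytes-per-point assumption as above, capping the depth output at a supported width also shrinks the frame pool to something the device can hold, e.g. at 1280x720:

      ```python
      bytes_per_point = 3 * 4               # assumed (x, y, z) float32 per pixel
      full = 3840 * 2160 * bytes_per_point  # 4K: 99532800 B per frame
      small = 1280 * 720 * bytes_per_point  # 1280x720: 11059200 B per frame
      print(small, full // small)           # 11059200, 9x smaller per frame
      ```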

      Hey Jaka, thank you very much for your support! I apologize for asking so many questions; I'm relatively new to this field. What would be the optimal pipeline if I want 4K images but the point cloud at a lower resolution? Here's what I've programmed so far:
      def create_pipeline(self):
          pipeline = dai.Pipeline()

          # Create camera nodes
          camRgb = pipeline.create(dai.node.ColorCamera)
          monoLeft = pipeline.create(dai.node.MonoCamera)
          monoRight = pipeline.create(dai.node.MonoCamera)
          depth = pipeline.create(dai.node.StereoDepth)
          pointcloud = pipeline.create(dai.node.PointCloud)
          sync = pipeline.create(dai.node.Sync)
          xOut = pipeline.create(dai.node.XLinkOut)
          xOut.setStreamName("out")
          xOut.input.setBlocking(False)

          # Set 4K resolution for the RGB camera
          camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
          camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
          camRgb.setFps(30)

          # Set 400P resolution for the mono cameras
          monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
          monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
          monoLeft.setFps(30)

          monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
          monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
          monoRight.setFps(30)

          # Configure depth and point cloud
          depth.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_ACCURACY)
          depth.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_7x7)
          depth.setLeftRightCheck(True)
          depth.setExtendedDisparity(False)
          depth.setSubpixel(True)
          depth.setDepthAlign(dai.CameraBoardSocket.CAM_A)

          # Set depth output resolution to 1280x720 for point cloud processing
          depth.setOutputSize(1280, 720)
          pointcloud.setNumFramesPool(1)

          # Link depth to point cloud and synchronize outputs
          monoLeft.out.link(depth.left)
          monoRight.out.link(depth.right)
          depth.depth.link(pointcloud.inputDepth)
          camRgb.isp.link(sync.inputs["rgb"])
          pointcloud.outputPointCloud.link(sync.inputs["pcl"])
          sync.out.link(xOut.input)

          # Additional outputs for the left and right mono cameras
          leftQueue = pipeline.create(dai.node.XLinkOut)
          leftQueue.setStreamName("left")
          monoLeft.out.link(leftQueue.input)

          rightQueue = pipeline.create(dai.node.XLinkOut)
          rightQueue.setStreamName("right")
          monoRight.out.link(rightQueue.input)

          return pipeline

        David_12_RE
        Your current implementation is fine. Make sure the depth size is set to 400P, otherwise you are upscaling, which is heavy on the device.

        Thanks,
        Jaka