I believe I have asked this question before, but I am also trying to run these Python scripts (pointcloud_control.py and visualize_pointcloud.py) from the depthai-python GitHub repo. Is there any way for me to fix this issue?

[Open3D WARNING] The number of points is 0 when creating axis-aligned bounding box.

[14442C10E133B3CF00] [169.254.1.222] [6.994] [StereoDepth(3)] [error] Disparity/depth width must be multiple of 16, but RGB camera width is 427. Set output size explicitly using 'setOutputSize(width, height)'.

[14442C10E133B3CF00] [169.254.1.222] [1720528918.440] [host] [warning] Monitor thread (device: 14442C10E133B3CF00 [169.254.1.222]) - ping was missed, closing the device connection

[14442C10E133B3CF00] [169.254.1.222] [1720528928.189] [host] [warning] Device crashed, but no crash dump could be extracted.

Traceback (most recent call last):

File "c:\Users\deg6fni\Desktop\test_run\pcl.py", line 81, in <module>

inMessage = q.get()

            ^^^^^^^

RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'out' (X_LINKERROR)'

    Hi gdeanrexroth
    Which device are you using (so I can fix the example)? Looks like the scaling/alignment changes the resolution to an unsupported size for the stereoDepth. Camera size is not scaled correctly it seems.

    Thanks,
    Jaka
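
    The firmware error above is the depth/disparity width not being a multiple of 16 after the RGB stream is scaled and the depth is aligned to it. Below is a minimal sketch (not the fixed example itself) of one way to satisfy that constraint, assuming the stock example layout with mono cameras and an RGB sensor on CAM_A; on an OAK-D SR the sockets differ, but the idea is the same: round the aligned width down to a multiple of 16 and pass it to setOutputSize().

    import depthai as dai

    pipeline = dai.Pipeline()

    # RGB camera that depth will be aligned to; downscaling it can produce a
    # width (e.g. ~427 px) that is not a multiple of 16
    camRgb = pipeline.create(dai.node.ColorCamera)
    camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)
    camRgb.setIspScale(1, 3)

    monoLeft = pipeline.create(dai.node.MonoCamera)
    monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
    monoRight = pipeline.create(dai.node.MonoCamera)
    monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)

    stereo = pipeline.create(dai.node.StereoDepth)
    monoLeft.out.link(stereo.left)
    monoRight.out.link(stereo.right)
    stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)  # align depth to the RGB stream

    # Set the stereo output size explicitly, rounding the width down to a
    # multiple of 16 so the firmware accepts it (427 -> 416, for example)
    w, h = camRgb.getIspSize()
    stereo.setOutputSize(w - w % 16, h)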

      a month later

      gdeanrexroth
      Also, I am using:
      depthai-sdk version: 1.15.0
      depthai-viewer version: 0.2.4

      with Python 3.11.9. I have two point cloud scripts that work; however, I have done everything I could to generate non-sparse point clouds. Yet everything is still very sparse, and the point clouds are not as fully populated as they could be.
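
      For denser output it may help to tune the StereoDepth node itself before changing anything host-side. A rough sketch, with purely illustrative values that are not taken from either example:

      import depthai as dai

      pipeline = dai.Pipeline()
      stereo = pipeline.create(dai.node.StereoDepth)

      # Density-oriented preset instead of HIGH_ACCURACY
      stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
      stereo.setLeftRightCheck(True)
      stereo.setSubpixel(True)

      # Keep more (lower-confidence) disparity pixels; 255 would keep everything
      stereo.initialConfig.setConfidenceThreshold(245)

      cfg = stereo.initialConfig.get()
      cfg.postProcessing.spatialFilter.enable = True        # fills small holes
      cfg.postProcessing.spatialFilter.holeFillingRadius = 2
      cfg.postProcessing.thresholdFilter.maxRange = 4000    # keep points out to ~4 m (mm units)
      stereo.initialConfig.set(cfg)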

        5 days later

        jakaskerl
        By chance, have you updated the Python scripts from the Luxonis webpage^^? I am referring to both of the point cloud Python examples.

          jakaskerl

          My apologies, here is what I am referring to:
          "Hi gdeanrexroth
          Which device are you using (so I can fix the example)? Looks like the scaling/alignment changes the resolution to an unsupported size for the stereoDepth. Camera size is not scaled correctly it seems.

          Thanks,
          Jaka"
          This is a reply of yours to one of my responses; I believe I responded to your message incorrectly. I have an OAK-D SR camera, and I am using PoE to connect the camera to my work laptop. You mentioned adjusting the script above so it would be compatible with my device. The error also points to the inMessage line…I've modified the script numerous times, yet the results were always the same.

            gdeanrexroth
            The code in the issue I have sent above should work, no?

            Below is the full code with the visualizer hardcoded in.

            #!/usr/bin/env python3
            
            import random
            import time

            import cv2
            import depthai as dai
            import numpy as np
            import open3d as o3d
            
            
            class PointCloudVisualizer:
                def __init__(self, intrinsic_matrix, width, height):
                    self.R_camera_to_world = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]]).astype(
                        np.float64
                    )
                    self.depth_map = None
                    self.rgb = None
                    self.pcl = o3d.geometry.PointCloud()
            
                    self.pinhole_camera_intrinsic = o3d.camera.PinholeCameraIntrinsic(
                        width,
                        height,
                        intrinsic_matrix[0][0],
                        intrinsic_matrix[1][1],
                        intrinsic_matrix[0][2],
                        intrinsic_matrix[1][2],
                    )
                    self.vis = o3d.visualization.Visualizer()
                    self.vis.create_window(window_name="Point Cloud")
                    self.vis.add_geometry(self.pcl)
                    origin = o3d.geometry.TriangleMesh.create_coordinate_frame(
                        size=0.3, origin=[0, 0, 0]
                    )
                    self.vis.add_geometry(origin)
                    view_control = self.vis.get_view_control()
                    view_control.set_constant_z_far(1000)
                    self.isstarted = False
            
                def rgbd_to_projection(self, depth_map, rgb, downsample=True, remove_noise=False):
                    rgb_o3d = o3d.geometry.Image(rgb)
                    depth_o3d = o3d.geometry.Image(depth_map)
            
                    rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
                        rgb_o3d,
                        depth_o3d,
                        convert_rgb_to_intensity=(len(rgb.shape) != 3),
                        depth_trunc=20000,
                        depth_scale=1000.0,
                    )
            
                    pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
                        rgbd_image, self.pinhole_camera_intrinsic
                    )
            
                    if downsample:
                        pcd = pcd.voxel_down_sample(voxel_size=0.01)
            
                    if remove_noise:
                        pcd = pcd.remove_statistical_outlier(30, 0.1)[0]
            
                    self.pcl.points = pcd.points
                    self.pcl.colors = pcd.colors
                    self.pcl.rotate(
                        self.R_camera_to_world, center=np.array([0, 0, 0], dtype=np.float64)
                    )
                    return self.pcl
            
                def visualize_pcd(self):
                    self.vis.update_geometry(self.pcl)
                    self.vis.poll_events()
                    self.vis.update_renderer()
            
                def close_window(self):
                    self.vis.destroy_window()
            
            
            COLOR = True
            
            lrcheck = True  # Better handling for occlusions
            extended = False  # Closer-in minimum depth, disparity range is doubled
            subpixel = True  # Better accuracy for longer distance, fractional disparity 32-levels
            
            # Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7
            median = dai.StereoDepthProperties.MedianFilter.KERNEL_7x7
            
            print("StereoDepth config options:")
            print("    Left-Right check:  ", lrcheck)
            print("    Extended disparity:", extended)
            print("    Subpixel:          ", subpixel)
            print("    Median filtering:  ", median)
            
            pipeline = dai.Pipeline()
            
            colorLeft = pipeline.create(dai.node.ColorCamera)
            colorLeft.setPreviewSize(288, 288)
            colorLeft.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)
            
            colorLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
            colorLeft.setInterleaved(False)
            colorLeft.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
            
            
            # colorLeft.setIspScale(1, 2)
            
            colorRight = pipeline.create(dai.node.ColorCamera)
            colorRight.setPreviewSize(288, 288)
            colorRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
            colorRight.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)
            colorRight.setInterleaved(False)
            colorRight.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
            # colorRight.setIspScale(1, 2)
            
            
            print(f"left Isp size  = {colorLeft.getIspSize()}")
            print(f"left resolution = {colorLeft.getResolutionSize()}")
            print(f"left preview size = {colorLeft.getPreviewSize()}")
            print(f"left still size = {colorLeft.getStillSize()}")
            print(f"left video size = {colorLeft.getVideoSize()}")
            
            print("===============================================")
            
            print(f"right Isp size = {colorRight.getIspSize()}")
            print(f"right resolution = {colorRight.getResolutionSize()}")
            print(f"right preview size = {colorRight.getPreviewSize()}")
            print(f"Right still size = {colorLeft.getStillSize()}")
            print(f"right video size = {colorRight.getVideoSize()}")
            
            print("\n\n")
            
            
            stereo = pipeline.createStereoDepth()
            # stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
            stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_ACCURACY)
            stereo.initialConfig.setMedianFilter(median)
            
            # stereo.setOutputSize(288, 288)
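            # NOTE: whatever width the depth output ends up with (or is passed to
            # setOutputSize) must be a multiple of 16; that constraint is what the
            # "Disparity/depth width must be multiple of 16" error earlier in this
            # thread refers to.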
            
            
            # stereo.initialConfig.setConfidenceThreshold(255)
            
            stereo.setLeftRightCheck(lrcheck)
            stereo.setExtendedDisparity(extended)
            stereo.setSubpixel(subpixel)
            
            colorLeft.preview.link(stereo.left)
            colorRight.preview.link(stereo.right)
            
            config = stereo.initialConfig.get()
            
            ##########################################################
            
            config.postProcessing.speckleFilter.enable = False
            config.postProcessing.speckleFilter.speckleRange = 50
            config.postProcessing.temporalFilter.enable = True
            config.postProcessing.spatialFilter.enable = True
            config.postProcessing.spatialFilter.holeFillingRadius = 2
            config.postProcessing.spatialFilter.numIterations = 1
            config.postProcessing.thresholdFilter.maxRange = 2000
            config.postProcessing.decimationFilter.decimationFactor = 1
            
            ##########################################################
            
            
            stereo.initialConfig.set(config)
            
            xout_depth = pipeline.createXLinkOut()
            xout_depth.setStreamName("depth")
            stereo.depth.link(xout_depth.input)
            
            # xout_disparity = pipeline.createXLinkOut()
            # xout_disparity.setStreamName('disparity')
            # stereo.disparity.link(xout_disparity.input)
            
            xout_colorize = pipeline.createXLinkOut()
            xout_colorize.setStreamName("colorize")
            xout_rect_left = pipeline.createXLinkOut()
            xout_rect_left.setStreamName("rectified_left")
            xout_rect_right = pipeline.createXLinkOut()
            xout_rect_right.setStreamName("rectified_right")
            
            if COLOR:
                colorLeft.preview.link(xout_colorize.input)
            else:
                stereo.rectifiedRight.link(xout_colorize.input)
            
            stereo.rectifiedLeft.link(xout_rect_left.input)
            stereo.rectifiedRight.link(xout_rect_right.input)
            
            
            class HostSync:
                def __init__(self):
                    self.arrays = {}
            
                def add_msg(self, name, msg):
                    if name not in self.arrays:
                        self.arrays[name] = []
                    # Add msg to array
                    self.arrays[name].append({"msg": msg, "seq": msg.getSequenceNum()})
            
                    synced = {}
                    for name, arr in self.arrays.items():
                        for i, obj in enumerate(arr):
                            if msg.getSequenceNum() == obj["seq"]:
                                synced[name] = obj["msg"]
                                break
                    # If all 4 msgs (depth, colorize, rectified_left, rectified_right)
                    # are synced, remove old msgs and return them
                    if len(synced) == 4:
                        # Remove old msgs
                        for name, arr in self.arrays.items():
                            for i, obj in enumerate(arr):
                                if obj["seq"] < msg.getSequenceNum():
                                    arr.remove(obj)
                                else:
                                    break
                        return synced
                    return False
            
            
            file_num = 0
            with dai.Device(pipeline) as device:
            
                # device.setIrLaserDotProjectorBrightness(1200)
                qs = []
                qs.append(device.getOutputQueue("depth", maxSize=1, blocking=False))
                qs.append(device.getOutputQueue("colorize", maxSize=1, blocking=False))
                qs.append(device.getOutputQueue("rectified_left", maxSize=1, blocking=False))
                qs.append(device.getOutputQueue("rectified_right", maxSize=1, blocking=False))
            
                calibData = device.readCalibration()
                if COLOR:
                    w, h = colorLeft.getIspSize()
                    print(f"Width = {w}, Height = {h}")
                    # intrinsics = calibData.getCameraIntrinsics(dai.CameraBoardSocket.CAM_C, dai.Size2f(w, h))
                    intrinsics = calibData.getCameraIntrinsics(
                        dai.CameraBoardSocket.CAM_C, dai.Size2f(w, h)
                    )
                else:
                    w, h = colorRight.getResolutionSize()
            
                    intrinsics = calibData.getCameraIntrinsics(
                        dai.CameraBoardSocket.CAM_C, dai.Size2f(w, h)
                    )
                    print(f"testing = {w}, {h}")
            
                pcl_converter = PointCloudVisualizer(intrinsics, w, h)
            
                serial_no = device.getMxId()
                sync = HostSync()
                depth_vis, color, rect_left, rect_right = None, None, None, None
            
                while True:
                    for q in qs:
                        new_msg = q.tryGet()
                        if new_msg is not None:
                            msgs = sync.add_msg(q.getName(), new_msg)
                            if msgs:
            
                                depth = msgs["depth"].getFrame()
                                color = msgs["colorize"].getCvFrame()
                                rectified_left = msgs["rectified_left"].getCvFrame()
                                rectified_right = msgs["rectified_right"].getCvFrame()
            
                                depth_vis = cv2.normalize(
                                    depth, None, 255, 0, cv2.NORM_INF, cv2.CV_8UC1
                                )
                                depth_vis = cv2.equalizeHist(depth_vis)
                                depth_vis = cv2.applyColorMap(depth_vis, cv2.COLORMAP_HOT)
            
                                # OpenCV uses BGR
                                bgr_image = cv2.cvtColor(color, cv2.COLOR_RGB2BGR)
            
                                # Display the BGR image
                                cv2.imshow("color", bgr_image)
                                cv2.imshow("rectified_left", rectified_left)
                                cv2.imshow("rectified_right", rectified_right)
            
                                rgb = cv2.cvtColor(color, cv2.COLOR_BGR2RGB)
                                pcl_converter.rgbd_to_projection(depth, rgb)
                                pcl_converter.visualize_pcd()
            
                    key = cv2.waitKey(1)
                    if key == ord("s"):
                        timestamp = str(int(time.time()))
                        num = random.randrange(900)
                        o3d.io.write_point_cloud(
                            f"{file_num}.pcd", pcl_converter.pcl, compressed=True
                        )
                        print(f"number for save = {file_num}")
            
                        file_num += 1
            
                    elif key == ord("q"):
                        break

              jakaskerl
              Thank you!
              That did work; however, my depthai viewer now works properly. I made another post about it.
              The code you sent me is/was sparse, but I am changing some of the settings to match my desired results.
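
              If the cloud still looks thin after the stereo changes, part of the sparsity may be host-side: the visualizer above voxel-downsamples every frame and the threshold filter clips everything beyond 2 m. A sketch of the kind of adjustments meant here, using the names from the script above (values are illustrative):

              # Keep every projected point instead of voxel-downsampling it away
              pcl_converter.rgbd_to_projection(depth, rgb, downsample=False, remove_noise=False)

              # Or, if downsampling is still wanted, use a finer voxel inside
              # PointCloudVisualizer.rgbd_to_projection:
              #     pcd = pcd.voxel_down_sample(voxel_size=0.005)

              # And widen the device-side range cut (millimetres):
              #     config.postProcessing.thresholdFilter.maxRange = 4000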