I am using the new OAK-D SR with PoE connectivity to generate point clouds. Since I am on a corporate network, I cannot fully download or access most of the things needed for the DepthAI Viewer or the GitHub repos (the requirements files break my Anaconda environment or simply never work). Has anyone been able to generate point clouds using plain Python scripts? If so, what method did you use, and how? Most scripts I have run so far are extremely sparse and laggy, while others simply do not run. The point cloud examples from this site: https://docs.luxonis.com/software/depthai/examples/pointcloud_visualization/ all give me the same error:
"[14442C10A1063AD100] [169.254.1.222] [6.903] [StereoDepth(3)] [error] Disparity/depth width must be multiple of 16, but RGB camera width is 427. Set output size explicitly using 'setOutputSize(width, height)'.

[14442C10A1063AD100] [169.254.1.222] [1723831720.844] [host] [warning] Monitor thread (device: 14442C10A1063AD100 [169.254.1.222]) - ping was missed, closing the device connection

[14442C10A1063AD100] [169.254.1.222] [1723831730.116] [host] [warning] Device crashed, but no crash dump could be extracted.

Traceback (most recent call last):
  File "s:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\luxonis_point_clouds\second.py", line 342, in <module>
    inMessage = q.get()
                ^^^^^^^
RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'out' (X_LINK_ERROR)'"

I have never changed any of the code, yet I am getting these errors.
Any suggestions are welcome. I am working on a project at work that will utilize the camera to generate point clouds. The generated point clouds will be used to show the dimensions of materials we are placing somewhere. That is all. I have already written scripts that use the camera to capture images and save them to a folder. All that is left is to generate point clouds that are neither noisy nor sparse.
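The first error in the log points at the fix: the RGB-aligned frame width (427 px in the error message) is not divisible by 16, which the StereoDepth node requires, so the output size has to be set explicitly. A minimal sketch of the idea, with the depthai call shown the way the error message suggests (the 427/240 values are just the ones from this log, not a recommendation):

```python
def floor_to_multiple(value, base=16):
    """Largest multiple of `base` that does not exceed `value`."""
    return (value // base) * base

# In the pipeline, before connecting to the device (depthai v2 API):
#   stereo = pipeline.create(dai.node.StereoDepth)
#   stereo.setOutputSize(floor_to_multiple(427), 240)
print(floor_to_multiple(427))  # 416
```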

    5 days later

    jakaskerl Thank you for referring me to the post. I have been able to play around with it, and it does generate the point clouds in color. I am still experimenting with the configuration to see what generates the best results. You posted another link in a reply to me, and I checked it out to see which configuration fits best. One of my two issues is the blurriness, and the other is the amount of point clouds. I have changed the median filter and the speckle filter settings; they had some effect, but nothing too drastic.

    I have tried the calibration process, and it worked. However, the results above are from the file you referred me to.

      gdeanrexroth
      Great to see the results!

      gdeanrexroth one of two issues are the blurriness

      Blurriness on RGB or mono cameras? That could be a lens defect.

      gdeanrexroth and the other is the amount of point clouds.

      Amount of points? Increase the confidence threshold and set preset to HIGH_DENSITY.

      Thanks,
      Jaka
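For reference, the two suggestions above map onto the depthai v2 API roughly like this (a sketch, assuming `stereo` is the StereoDepth node from the pipeline; on OAK devices the confidence value runs 0-255, and a higher threshold keeps more, lower-confidence pixels):

```python
# Sketch: denser output at the cost of more noise (assumes depthai v2
# and an existing StereoDepth node named `stereo`).
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
stereo.initialConfig.setConfidenceThreshold(245)  # 0-255; higher keeps more points
```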

        jakaskerl
        I will keep changing the configuration settings to see what gives me the best results. I just changed the settings to what you recommended, but it made only slight changes. Here is the first half of the code I am working with; the remaining half is unchanged.

        The lines that are grayed out are the ones I have changed to get the best results. This is the most progress I have made so far with the camera. My remaining goal is simply to make the above screenshot clearer, as my manager wants to use this tool to display a part or material. From there, the target is to view the part without having to move either the camera or the part.
        import random
        import time
        from sys import maxsize

        import cv2
        import depthai as dai
        import open3d as o3d

        COLOR = True

        lrcheck = True    # Better handling for occlusions
        extended = True   # Closer-in minimum depth; disparity range is doubled
        subpixel = True   # Better accuracy at longer distance; fractional disparity, 32 levels
        # Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7
        median = dai.StereoDepthProperties.MedianFilter.KERNEL_7x7

        print("StereoDepth config options:")
        print(" Left-Right check: ", lrcheck)
        print(" Extended disparity:", extended)
        print(" Subpixel: ", subpixel)
        print(" Median filtering: ", median)

        pipeline = dai.Pipeline()

        colorLeft = pipeline.create(dai.node.ColorCamera)
        colorLeft.setPreviewSize(1280, 720)  # 288,288
        colorLeft.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)
        colorLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
        colorLeft.setInterleaved(False)
        colorLeft.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
        # colorLeft.setIspScale(1, 2)

        colorRight = pipeline.create(dai.node.ColorCamera)
        colorRight.setPreviewSize(1280, 720)  # 288,288 -- 640,480 -- 1280,720
        colorRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
        colorRight.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)
        colorRight.setInterleaved(False)
        colorRight.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
        # colorRight.setIspScale(1, 2)

        print(f'left Isp size = {colorLeft.getIspSize()}')
        print(f'left resolution = {colorLeft.getResolutionSize()}')
        print(f'left preview size = {colorLeft.getPreviewSize()}')
        print(f'left still size = {colorLeft.getStillSize()}')
        print(f'left video size = {colorLeft.getVideoSize()}')
        print('===============================================')
        print(f'right Isp size = {colorRight.getIspSize()}')
        print(f'right resolution = {colorRight.getResolutionSize()}')
        print(f'right preview size = {colorRight.getPreviewSize()}')
        print(f'right still size = {colorRight.getStillSize()}')  # was colorLeft by mistake
        print(f'right video size = {colorRight.getVideoSize()}')
        print("\n\n")

        stereo = pipeline.createStereoDepth()
        stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
        # stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_ACCURACY)
        stereo.initialConfig.setMedianFilter(median)
        # stereo.setOutputSize(288, 288)
        stereo.initialConfig.setConfidenceThreshold(200)
        stereo.setLeftRightCheck(lrcheck)
        stereo.setExtendedDisparity(extended)
        stereo.setSubpixel(subpixel)

        colorLeft.preview.link(stereo.left)
        colorRight.preview.link(stereo.right)

        config = stereo.initialConfig.get()
        ##########################################################
        config.postProcessing.speckleFilter.enable = True
        # set line 295 to false
        config.postProcessing.speckleFilter.speckleRange = 50
        config.postProcessing.temporalFilter.enable = True
        config.postProcessing.spatialFilter.enable = True
        # set line 298 to true
        config.postProcessing.spatialFilter.holeFillingRadius = 2
        config.postProcessing.spatialFilter.numIterations = 1
        config.postProcessing.thresholdFilter.maxRange = 2000
        config.postProcessing.decimationFilter.decimationFactor = 1
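Once this pipeline is running, the depth frames still have to be back-projected into 3-D points. A minimal sketch of that step in pure NumPy (so it runs without a device; the fx/fy/cx/cy values below are placeholders, and on a real device the intrinsics should come from `device.readCalibration().getCameraIntrinsics(...)`):

```python
import numpy as np

def depth_to_points(depth_mm, fx, fy, cx, cy, max_range_mm=2000):
    """Back-project a uint16 depth map (millimetres) into an Nx3 array of
    3-D points in metres, dropping invalid or out-of-range pixels."""
    h, w = depth_mm.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0          # mm -> m
    valid = (depth_mm > 0) & (depth_mm <= max_range_mm)
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Synthetic check: a flat plane 1 m away yields one point per pixel at z = 1
depth = np.full((4, 4), 1000, dtype=np.uint16)
pts = depth_to_points(depth, fx=400.0, fy=400.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

The resulting array can then be wrapped with `o3d.geometry.PointCloud()` and `o3d.utility.Vector3dVector(pts)` for visualization, as in the Luxonis examples.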