I am using the OAK-D-SR-PoE camera to capture point cloud and PCD data. I have written a script that combines the depth/PCD images I captured into one point cloud. However, from the top view I am seeing layers; I would expect to see the points of the objects from the top and the side. While using the depthai-viewer to display a point cloud I did not experience this issue, but using other point cloud files gave me a similar result. My goal is to display the point cloud from all angles. The front view is sparse, but if I zoom in I am somewhat able to see the object. I am trying to do the same from the top and the sides.

The screenshots are what I am referring to. If I need to provide the scripts that I used, I will do so.

    gdeanrexroth
    I adjusted my focal length for both fx and fy. Both were arbitrarily set to 800, as most examples I saw used that value; however, I lowered them to 235.
    # Camera intrinsic parameters (adjust based on your camera calibration)
    fx = 235  # Focal length in pixels (x-axis)
    fy = 235  # Focal length in pixels (y-axis)
    cx = 400  # Principal point (x-axis)
    cy = 400  # Principal point (y-axis)
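    (As a sanity check on those numbers: the focal length in pixels can be approximated from the horizontal field of view and the image width. A quick sketch; the hfov_deg value below is just a placeholder, not this camera's actual spec:)

    import math

    width_px = 800        # image width in pixels
    hfov_deg = 80.0       # placeholder HFOV; use the value reported for your sensor
    fx_approx = width_px / (2 * math.tan(math.radians(hfov_deg) / 2))
    print(fx_approx)      # ~477 px for these example numbers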

    Here is the updated result.

    I am still reading more documentation about this, but correct me if I am wrong: is the focal length for this particular camera found by calibrating it?

      gdeanrexroth
      Yes, the focal lengths are part of the intrinsics, which are computed during calibration. When accessing the values, make sure you use useSpec=False so the actual calculated values get used instead of the datasheet-specified ones.
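      For reference, a minimal sketch of reading those values off the device (socket names and where exactly the useSpec flag appears may differ between depthai versions, so treat this as a sketch):

      import depthai as dai

      with dai.Device() as device:
          calib = device.readCalibration()

          # 3x3 intrinsic matrix (fx, fy, cx, cy) for the chosen sensor, scaled to 1280x800
          M = calib.getCameraIntrinsics(dai.CameraBoardSocket.LEFT, 1280, 800)
          print("Intrinsics:", M)

          # FOV derived from the calibrated values rather than the datasheet spec
          print("HFOV:", calib.getFov(dai.CameraBoardSocket.LEFT, useSpec=False))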

      Thanks
      Jaka

        jakaskerl
        That makes sense based on the documentation. Should I refer to this link: Calibration (luxonis.com)?
        My other question is about ToF vs. StereoDepth. I am still using the scripts I created earlier to capture images in PCD format, and then passing those images through a Python script that combines them and generates one point cloud. However, as I am getting more familiar with the nodes and pipelines: for generating point cloud data, is it better for me to use ToF, StereoDepth, or both? Currently my script is set up like this:
        import cv2
        import depthai
        import numpy as np
        import open3d as o3d

        pipeline = depthai.Pipeline()

        # Mono cameras
        mono_left = pipeline.createMonoCamera()
        mono_left.setBoardSocket(depthai.CameraBoardSocket.LEFT)
        mono_left.setResolution(depthai.MonoCameraProperties.SensorResolution.THE_800_P)

        mono_right = pipeline.createMonoCamera()
        mono_right.setBoardSocket(depthai.CameraBoardSocket.RIGHT)
        mono_right.setResolution(depthai.MonoCameraProperties.SensorResolution.THE_800_P)

        # Stereo depth
        stereo = pipeline.createStereoDepth()
        stereo.setOutputDepth(True)
        stereo.setConfidenceThreshold(200)
        mono_left.out.link(stereo.left)
        mono_right.out.link(stereo.right)

        # XLinkOut
        xout_depth = pipeline.createXLinkOut()
        xout_depth.setStreamName("depth")
        stereo.depth.link(xout_depth.input)

        # Connect to the device and start the pipeline
        with depthai.Device(pipeline) as device:
            q_depth = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
            capture_count = 0

            while True:
                in_depth = q_depth.get()
                if in_depth is not None:
                    depth_frame = in_depth.getFrame()
                    depth_frame_colored = cv2.applyColorMap(cv2.convertScaleAbs(depth_frame, alpha=0.03), cv2.COLORMAP_JET)
                    cv2.imshow("Depth", depth_frame_colored)

                    k = cv2.waitKey(1) & 0xFF
                    if k == 27:  # Press esc to close window
                        print('Window closing!')
                        break
                    elif k == 99:  # Press c to capture point cloud
                        capture_count += 1

                        # Create a point cloud from the depth data
                        depth_array = np.asanyarray(depth_frame)
                        height, width = depth_array.shape

                        # Convert depth image to point cloud (pinhole back-projection)
                        fx, fy = 800, 800  # Focal length, adjust based on your camera calibration
                        cx, cy = width / 2, height / 2  # Principal point
                        points = []
                        for v in range(height):
                            for u in range(width):
                                z = depth_array[v, u] / 1000.0  # depth is in mm, convert to metres
                                if z > 0:
                                    x = (u - cx) * z / fx
                                    y = (v - cy) * z / fy
                                    points.append([x, y, z])

                        pcd = o3d.geometry.PointCloud()
                        pcd.points = o3d.utility.Vector3dVector(np.array(points))
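        (Side note: the per-pixel Python loop above gets slow at 800P. The same back-projection can be done vectorized with NumPy, using the same variables as above and still assuming the depth frame is in millimetres:)

        us, vs = np.meshgrid(np.arange(width), np.arange(height))  # pixel coordinate grids
        z = depth_array / 1000.0                                    # mm -> m
        valid = z > 0                                               # drop invalid (zero) depth
        x = (us - cx) * z / fx
        y = (vs - cy) * z / fy
        pts = np.stack((x[valid], y[valid], z[valid]), axis=-1)

        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(pts)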

          gdeanrexroth That makes sense based on the documentation. Should I refer to this link: Calibration (luxonis.com)?

          This link is for ToF extrinsics calibration.

          gdeanrexroth is it better for me to use ToF or StereoDepth or both?

          It depends; I'd say both, if you can manage to merge the point clouds and have enough compute power on the host.
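          Roughly, the host-side merge could be done with Open3D. A sketch (pcd_tof and pcd_stereo are placeholders for the two clouds, both in metres; the ICP distance and voxel size are just example values):

          import open3d as o3d

          # Optionally refine alignment with point-to-point ICP before merging
          reg = o3d.pipelines.registration.registration_icp(pcd_tof, pcd_stereo, 0.02)  # 2 cm max correspondence distance
          pcd_tof.transform(reg.transformation)

          merged = pcd_tof + pcd_stereo
          merged = merged.voxel_down_sample(voxel_size=0.005)  # thin out duplicated points
          o3d.visualization.draw_geometries([merged])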

          gdeanrexroth stereo.depth.link(xout_depth.input)

          Why not use a pointcloud node? https://docs.luxonis.com/software/depthai/examples/pointcloud_visualization/
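          Rough shape of that pipeline, following the linked example (node and message names are from recent depthai releases and may differ in your version, so treat this as a sketch):

          import depthai as dai
          import numpy as np
          import open3d as o3d

          pipeline = dai.Pipeline()

          monoLeft = pipeline.create(dai.node.MonoCamera)
          monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
          monoRight = pipeline.create(dai.node.MonoCamera)
          monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)

          stereo = pipeline.create(dai.node.StereoDepth)
          monoLeft.out.link(stereo.left)
          monoRight.out.link(stereo.right)

          # On-device point cloud generation using the calibrated intrinsics
          pcl = pipeline.create(dai.node.PointCloud)
          stereo.depth.link(pcl.inputDepth)

          xout = pipeline.create(dai.node.XLinkOut)
          xout.setStreamName("pcl")
          pcl.outputPointCloud.link(xout.input)

          with dai.Device(pipeline) as device:
              q = device.getOutputQueue("pcl", maxSize=4, blocking=False)
              msg = q.get()
              pts = np.asarray(msg.getPoints(), dtype=np.float64) / 1000.0  # mm -> m, if the node outputs millimetres
              pcd = o3d.geometry.PointCloud()
              pcd.points = o3d.utility.Vector3dVector(pts)
              o3d.visualization.draw_geometries([pcd])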

          gdeanrexroth I was researching intrinsic information and came across this link: https://docs.oakchina.cn/projects/api/samples/calibration/calibration_reader.html

          This is the updated version of that example: https://docs.luxonis.com/software/depthai/examples/calibration_reader/

          gdeanrexroth And to clarify, my camera is not yet calibrated.

          The camera should be calibrated in the factory, since you are using a compact device (SR PoE). The intrinsics should be correct.

          Thanks,
          Jaka