jakaskerl
Ahh okay, I misunderstood. I will try both methods: adding the node, or just converting depth to a point cloud. The JSON calibration file was created after the code you gave me ran. Do I take the intrinsic parameters from there, or do I have to use the parameters from calibration.py 🙂? Below is an example script that I am using to display my depth color image and my .npy file, which then converts the depth image into a point cloud. I am still unfamiliar with .npy files; is it necessary for me to include them in this script? (There is a quick sanity check of the .npy file right after the script.) Here is the script:

import matplotlib.image as mpimg
import re
import open3d as o3d
import numpy as np
import matplotlib.pyplot as plt

# Define camera intrinsic parameters for both RGB and TOF
RGB_INTRINSICS = np.array([
    [842.6837768554688, 0.0, 673.1340942382812],
    [0.0, 851.867431640625, 412.4818115234375],
    [0.0, 0.0, 1.0]])

TOF_INTRINSICS = np.array([
    [494.3519287109375, 0.0, 321.8478088378906],
    [0.0, 499.4835205078125, 258.3044128417969],
    [0.0, 0.0, 1.0]
])

# This is a special function used for reading NYU pgm format
# as it is written in big endian byte order.
def read_nyu_pgm(filename, byteorder='>'):
    with open(filename, 'rb') as f:
        buffer = f.read()
    try:
        header, width, height, maxval = re.search(
            b"(^P5\\s(?:\\s*#.*[\\r\\n])*"
            b"(\\d+)\\s(?:\\s*#.*[\\r\\n])*"
            b"(\\d+)\\s(?:\\s*#.*[\\r\\n])*"
            b"(\\d+)\\s(?:\\s*#.*[\\r\\n]\\s)*)", buffer).groups()
    except AttributeError:
        raise ValueError("Not a raw PGM file: '%s'" % filename)
    img = np.frombuffer(buffer,
                        dtype=byteorder + 'u2',
                        count=int(width) * int(height),
                        offset=len(header)).reshape((int(height), int(width)))
    img_out = img.astype('u2')
    return img_out

print("Read dataset")

# Use your specified paths for RGB and depth images
color_raw = o3d.io.read_image(r"S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\data\20241023_132728\rgb_img_000005.png")
depth_array = np.load(r"S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\data\20241023_132728\depth_stereo_000004.npy")

# Convert numpy depth array to Open3D image
depth_raw = o3d.geometry.Image(depth_array.astype(np.float32))
color = o3d.geometry.Image(np.asarray(color_raw))

# Create an Open3D RGBD image using your data
rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color_raw, depth_raw, convert_rgb_to_intensity=False)
print(rgbd_image)

plt.subplot(1, 2, 1)
plt.title('RGB image')
plt.imshow(rgbd_image.color)
plt.subplot(1, 2, 2)
plt.title('Depth image')
plt.imshow(rgbd_image.depth)
plt.show()

# Create point cloud from RGBD image using custom camera intrinsics
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=1280,
    height=800,
    fx=RGB_INTRINSICS[0, 0],
    fy=RGB_INTRINSICS[1, 1],
    cx=RGB_INTRINSICS[0, 2],
    cy=RGB_INTRINSICS[1, 2]
)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd_image,
    intrinsic)

# Flip it, otherwise the pointcloud will be upside down
pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])

o3d.visualization.draw_geometries([pcd])
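
A quick sanity check like the following (plain NumPy, reusing the depth path from the script above) shows what the .npy actually holds; as far as I understand, StereoDepth depth frames are saved as uint16 values in millimeters, so the shape, dtype, and range should reflect that:

import numpy as np

depth_array = np.load(r"S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\data\20241023_132728\depth_stereo_000004.npy")
print("shape:", depth_array.shape)              # expected (height, width), e.g. (800, 1280) for 800P depth
print("dtype:", depth_array.dtype)              # uint16 for StereoDepth output
print("depth range (mm):", depth_array.min(), depth_array.max())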

Outcome of the script that creates the point cloud (and yes, this was before trying the PointCloud node). The intrinsics are in, but I am unsure whether they are placed in the right area or format. Thank you for your reply 🙂 :

10/25/2024 8:17AM
This is my attempt to insert the PointCloud node while subpixel is set to True:

# Create StereoDepth node
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
stereo.initialConfig.setMedianFilter(dai.MedianFilter.MEDIAN_OFF)
stereo.setLeftRightCheck(True)
stereo.setExtendedDisparity(False)
stereo.setSubpixel(True)  # this was False at first but Jaka recommended True to get a smoother point cloud
stereo.setDepthAlign(ALIGN_SOCKET)

# Link the ISP outputs of the ColorCameras to the StereoDepth node
colorLeft.isp.link(stereo.left)
colorRight.isp.link(stereo.right)

# Create PointCloud node and stream its output over XLink
pc_node = pipeline.create(dai.node.PointCloud)
stereo.depth.link(pc_node.depth)
pc_out = pipeline.create(dai.node.XLinkOut)
pc_out.setStreamName("pc_out")
pc_node.out.link(pc_out.input)

# Create Sync node
sync = pipeline.create(dai.node.Sync)
sync.setSyncThreshold(timedelta(milliseconds=50))

# Link outputs to Sync node with specified input names
tof.depth.link(sync.inputs["depth_tof"])
stereo.depth.link(sync.inputs["depth_stereo"])
stereo.rectifiedLeft.link(sync.inputs["left_img"])
stereo.rectifiedRight.link(sync.inputs["right_img"])
colorLeft.isp.link(sync.inputs["rgb_img"])  # Corrected line

# Create XLinkOut node
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("sync_out")
sync.out.link(xout.input)

return pipeline
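
On the host side, reading that stream back might look roughly like this (a sketch: it assumes the "pc_out" stream above, that pipeline is the one returned by this function, and that getPoints() on the PointCloudData message returns an Nx3 array in millimeters; Open3D is only used for display):

import depthai as dai
import numpy as np
import open3d as o3d

with dai.Device(pipeline) as device:
    q_pc = device.getOutputQueue(name="pc_out", maxSize=4, blocking=False)
    pc_msg = q_pc.get()                                  # dai.PointCloudData
    points = pc_msg.getPoints().astype(np.float64)       # (N, 3) array, in mm
    points = points[points[:, 2] != 0]                   # drop zero-depth points

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points / 1000.0)  # mm -> m for viewing
    o3d.visualization.draw_geometries([pcd])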

    gdeanrexroth Do I take the intrinsic parameters from there, or do I have to use the parameters from calibration.py 🙂?

    Take the intrinsics from the EEPROM calibration.
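
    Roughly like this (just a sketch; the sockets and resolutions are examples for your layout, ToF on CAM_A and color cameras on CAM_B/CAM_C, so match them to whatever frames you actually export):

    import depthai as dai
    import numpy as np

    # Sketch: read the intrinsics stored in the device EEPROM
    with dai.Device() as device:
        calibData = device.readCalibration()
        # Intrinsics are scaled to the resolution you pass in, so use the
        # same width/height as the frames you save to disk.
        tof_intrinsics = np.array(calibData.getCameraIntrinsics(dai.CameraBoardSocket.CAM_A, 640, 480))
        left_color_intrinsics = np.array(calibData.getCameraIntrinsics(dai.CameraBoardSocket.CAM_B, 1280, 800))
        print("ToF (CAM_A):\n", tof_intrinsics)
        print("Left color (CAM_B):\n", left_color_intrinsics)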

    Thanks,
    Jaka

      jakaskerl
      Okay, I have been using those numbers within the file that creates the point cloud. I am still trying to use ICP to stitch the point clouds (there is an ICP sketch right after the cropping script below). My results vary, but overall they are still similar to before.

      I am using a simple script that just displays the .ply file, which allows me to crop it. Here is the version of the script that I am using:
      import numpy as np
      import open3d as o3d

      def demo_crop_geometry():
          print("Demo for manual geometry cropping")
          print("1) Press 'Y' twice to align geometry with negative direction of y-axis")
          print("2) Press 'K' to lock screen and to switch to selection mode")
          print("3) Drag for rectangle selection,")
          print("   or use ctrl + left click for polygon selection")
          print("4) Press 'C' to get a selected geometry")
          print("5) Press 'S' to save the selected geometry")
          print("6) Press 'F' to switch to freeview mode")

          # Load a single point cloud file
          # pcd = o3d.io.read_point_cloud(r'S:\\DEPT\\SVM4\\Shared\\Crossfunctional_Work\\Projects\\DepthCameras\\LuxonisDepthAI\\test_run\\Image and PointCloud of Material For AutoStore\\capture_1_2024_10_21_12_06_01\\tof_pointcloud_1.ply')
          # pcd = o3d.io.read_point_cloud(r"S:\\DEPT\\SVM4\\Shared\\Crossfunctional_Work\\Projects\\DepthCameras\\LuxonisDepthAI\\test_run\\GT_CUP\\MANUALLY_MODIFIED\\tof_pointcloud_5.ply")
          pcd = o3d.io.read_point_cloud(r'S:\\DEPT\\SVM4\\Shared\\Crossfunctional_Work\\Projects\\DepthCameras\\LuxonisDepthAI\\test_run\\GT_CUP\\MANUALLY_MODIFIED\\cropped_cup_5.ply')

          # Visualize and edit the point cloud
          o3d.visualization.draw_geometries_with_editing([pcd])

      if __name__ == "__main__":
          demo_crop_geometry()
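
      For the ICP stitching mentioned above, a minimal Open3D sketch would be along these lines (the file paths and the 20 mm correspondence threshold are placeholders; it uses Open3D's point-to-point ICP and assumes both clouds are in millimeters and already roughly overlapping):

      import numpy as np
      import open3d as o3d

      def stitch_with_icp(source_path, target_path, threshold_mm=20.0):
          source = o3d.io.read_point_cloud(source_path)
          target = o3d.io.read_point_cloud(target_path)
          trans_init = np.eye(4)  # rough initial guess; identity if the clouds are already close
          reg = o3d.pipelines.registration.registration_icp(
              source, target, threshold_mm, trans_init,
              o3d.pipelines.registration.TransformationEstimationPointToPoint())
          print(reg)  # fitness and inlier RMSE
          return source.transform(reg.transformation) + target  # merged cloud

      # merged = stitch_with_icp("cropped_cup_5.ply", "cropped_cup_6.ply")  # placeholder file names
      # o3d.visualization.draw_geometries([merged])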

      @jakaskerl
      Overall, this is the end result my team and I are looking for with this project.

      I took this example point cloud from the ICP registration website, but we are still struggling to get a result that resembles it. Even at a bare-minimum level, I am struggling to display a point cloud that resembles the object in front of the camera.

      A minor issue that I am experiencing is with the numba library. Every so often I will quit the code so I can view the PNGs and .npy files. When I run the code again, it displays this error message:
      "2.27.0.0

      Traceback (most recent call last):

      File "s:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\tree_s.py", line 9, in <module>

      from numba import jit, prange

      ModuleNotFoundError: No module named 'numba'"
      But whenever it does work, it prints the depthai version that I currently have installed, 2.28.0.0, and runs the code successfully. I am sure this conflict is on my end, but I am bringing it to your attention.
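
      For what it's worth, a quick check like this (standard library plus the two imports, nothing camera-specific) shows which interpreter is actually running the script and whether numba is visible to it:

      import sys
      print(sys.executable)        # interpreter actually running the script
      try:
          import numba
          print("numba", numba.__version__)
      except ModuleNotFoundError:
          print("numba not installed in this environment")
      import depthai
      print("depthai", depthai.__version__)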

      EDIT
      My last struggle is adding the PointCloud node to the script that you gave me. I followed the instructions listed on the website; however, I am struggling to enable it within the script correctly.

      Here is another point cloud displayed in the MeshLab application:

        gdeanrexroth

        gdeanrexroth My last struggle is adding the PointCloud node to the script that you gave me. I followed the instructions listed on the website; however, I am struggling to enable it within the script correctly.

        I'm not entirely sure what you mean. How do you want to enable it and where should it show up?

        Thanks,
        Jaka

          jakaskerl
          Maybe I misunderstood your previous response. You mentioned that I could add the PointCloud node into the updated undistortion script to visualize it. I am attempting to do that, but I am unsuccessful in creating the PointCloud node.

          On another note, I have tried to use the DepthAI Viewer to see if the point cloud looks any different. However, it updates or reinstalls every time I run the command "python -m depthai_viewer". It installs successfully, yet it does not recognize the camera. I will try to update my depthai-viewer and follow the instructions again, but it was working two months ago; I haven't used it since then.

          gdeanrexroth
          @jakaskerl
          And to come back to this post: I was unable to get the point clouds from this script to actually work. More specifically, the script ran and I was still able to capture depth images and .npy files, but they produce point clouds that are layered. As someone pointed out, the example videos that you displayed also show layers within the point cloud, and I saw a comment of yours on another thread concluding that there isn't much more that can be done to make it smoother. However, once I convert the depth images and .npy files to a point cloud, it is extremely layered. The script that creates the point cloud has the RGB and ToF intrinsics in it; still, it displays in an unusual fashion.

          jakaskerl
          So I was able to stream only the rgb-depth alignment window, and these were the results. I mentioned before that I was going to attempt to add the PointCloud node like you suggested; I am still working on it.

          For the results below, I commented out everything shown in the screenshot. Can you or your team try this method? I think the alignment appeared frozen due to lag; the multiple streams slowed down the live feed, since there was a lot to process.

            gdeanrexroth
            You can remove the left and right (and RGB) streams if you don't need to view them. This should reduce the lag.
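
            For example, trimming the Sync links in create_pipeline down to the ones used for the alignment would look roughly like this (a sketch against your own pipeline code, keeping only the two inputs the host actually needs):

            # Keep only the streams used in the alignment calculation
            tof.depth.link(sync.inputs["depth_tof"])
            colorLeft.isp.link(sync.inputs["rgb_img"])
            # stereo.depth.link(sync.inputs["depth_stereo"])        # disable if not viewed/saved
            # stereo.rectifiedLeft.link(sync.inputs["left_img"])    # disable if not viewed/saved
            # stereo.rectifiedRight.link(sync.inputs["right_img"])  # disable if not viewed/saved

            On the host side, the "if len(frames) == 5:" check then has to match the number of inputs you keep (2 in this sketch).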

            If ToF-depth alignment works and aligning to RGB doesn't, there is something wrong with the intrinsics/extrinsics of the RGB camera.
            Can you also send the full code (not a screenshot) so I can check the whole thing you are using.

            Thanks,
            Jaka

              jakaskerl
              Here is the full code. I am debating whether to calibrate the camera again, because my issue could just be bad calibration. However, I followed the calibration instructions thoroughly and got the expected results once it was done:
              import os
              import time
              import json
              import cv2
              import depthai as dai
              import numpy as np
              from datetime import timedelta
              from numba import jit, prange

              # Function definitions remain the same as before
              # ...

              @jit(nopython=True, parallel=True)
              def reprojection(depth_image, depth_camera_intrinsics, camera_extrinsics, color_camera_intrinsics, depth_image_show=None):
                  height = len(depth_image)
                  width = len(depth_image[0])
                  if depth_image_show is not None:
                      image = np.zeros((height, width), np.uint8)
                  else:
                      image = np.zeros((height, width), np.uint16)
                  if camera_extrinsics[0][3] > 0:
                      sign = 1
                  else:
                      sign = -1
                  for i in prange(0, height):
                      for j in prange(0, width):
                          if sign == 1:
                              # Reverse the order of the pixels
                              j = width - j - 1
                          d = depth_image[i][j]
                          if d == 0:
                              continue
                          # Convert pixel to 3d point
                          x = (j - depth_camera_intrinsics[0][2]) * d / depth_camera_intrinsics[0][0]
                          y = (i - depth_camera_intrinsics[1][2]) * d / depth_camera_intrinsics[1][1]
                          z = d
                          # Move the point to the camera frame
                          x1 = camera_extrinsics[0][0] * x + camera_extrinsics[0][1] * y + camera_extrinsics[0][2] * z + camera_extrinsics[0][3]
                          y1 = camera_extrinsics[1][0] * x + camera_extrinsics[1][1] * y + camera_extrinsics[1][2] * z + camera_extrinsics[1][3]
                          z1 = camera_extrinsics[2][0] * x + camera_extrinsics[2][1] * y + camera_extrinsics[2][2] * z + camera_extrinsics[2][3]
                          u = color_camera_intrinsics[0][0] * (x1 / z1) + color_camera_intrinsics[0][2]
                          v = color_camera_intrinsics[1][1] * (y1 / z1) + color_camera_intrinsics[1][2]
                          int_u = round(u)
                          int_v = round(v)
                          if int_v != i:
                              print(f'v -> {v} and i -> {i}')  # This should never be printed
                          if int_u >= 0 and int_u < (len(image[0]) - 1) and int_v >= 0 and int_v < len(image):
                              if depth_image_show is not None:
                                  image[int_v][int_u] = depth_image_show[i][j][0]
                                  image[int_v][int_u + sign] = depth_image_show[i][j][0]
                              else:
                                  image[int_v][int_u] = z1
                                  image[int_v][int_u + sign] = z1
                  return image

              def create_pipeline(ALIGN_SOCKET):
                  pipeline = dai.Pipeline()

                  # Create ToF node
                  tof = pipeline.create(dai.node.ToF)
                  tof.setNumShaves(4)

                  # Configure the ToF node
                  tofConfig = tof.initialConfig.get()
                  tofConfig.enableFPPNCorrection = True
                  tofConfig.enableOpticalCorrection = True
                  tofConfig.enableWiggleCorrection = True
                  tofConfig.enableTemperatureCorrection = True
                  tofConfig.phaseUnwrappingLevel = 4
                  tof.initialConfig.set(tofConfig)

                  # Input for ToF configuration
                  xinTofConfig = pipeline.create(dai.node.XLinkIn)
                  xinTofConfig.setStreamName("tofConfig")
                  xinTofConfig.out.link(tof.inputConfig)

                  # Create ToF camera node
                  cam_tof = pipeline.create(dai.node.Camera)
                  cam_tof.setFps(20)
                  cam_tof.setImageOrientation(dai.CameraImageOrientation.ROTATE_180_DEG)
                  cam_tof.setBoardSocket(dai.CameraBoardSocket.CAM_A)
                  cam_tof.raw.link(tof.input)

                  # Create ColorCamera nodes for stereo pair
                  colorLeft = pipeline.create(dai.node.ColorCamera)
                  colorRight = pipeline.create(dai.node.ColorCamera)

                  # Configure ColorCameras
                  colorLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
                  colorRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
                  colorLeft.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
                  colorRight.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
                  colorLeft.setFps(20)
                  colorRight.setFps(20)
                  colorLeft.setInterleaved(False)
                  colorRight.setInterleaved(False)
                  colorLeft.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
                  colorRight.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
                  # colorLeft.setImageOrientation(dai.CameraImageOrientation.NORMAL)
                  # colorRight.setImageOrientation(dai.CameraImageOrientation.NORMAL)
                  # colorLeft.setIspScale(2, 2)  # Corrected line
                  # colorRight.setIspScale(2, 1)  # Corrected line

                  # Create StereoDepth node
                  stereo = pipeline.create(dai.node.StereoDepth)
                  stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
                  stereo.initialConfig.setMedianFilter(dai.MedianFilter.MEDIAN_OFF)
                  stereo.setLeftRightCheck(True)
                  stereo.setExtendedDisparity(False)
                  stereo.setSubpixel(False)
                  stereo.setDepthAlign(ALIGN_SOCKET)

                  # Link the ISP outputs of the ColorCameras to the StereoDepth node
                  colorLeft.isp.link(stereo.left)
                  colorRight.isp.link(stereo.right)

                  # Create Sync node
                  sync = pipeline.create(dai.node.Sync)
                  sync.setSyncThreshold(timedelta(milliseconds=50))

                  # Link outputs to Sync node with specified input names
                  tof.depth.link(sync.inputs["depth_tof"])
                  stereo.depth.link(sync.inputs["depth_stereo"])
                  stereo.rectifiedLeft.link(sync.inputs["left_img"])
                  stereo.rectifiedRight.link(sync.inputs["right_img"])
                  colorLeft.isp.link(sync.inputs["rgb_img"])  # Corrected line

                  # Create XLinkOut node
                  xout = pipeline.create(dai.node.XLinkOut)
                  xout.setStreamName("sync_out")
                  sync.out.link(xout.input)

                  return pipeline

              def get_calib(RGB_SOCKET, ALIGN_SOCKET, depthSize, rgbSize):
                  calibData = device.readCalibration2()
                  M1 = np.array(calibData.getCameraIntrinsics(ALIGN_SOCKET, *depthSize))
                  D1 = np.array(calibData.getDistortionCoefficients(ALIGN_SOCKET))
                  M2 = np.array(calibData.getCameraIntrinsics(RGB_SOCKET, *rgbSize))
                  D2 = np.array(calibData.getDistortionCoefficients(RGB_SOCKET))
                  try:
                      T = (
                          np.array(calibData.getCameraTranslationVector(ALIGN_SOCKET, RGB_SOCKET, False))
                          * 10
                      )  # to mm for matching the depth
                  except:
                      T = np.array([0.0, 0.0, 0.001])
                  try:
                      R = np.array(calibData.getCameraExtrinsics(ALIGN_SOCKET, RGB_SOCKET, False))[
                          0:3, 0:3
                      ]
                  except:
                      R = np.eye(3)
                  TARGET_MATRIX = M1
                  lensPosition = calibData.getLensPosition(RGB_SOCKET)
                  return M1, D1, M2, D2, T, R, TARGET_MATRIX

              def getAlignedDepth(frameDepth, M1, D1, M2, D2, T, R, TARGET_MATRIX, depthSize, rgbSize):
                  R1, R2, _, _, _, _, _ = cv2.stereoRectify(M1, D1, M2, D2, (100, 100), R, T)  # The (100,100) doesn't matter as it is not used for calculating the rotation matrices
                  leftMapX, leftMapY = cv2.initUndistortRectifyMap(M1, None, R1, TARGET_MATRIX, depthSize, cv2.CV_32FC1)
                  depthRect = cv2.remap(frameDepth, leftMapX, leftMapY, cv2.INTER_NEAREST)
                  newR = np.dot(R2, np.dot(R, R1.T))  # Should be very close to identity
                  newT = np.dot(R2, T)
                  combinedExtrinsics = np.eye(4)
                  combinedExtrinsics[0:3, 0:3] = newR
                  combinedExtrinsics[0:3, 3] = newT
                  depthAligned = reprojection(depthRect, TARGET_MATRIX, combinedExtrinsics, TARGET_MATRIX)
                  # Rotate the depth to the RGB frame
                  R_back = R2.T
                  mapX, mapY = cv2.initUndistortRectifyMap(TARGET_MATRIX, None, R_back, M2, rgbSize, cv2.CV_32FC1)
                  outputAligned = cv2.remap(depthAligned, mapX, mapY, cv2.INTER_NEAREST)
                  return outputAligned

              MIN_DEPTH = 500  # mm
              MAX_DEPTH = 10000  # mm

              def colorizeDepth(frameDepth, minDepth=MIN_DEPTH, maxDepth=MAX_DEPTH):
                  invalidMask = frameDepth == 0
                  # Log the depth, minDepth and maxDepth
                  logDepth = np.log(frameDepth, where=frameDepth != 0)
                  logMinDepth = np.log(minDepth)
                  logMaxDepth = np.log(maxDepth)
                  depthFrameColor = np.interp(logDepth, (logMinDepth, logMaxDepth), (0, 255)).astype(
                      np.uint8
                  )
                  depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_JET)
                  # Set invalid depth pixels to black
                  depthFrameColor[invalidMask] = 0
                  return depthFrameColor

              RGB_SOCKET = dai.CameraBoardSocket.RGB
              TOF_SOCKET = dai.CameraBoardSocket.CAM_A
              LEFT_SOCKET = dai.CameraBoardSocket.LEFT
              RIGHT_SOCKET = dai.CameraBoardSocket.RIGHT
              ALIGN_SOCKET = RIGHT_SOCKET

              COLOR_RESOLUTION = dai.ColorCameraProperties.SensorResolution.THE_1080_P
              LEFT_RIGHT_RESOLUTION = dai.MonoCameraProperties.SensorResolution.THE_800_P

              toFSize = (640, 480)
              rgbSize = (1280, 800)

              rgbWeight = 0.4
              depthWeight = 0.6

              def updateBlendWeights(percent_rgb):
                  global depthWeight
                  global rgbWeight
                  rgbWeight = float(percent_rgb) / 100.0
                  depthWeight = 1.0 - rgbWeight

              if __name__ == '__main__':
                  pipeline = create_pipeline(ALIGN_SOCKET)
                  rgb_depth_window_name = "rgb-depth"

                  with dai.Device(pipeline) as device:
                      cv2.namedWindow(rgb_depth_window_name)
                      cv2.createTrackbar(
                          "RGB Weight %",
                          rgb_depth_window_name,
                          int(rgbWeight * 100),
                          100,
                          updateBlendWeights,
                      )

                      # Create output queue
                      q_sync = device.getOutputQueue(name="sync_out", maxSize=4, blocking=False)

                      # Only retrieve calibration data if essential
                      try:
                          M1, D1, M2, D2, T, R, TARGET_MATRIX = get_calib(RGB_SOCKET, ALIGN_SOCKET, toFSize, rgbSize)
                      except:
                          raise

                      # Create timestamped directory
                      timestamp = time.strftime("%Y%m%d_%H%M%S")
                      save_dir = os.path.join("data", timestamp)
                      os.makedirs(save_dir, exist_ok=True)

                      frame_counter = 0
                      while True:
                          msgGrp = q_sync.get()
                          frames = {}
                          for name, msg in msgGrp:
                              frames[name] = msg.getCvFrame()

                          if len(frames) == 5:
                              # Process frames for display and save only when necessary
                              depth_tof_frame = frames['depth_tof']
                              depth_stereo_frame = frames['depth_stereo']
                              left_frame = frames['left_img']
                              right_frame = frames['right_img']
                              rgb_frame = frames['rgb_img']

                              # Save frames - comment out if saving is not needed to reduce lag
                              # depth_tof_filename = os.path.join(save_dir, f"depth_tof_{frame_counter:06d}.npy")
                              # depth_stereo_filename = os.path.join(save_dir, f"depth_stereo_{frame_counter:06d}.npy")
                              # left_filename = os.path.join(save_dir, f"left_img_{frame_counter:06d}.png")
                              # right_filename = os.path.join(save_dir, f"right_img_{frame_counter:06d}.png")
                              # rgb_filename = os.path.join(save_dir, f"rgb_img_{frame_counter:06d}.png")
                              # np.save(depth_tof_filename, depth_tof_frame)
                              # np.save(depth_stereo_filename, depth_stereo_frame)
                              # cv2.imwrite(left_filename, left_frame)
                              # cv2.imwrite(right_filename, right_frame)
                              # cv2.imwrite(rgb_filename, rgb_frame)

                              # Optional: Commenting out displays to reduce lag
                              # depth_tof_display = cv2.normalize(depth_tof_frame, None, 0, 255, cv2.NORM_MINMAX)
                              # depth_tof_display = np.uint8(depth_tof_display)
                              # depth_tof_display = cv2.applyColorMap(depth_tof_display, cv2.COLORMAP_JET)
                              # cv2.imshow("Depth ToF", depth_tof_display)
                              # depth_stereo_display = cv2.normalize(depth_stereo_frame, None, 0, 255, cv2.NORM_MINMAX)
                              # depth_stereo_display = np.uint8(depth_stereo_display)
                              # depth_stereo_display = cv2.applyColorMap(depth_stereo_display, cv2.COLORMAP_JET)
                              # cv2.imshow("Depth Stereo", depth_stereo_display)
                              # cv2.imshow("Left Image", left_frame)
                              # cv2.imshow("Right Image", right_frame)
                              # cv2.imshow("RGB Image", rgb_frame)

                              # Optional: Align depth with RGB frame (comment out if alignment is not essential)
                              alignedDepth = getAlignedDepth(depth_tof_frame, M1, D1, M2, D2, T, R, TARGET_MATRIX, toFSize, rgbSize)
                              alignedDepthColorized = colorizeDepth(alignedDepth)

                              # Blending RGB and Depth if visualization is needed
                              blended = cv2.addWeighted(rgb_frame, rgbWeight, alignedDepthColorized, depthWeight, 0)
                              cv2.imshow(rgb_depth_window_name, blended)

                              # Exit condition
                              if cv2.waitKey(1) == ord('q'):
                                  break

                          frame_counter += 1  # Update frame count if saving or referencing frames

                      device.close()
                      print('Data collection complete.')
                7 days later

                jakaskerl
                Any update on how to fix this? Or have you and your team come up with a different solution? I have still been unable to make any progress on my project because of the misalignment issue.

                  Hi gdeanrexroth
                  But the first script I sent worked for me, so whatever you changed in the script caused the issue.
                  The first script from jakaskerl worked, right?

                  Thanks,
                  Jaka

                    jakaskerl
                    Yes, the original script works. But I was trying to reduce the lag I was experiencing, which was due to the multiple streams being captured; I only displayed two streams with the second script that I gave you. With the camera feed slowed down by the amount of work it is doing, how can we truly know that the provided script is actually aligned? The first script seems aligned; however, when I take the depth images it generates, I am unable to produce a successful point cloud from any of them. I added the proper intrinsic and extrinsic values to the script that generates the point cloud. Still no luck.

                      gdeanrexroth
                      You can disable some streams, as you only need the ones used in the calculation. That should speed it up. If the frames are aligned while the device is still, then the intrinsics/extrinsics are correct.

                      If the point cloud is the problem and the rest of the pipeline works, it's probably just the point-cloud intrinsics that are wrong. You have to make sure you use the same ones as the script uses (so the ones from the EEPROM, not your calculated values).
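
                      As a sketch, pulling the intrinsics at the exact resolution of the saved depth frames and feeding them straight into Open3D would look something like this (CAM_A at 640x480 and the .npy file name are just examples, so match them to your recording):

                      import depthai as dai
                      import numpy as np
                      import open3d as o3d

                      with dai.Device() as device:
                          calib = device.readCalibration()
                          # Same socket and resolution as the saved depth frames (example: ToF on CAM_A at 640x480)
                          M = np.array(calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_A, 640, 480))

                      intrinsic = o3d.camera.PinholeCameraIntrinsic(
                          width=640, height=480,
                          fx=M[0, 0], fy=M[1, 1], cx=M[0, 2], cy=M[1, 2])

                      depth = o3d.geometry.Image(np.load("depth_tof_000000.npy").astype(np.float32))  # example file name
                      pcd = o3d.geometry.PointCloud.create_from_depth_image(depth, intrinsic, depth_scale=1000.0)
                      o3d.visualization.draw_geometries([pcd])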

                      Thanks,
                      Jaka