jakaskerl
Thank you for your help. Now that I am working with point clouds, calibration has helped and the values are correct. I have created my own script that uses the ToF and color camera nodes: I use the ToF to capture depth and the color camera to display the pcd in color. Here are some of the things I have in my code (everything is in the order it appears):

```python
# Create ToF node
tof = pipeline.create(dai.node.ToF)

tofConfig = tof.initialConfig.get()
tofConfig.enableOpticalCorrection = True
tofConfig.enablePhaseShuffleTemporalFilter = True
tofConfig.phaseUnwrappingLevel = 5
tofConfig.phaseUnwrapErrorThreshold = 300
tof.initialConfig.set(tofConfig)

# Camera intrinsic parameters (ensure I am using the correct calibration values)
fx = 494.35192765  # my calibrated value
fy = 499.48351759  # my calibrated value
cx = 321.84779556  # my calibrated value
cy = 218.30442303  # my calibrated value

intrinsic = o3d.camera.PinholeCameraIntrinsic(width=640, height=480, fx=fx, fy=fy, cx=cx, cy=cy)
```
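As a side note on `phaseUnwrappingLevel`: for a continuous-wave ToF sensor the unambiguous range is c / (2 · f_mod), and phase unwrapping extends it in multiples of that. The sketch below assumes an 80 MHz modulation frequency (which matches the ~1.87 m figure discussed later in this thread) and a simple (level + 1) multiplier; the exact frequency and mapping for your sensor should be checked against the Luxonis docs:

```python
# Rough relation between phase unwrapping level and maximum measurable distance
# (assumption: 80 MHz modulation, range extends as (level + 1) multiples).
C = 299_792_458  # speed of light, m/s

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum distance measurable without phase wrapping."""
    return C / (2 * f_mod_hz)

def max_range(level: int, f_mod_hz: float = 80e6) -> float:
    """Approximate maximum range for a given phase unwrapping level."""
    return (level + 1) * unambiguous_range(f_mod_hz)

print(round(max_range(0), 2))  # -> 1.87, already enough for an object at 1.67 m
print(round(max_range(5), 2))  # -> 11.24, far beyond a ~1.7 m scene
```

If the target really sits at ~1.7 m, a high level mostly buys range you do not need, at the cost of extra unwrapping noise.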
I am using this functionality:
```python
# Convert depth image to Open3D format
depth_o3d = o3d.geometry.Image(depth_map)
color_o3d = o3d.geometry.Image(cv2.cvtColor(color_frame_resized, cv2.COLOR_BGR2RGB))

# Generate and save colored point cloud
rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color_o3d, depth_o3d, depth_scale=1000.0, depth_trunc=3.0, convert_rgb_to_intensity=False
)
color_pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd_image, intrinsic)
color_pcd_filename = os.path.join(output_directory, f'color_pcd_{capture_count}.pcd')
o3d.io.write_point_cloud(color_pcd_filename, color_pcd)
```
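To sanity-check how `depth_scale`, `depth_trunc`, and the intrinsic interact, here is a minimal numpy sketch of the pinhole back-projection that Open3D performs internally (the `backproject` helper is hypothetical, written only for this illustration):

```python
import numpy as np

def backproject(depth_mm, fx, fy, cx, cy, depth_scale=1000.0, depth_trunc=3.0):
    """Back-project a depth image (raw units, e.g. mm) into 3D points in meters.

    Mirrors the pinhole model used by create_from_rgbd_image: depth is divided
    by depth_scale to get meters, points beyond depth_trunc are dropped, and
    pixel (u, v) maps to x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / depth_scale   # raw units -> meters
    valid = (z > 0) & (z <= depth_trunc)            # truncate far / empty pixels
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# A single pixel near the principal point, 1700 mm away, lands near (0, 0, 1.7):
depth = np.zeros((480, 640), dtype=np.uint16)
depth[218, 322] = 1700
print(backproject(depth, 494.35, 499.48, 321.85, 218.30))
```

This also shows why wrong intrinsics smear points sideways: any error in fx/fy/cx/cy scales directly into the x and y coordinates.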

I capture the images and save them to a file path, then run a separate Python script that uses Open3D to load the point cloud from that path and visualize it. My question is still about the sparseness: I have tried downsampling, ICP and global registration, changing the voxel size, and much more, but I am still getting unnecessary noise when viewing the pcd. Yes, it looks better with calibration, but I am a tad lost on why it is still doing this:

Anything before the red line should not be there. It seems to be displaying the camera position and just scattering the points. Does the FOV also have an effect on this? In the second pic, anything after the red line should not be there. The points are trying to piece everything together but seem to be having an issue with it.


    jakaskerl
    Baseline for the camera is 20mm and the ToF range is 20cm - 5m. How does this affect the AOC and FOV? Do the FOV and AOC affect the point cloud and its distribution? If I could account for them within the code that captures the pcd, would that potentially help?

    gdeanrexroth
    I suggest you modify the ToF config according to the docs. The phase unwrapping level, for example, seems a bit high and will introduce a bunch of unnecessary noise.

    Can't say for the host side o3d drawing; could you use the same approach as this example:
    luxonis/depthai-experiments/blob/master/gen2-pointcloud/rgbd-pointcloud/main.py

    or create pointcloud on device and use:
    https://docs.luxonis.com/software/depthai/examples/pointcloud_visualization/

    Thanks,
    Jaka

      jakaskerl
      1. I have tried changing the value based on the distance between the camera and the object, and I am seeing what gives me the best result.
      2. The main.py usage of Open3D is a good reference, but the script I have to visualize the pcd is this:
```python
import open3d as o3d

# Path to the .pcd file
pcd_file_path = r'S:gcolor_pcd_1.pcd'

# Load the point cloud
pcd = o3d.io.read_point_cloud(pcd_file_path)

# Visualize the point cloud
o3d.visualization.draw_geometries([pcd], window_name="ToF Point Cloud")
```
      3. The on-device point cloud method from that link does not work. I believe you previously sent me a link to an updated version of the script, and that one worked. I have modified the script a few times to test, but an error at this line:
      inMessage = q.get()
      always pops up in my terminal.

      Could it possibly be the way I am utilizing Open3D? I have tried visualization methods that stitch two point clouds into one, but I am still left with unnecessary noise. Adjusting depth_scale and depth_trunc does somewhat change the output of my pcd whenever I visualize it. The snippet below is part of my code; I am using Open3D here to convert the depth from the ToF sensor and color camera. Currently the object is roughly 170 cm from the camera. For testing purposes I specifically want to capture only that distance, but I am still getting some noise in the top view and unnecessary noise in the side views:
```python
# Convert depth image to Open3D format
depth_o3d = o3d.geometry.Image(depth_map)
color_o3d = o3d.geometry.Image(cv2.cvtColor(color_frame_resized, cv2.COLOR_BGR2RGB))

# Generate and save colored point cloud
rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color_o3d, depth_o3d, depth_scale=1700.0, depth_trunc=3.7, convert_rgb_to_intensity=False
)
```
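      One thing worth double-checking here: in Open3D, depth_scale is the unit-conversion factor from raw depth values to meters (1000.0 when the depth map is in millimeters), not the camera-to-object distance. Setting it to 1700.0 rescales the whole cloud instead of selecting a 170 cm range; depth_trunc (in meters, applied after scaling) is what clips far points. A quick numeric check:

```python
import numpy as np

raw_depth_mm = np.array([1700.0])  # object measured at 1.70 m

# Correct: mm -> m conversion keeps the true distance.
print(raw_depth_mm / 1000.0)   # -> [1.7]

# depth_scale=1700 instead collapses that point to 1.0 m; the geometry of
# the whole cloud shrinks, which can look like misplaced or noisy points.
print(raw_depth_mm / 1700.0)   # -> [1.]
```

      So for millimeter depth, keeping depth_scale=1000.0 and setting depth_trunc to just past the object (e.g. ~2.0 for a 1.7 m scene) would be the way to "capture only that distance".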

      jakaskerl
      The pcd that I captured using the ToF sensor and color camera was tested another way: I loaded the captured pcd into an online point cloud viewer, and it displayed the same as in my code.
      I will look at the ToF configuration again along with the point cloud code. I am somewhat close to figuring this out but still lost in some areas. Here is the screenshot of the website's rendering.

      The JSON below is the saved camera information for the captured pcd from my script.

```json
{
    "class_name" : "PinholeCameraParameters",
    "extrinsic" :
    [
        0.97137093405355701,
        0.080079400112124194,
        0.22366447673603115,
        0.0,
        -0.17295492005240018,
        0.88380774499819115,
        0.43470733316897259,
        0.0,
        -0.16286529435575947,
        -0.46094593995271788,
        0.87235528102689741,
        0.0,
        -0.019386541654994344,
        -0.24191574413614514,
        1.0362066902315488,
        1.0
    ],
    "intrinsic" :
    {
        "height" : 1009,
        "intrinsic_matrix" :
        [
            873.8196324184986,
            0.0,
            0.0,
            0.0,
            873.8196324184986,
            0.0,
            959.5,
            504.0,
            1.0
        ],
        "width" : 1920
    },
    "version_major" : 1,
    "version_minor" : 0
}
```

      jakaskerl
      The object is 167 cm from the front of the camera. Do I apply this to phaseUnwrappingLevel? Would mine be set to 0, since my distance is less than 1.87 meters? I have tried that, but it still doesn't remove the unnecessary noise. I have looked at most of the links you recommended and modified my code accordingly. Calibrating the camera and getting both intrinsic and extrinsic values helped a lot, as did setting the ToF configuration values based on the Luxonis documentation. Simple modifications based on the documentation have helped a lot, but the one issue I keep running into is the unnecessary noise.

      Could my background (the reflective ceiling lights, the glossy floor and cabinets, etc.) be contributing to it?

        gdeanrexroth
        External lights could contribute to the noise. From our tests (let me quote): more light makes more outliers and also it make bigger variance of the error.

        What do you mean by unnecessary noise? How much noise are we talking about? I'm afraid you are reaching HW limits. The image you sent looks pretty good.

        Thanks,
        Jaka

          jakaskerl
          Can you explain the HW limits you are referring to?

          It may not be unnecessary noise; what I am trying to express is more about how well the objects take shape. Here are examples of what I mean. I am providing the ToF depth and RGB windows to show you the depth and raw color images. The provided screenshot shows the side of the point cloud. I have used the global registration method to combine two point clouds into one. It does look good, as you said; I agree with that. I am wondering why the sides are struggling to form.

            jakaskerl
            The issue I am coming in with is that, with both filtration and just displaying the point cloud as-is, I am still experiencing lingering points, versus getting the pcd to look like this. So again, the point cloud that I have is cleaned, yet I am trying to get it to form like the example below, which I took from this link: https://www.open3d.org/docs/latest/tutorial/Advanced/pointcloud_outlier_removal.html
            Also, I have applied global and ICP registration to the pcd, and the outcome has been nearly the same as without the filters. Could this be a calculation issue, or maybe something that I am not doing right?
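            For comparison, the statistical outlier removal from that tutorial can be reproduced with plain numpy; the sketch below implements the same idea (in Open3D itself this is pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0); the helper here is hypothetical, just for illustration):

```python
import numpy as np

def statistical_outlier_mask(points, k=20, std_ratio=2.0):
    """Return a boolean mask keeping inlier points.

    Same idea as Open3D's remove_statistical_outlier: points whose mean
    k-NN distance exceeds (global mean + std_ratio * global std) are
    outliers. Brute-force O(n^2) distances, fine for small demo clouds.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # skip column 0 (self-distance 0)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return mean_knn <= thresh

# Dense cluster plus one far-away stray point:
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.01, (200, 3)), [[5.0, 5.0, 5.0]]])
mask = statistical_outlier_mask(cloud, k=20)
print(mask.sum())  # -> 200: the stray point is dropped
```

            If lingering points survive this kind of filter, they are usually not isolated specks but a coherent (if wrong) surface, e.g. multipath reflections from glossy floors or lights, which statistical filters cannot distinguish from real geometry.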

            gdeanrexroth

            gdeanrexroth I have 287685 points within the pcd file. Is this too much to capture at one time?

            As long as your PC is capable enough, no.

            gdeanrexroth I have used the global registration method to combine two point clouds into one.

            The point clouds don't look fully aligned. I know you don't need alignment to perform global registration, but this seems off.
            What do the stereo and ToF images look like? I would suspect that if the perspectives are too different, the registration won't work.

            Thanks,
            Jaka

              jakaskerl Yes, the points are not aligning. I have tried both ICP and global registration. I will try to find a way to give you both stereo and ToF; I have already given you the ToF in the image above. The registration process works: it removes the outlier points and the back points. My last goal is simply to combine the clouds into one, with the objects' points aligned together.