jakaskerl **Update**
I will check my OAK-D-SR-POE JSON file and edit this post to confirm what I see.
Below is my previous OAK-SR-POE.json. I updated my JSON to match yours; the camera now turns on and it doesn't throw an error. However, I do not have a physical board. If I use my computer screen to display the board, how would that impact my calibration? Again, one of my main goals is to use the camera to capture/generate point cloud data, displaying the clearest pcd possible.

Update: I have run the script as prompted and pressed the space bar to start capturing images, but got the response below. The board is displayed on my computer monitor; the script identifies the corners as expected but can't recognize the board. I am using the 24-inch 13x7 board and initiating the code with this command: `python calibrate.py -db charuco_24inch_13x7 -nx 13 -ny 7 -c 1 -cd 0 -s 4 -ms 3 -brd OAK-D-SR-POE`. Can I print out the board and try calibration?
"py: Saved image as: dataset\right\p0_0.png

Status of right is True

Time stamp of tof is 2 days, 20:36:45.198781

Markers count ... 0

Total markers needed -> 18

Status of tof is False

py: Capture failed, unable to find chessboard! Fix position and press spacebar again"
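
A quick way to sanity-check marker detection outside of calibrate.py is to run OpenCV's ArUco detector directly on one of the saved frames. This is a minimal sketch, assuming OpenCV >= 4.7 and that the board uses the DICT_4X4_50 dictionary (both are assumptions; use whatever dictionary your board definition actually specifies):

    import cv2

    # Load one of the frames the calibration script saved
    img = cv2.imread(r"dataset\right\p0_0.png", cv2.IMREAD_GRAYSCALE)

    # Assumption: the 13x7 board uses the DICT_4X4_50 dictionary; adjust to your board
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    corners, ids, rejected = detector.detectMarkers(img)
    print("Detected", 0 if ids is None else len(ids), "markers")  # the log above needs 18

Monitor glare or low contrast can drop this count to zero even when the corner overlay looks fine, which would be consistent with a printed board helping.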

    {
        "board_config": {
            "name": "OAK-D-SR-POE",
            "revision": "R0M0E0",
            "cameras": {
                "CAM_B": {
                    "name": "left",
                    "hfov": 71.86,
                    "type": "tof",
                    "extrinsics": {
                        "to_cam": "CAM_C",
                        "specTranslation": { "x": -2.0, "y": 0, "z": 0 },
                        "rotation": { "r": 0, "p": 0, "y": 0 }
                    }
                },
                "CAM_C": {
                    "name": "right",
                    "hfov": 71.86,
                    "type": "color",
                    "extrinsics": {
                        "to_cam": "CAM_A",
                        "specTranslation": { "x": -1.7382, "y": 0, "z": 0 },
                        "rotation": { "r": 0, "p": 0, "y": 0 }
                    }
                },
                "CAM_A": {
                    "name": "rgb",
                    "hfov": 71.86,
                    "type": "tof"
                }
            },
            "stereo_config": {
                "left_cam": "CAM_B",
                "right_cam": "CAM_C"
            }
        }
    }

I am testing it out right now. My camera is capturing 13 images instead of 39. Is that okay?
It captured all 13 of the 13 images and did exactly what the example video did. Here are my results:
    Using dataset path: dataset
    Starting image processing
    <------------Calibrating left ------------>
    INTRINSIC CALIBRATION
    Reprojection error of left: 0.8412142644266325
    <------------Calibrating right ------------>
    INTRINSIC CALIBRATION
    Reprojection error of right: 0.8775394613415113
    <------------Calibrating tof ------------>
    INTRINSIC CALIBRATION
    Reprojection error of tof: 0.5492520139036353
    <-------------Extrinsics calibration of left and right ------------>
    Reprojection error is 0.8740851772773527
    <-------------Epipolar error of left and right ------------>
    Original intrinsics ....
    L: [[842.68376755   0.         673.13407279]
        [  0.         851.86743982 412.48182057]
        [  0.           0.           1.        ]]
    R: [[836.24616608   0.         656.428313  ]
        [  0.         845.62659357 439.05911096]
        [  0.           0.           1.        ]]
    Intrinsics from the getOptimalNewCameraMatrix/Original ....
    L: [[836.24616608   0.         656.428313  ]
        [  0.         845.62659357 439.05911096]
        [  0.           0.           1.        ]]
    R: [[836.24616608   0.         656.428313  ]
        [  0.         845.62659357 439.05911096]
        [  0.           0.           1.        ]]
    Average Epipolar Error is : 0.20598603925134382
    Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
    <-------------Extrinsics calibration of right and tof ------------>
    Reprojection error is 1.1929608204958633
    <-------------Epipolar error of right and tof ------------>
    Original intrinsics ....
    L: [[418.12308304   0.         328.2141565 ]
        [  0.         422.81329678 219.52955548]
        [  0.           0.           1.        ]]
    R: [[494.35192765   0.         321.84779556]
        [  0.         499.48351759 218.30442303]
        [  0.           0.           1.        ]]
    Intrinsics from the getOptimalNewCameraMatrix/Original ....
    L: [[494.35192765   0.         321.84779556]
        [  0.         499.48351759 218.30442303]
        [  0.           0.           1.        ]]
    R: [[494.35192765   0.         321.84779556]
        [  0.         499.48351759 218.30442303]
        [  0.           0.           1.        ]]
    S:test_run\depthai\calibrate.py:1066: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
      calibration_handler.setDistortionCoefficients(stringToCam[camera], cam_info['dist_coeff'])
    S:test_run\depthai\calibrate.py:1105: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
      calibration_handler.setCameraExtrinsics(stringToCam[camera], stringToCam[cam_info['extrinsics']['to_cam']], cam_info['extrinsics']['rotation_matrix'], cam_info['extrinsics']['translation'], specTranslation)
    Reprojection error threshold -> 1.1111111111111112
    right Reprojection Error: 0.877539
    Reprojection error threshold -> 1.0
    tof Reprojection Error: 0.549252
    Flashing Calibration data into
    EEPROM VERSION being flashed is -> 7
    EEPROM VERSION being flashed is -> 7

This screen was shown at the end.

I printed the board out and mounted it on a flat surface. As of right now, I would assume this answers my question about whether a printed version of the board works; correct me if I'm wrong.

If the steps I completed are correct, then I am able to move forward with point cloud configuration, correct? Now that the camera is calibrated, can I add the correct extrinsic values to my code?
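
Rather than hardcoding the numbers printed at the end of calibration, the flashed values can also be read back from the device at runtime. A minimal sketch, assuming the DepthAI v2 API; the socket and resolution below are illustrative and should match the stream you actually process:

    import depthai as dai

    with dai.Device() as device:
        calib = device.readCalibration()
        # 3x3 intrinsic matrix for a socket at a given resolution (illustrative: CAM_A at 640x480)
        M = calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_A, 640, 480)
        fx, fy = M[0][0], M[1][1]
        cx, cy = M[0][2], M[1][2]
        # 4x4 extrinsic matrix between two sockets (translation is in centimeters)
        ext = calib.getCameraExtrinsics(dai.CameraBoardSocket.CAM_A, dai.CameraBoardSocket.CAM_C)
        print(fx, fy, cx, cy)
        print(ext)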

    jakaskerl
    Thank you for your help. Now that I am working with point clouds, calibration has helped and the values are correct. I have created my own script that uses the ToF and color camera nodes; I am using the ToF to capture depth and the color camera to display the pcd in color. Here are some of the things I have in my code (everything is in the order it appears in my code):

        # Create ToF node
        tof = pipeline.create(dai.node.ToF)
        tofConfig = tof.initialConfig.get()
        tofConfig.enableOpticalCorrection = True
        tofConfig.enablePhaseShuffleTemporalFilter = True
        tofConfig.phaseUnwrappingLevel = 5
        tofConfig.phaseUnwrapErrorThreshold = 300
        tof.initialConfig.set(tofConfig)

        # Camera intrinsic parameters (ensure I am using the correct calibration values)
        fx = 494.35192765  # update with my calibrated value
        fy = 499.48351759  # update with my calibrated value
        cx = 321.84779556  # update with my calibrated value
        cy = 218.30442303  # update with my calibrated value
        intrinsic = o3d.camera.PinholeCameraIntrinsic(width=640, height=480, fx=fx, fy=fy, cx=cx, cy=cy)
    I am using this functionality:

        # Convert depth and color images to Open3D format
        depth_o3d = o3d.geometry.Image(depth_map)
        color_o3d = o3d.geometry.Image(cv2.cvtColor(color_frame_resized, cv2.COLOR_BGR2RGB))

        # Generate and save colored point cloud
        rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color_o3d, depth_o3d, depth_scale=1000.0, depth_trunc=3.0, convert_rgb_to_intensity=False
        )
        color_pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd_image, intrinsic)
        color_pcd_filename = os.path.join(output_directory, f'color_pcd_{capture_count}.pcd')
        o3d.io.write_point_cloud(color_pcd_filename, color_pcd)

    I capture the images and save them to a file path. Then I run a Python script that uses Open3D: it loads the point cloud from that file path and visualizes it. My question is still about the sparseness. I have tried downsampling, ICP and global registration, changing the voxel size, and much more, but I am still getting unnecessary noise when viewing the pcd. Yes, it looks better with calibration, but I am a tad lost on why it is still doing this:

    Anything before the red line should not be there. It seems to be displaying the camera position and just distributing the points. Does the FOV also have an effect on this? In the second picture, anything after the red line should not be there. The points are trying to piece everything together but seem to be having an issue with it.
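
    For the stray points in front of the scene, a hedged suggestion: crop the cloud to the known working range and then run statistical outlier removal before any registration. A minimal sketch, assuming the cloud is in meters and the scene sits around 1.7 m (the file name and thresholds are illustrative):

        import numpy as np
        import open3d as o3d

        pcd = o3d.io.read_point_cloud("color_pcd_1.pcd")  # illustrative path

        # Keep only points inside the expected depth band along the camera z axis
        pts = np.asarray(pcd.points)
        keep = np.where((pts[:, 2] > 0.2) & (pts[:, 2] < 2.0))[0]
        pcd = pcd.select_by_index(keep)

        # Downsample, then drop points whose mean neighbor distance is an outlier
        pcd = pcd.voxel_down_sample(voxel_size=0.01)
        pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
        o3d.visualization.draw_geometries([pcd])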


      jakaskerl
      Baseline for the camera is 20 mm and the ToF range is 20 cm - 5 m. How does this affect the AOC and FOV? Do the FOV and AOC affect the point cloud and the pcd distribution? If I could account for them in the code that captures the pcd, would that potentially help?

      gdeanrexroth
      I suggest you modify the ToF config according to the docs. The phase unwrapping level, for example, seems a bit high and will introduce a bunch of unnecessary noise.

      Can't say for the host-side o3d drawing; could you use the same approach as this example:
      luxonis/depthai-experiments/blob/master/gen2-pointcloud/rgbd-pointcloud/main.py

      or create the pointcloud on-device and use:
      https://docs.luxonis.com/software/depthai/examples/pointcloud_visualization/
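
      For reference, the on-device route is roughly the shape below. A minimal sketch, assuming depthai >= 2.24 (where dai.node.PointCloud is available) and that the ToF sensor is on CAM_A; untested on this exact board:

          import depthai as dai
          import numpy as np

          pipeline = dai.Pipeline()
          cam = pipeline.create(dai.node.Camera)
          cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)  # assumption: ToF socket

          tof = pipeline.create(dai.node.ToF)
          pcl = pipeline.create(dai.node.PointCloud)
          xout = pipeline.create(dai.node.XLinkOut)
          xout.setStreamName("pcl")

          cam.raw.link(tof.input)
          tof.depth.link(pcl.inputDepth)
          pcl.outputPointCloud.link(xout.input)

          with dai.Device(pipeline) as device:
              q = device.getOutputQueue("pcl", maxSize=4, blocking=False)
              msg = q.get()                      # dai.PointCloudData
              pts = np.asarray(msg.getPoints())  # Nx3 array of XYZ points
              print(pts.shape)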

      Thanks,
      Jaka

        jakaskerl
        1. I have tried changing the value based on the distance between the camera and the object; I am seeing what gives me the best result.
        2. The main.py method/usage of Open3D is a good reference, but the script I have to visualize the pcd is this:

            import open3d as o3d

            # Path to the .pcd file
            pcd_file_path = r'S:gcolor_pcd_1.pcd'

            # Load the point cloud
            pcd = o3d.io.read_point_cloud(pcd_file_path)

            # Visualize the point cloud
            o3d.visualization.draw_geometries([pcd], window_name="ToF Point Cloud")

        3. The on-device point cloud method from that link does not work. I believe you once sent me a link to an updated version of the script and it worked. I have modified the script a few times to test, but an error at this line:

            inMessage = q.get()

        always pops up in my terminal.

        Could it possibly be the way I am utilizing Open3D? I have tried visualization methods that stitch two point clouds into one, but I am still left with unnecessary noise. Adjusting the depth_scale and depth_trunc does somewhat change the output of my pcd whenever I visualize it. The snippet below is part of my code; I am using Open3D here to convert the depth from the ToF sensor and the color camera. Currently the object is roughly 170 cm from the camera. For testing purposes I specifically want to capture only that distance, but I am still getting some noise in the top view and unnecessary noise in the side views:
            # Convert depth and color images to Open3D format
            depth_o3d = o3d.geometry.Image(depth_map)
            color_o3d = o3d.geometry.Image(cv2.cvtColor(color_frame_resized, cv2.COLOR_BGR2RGB))

            # Generate and save colored point cloud
            rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
                color_o3d, depth_o3d, depth_scale=1700.0, depth_trunc=3.7, convert_rgb_to_intensity=False
            )
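
        One note on the Open3D convention here: depth_scale is the number of depth units per meter (1000.0 for a millimeter depth map, which is what the ToF depth output uses), and depth_trunc is the cutoff in meters after scaling; neither parameter is the distance to the object. A minimal sketch for a roughly 1.7 m scene, assuming depth_map is uint16 millimeters:

            # Assumption: depth_map is uint16 in millimeters
            rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
                color_o3d, depth_o3d,
                depth_scale=1000.0,  # mm -> m conversion factor, not the object distance
                depth_trunc=2.0,     # discard everything beyond 2 m for a ~1.7 m scene
                convert_rgb_to_intensity=False
            )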

        jakaskerl
        The pcd that I captured using the ToF sensor and color camera was tested another way: I loaded the captured pcd into an online website that renders point clouds, and it displayed the same as in my code.
        I will look at the ToF configuration again, along with the point cloud code. I am somewhat close to figuring this out, but I am still lost in some areas. Here is the screenshot of the website's pcd.

        The JSON below is the saved camera-parameter information for the captured pcd from my script:
            {
                "class_name" : "PinholeCameraParameters",
                "extrinsic" :
                [
                    0.97137093405355701,
                    0.080079400112124194,
                    0.22366447673603115,
                    0.0,
                    -0.17295492005240018,
                    0.88380774499819115,
                    0.43470733316897259,
                    0.0,
                    -0.16286529435575947,
                    -0.46094593995271788,
                    0.87235528102689741,
                    0.0,
                    -0.019386541654994344,
                    -0.24191574413614514,
                    1.0362066902315488,
                    1.0
                ],
                "intrinsic" :
                {
                    "height" : 1009,
                    "intrinsic_matrix" :
                    [
                        873.8196324184986,
                        0.0,
                        0.0,
                        0.0,
                        873.8196324184986,
                        0.0,
                        959.5,
                        504.0,
                        1.0
                    ],
                    "width" : 1920
                },
                "version_major" : 1,
                "version_minor" : 0
            }

        jakaskerl
        The object is 167 cm away from the front of the camera. Do I apply this to phaseUnwrappingLevel? Would mine be set to 0, since my distance is less than 1.87 meters? I have tried that, but it still doesn't remove the unnecessary noise. I have looked at most of the links you recommended and modified my code accordingly. Calibrating the camera and getting both the intrinsic and extrinsic values helped a lot, as did setting the ToF configuration values based on the Luxonis documentation. Simple modifications based on the documentation have helped a lot, but the one issue I keep running into is the unnecessary noise.

        Could my background (the reflective ceiling lights, the glossy floor and cabinets, etc.) be contributing to it?
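
        For reference, "set to 0" with the same config fields used earlier would look like the sketch below (a minimal sketch; the exact distance range covered by each unwrapping level is listed in the Luxonis ToF docs, so level 0 for a 1.67 m target is an assumption based on the 1.87 m figure above):

            tofConfig = tof.initialConfig.get()
            tofConfig.enableOpticalCorrection = True
            tofConfig.enablePhaseShuffleTemporalFilter = True
            tofConfig.phaseUnwrappingLevel = 0        # assumption: object at 1.67 m, inside the level-0 range
            tofConfig.phaseUnwrapErrorThreshold = 300
            tof.initialConfig.set(tofConfig)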

          gdeanrexroth
          External lights could contribute to the noise. From our tests (let me quote): "more light makes more outliers and also it make bigger variance of the error."

          What do you mean by unnecessary noise? How much noise are we talking about? I'm afraid you are reaching HW limits. The image you sent looks pretty good.

          Thanks,
          Jaka

            jakaskerl
            Can you explain the HW limits you are referring to?

            It may not be unnecessary noise; what I am trying to express is more the form of the objects at hand. Here are examples of what I mean. I am providing the ToF depth and RGB windows to show you the depth and the raw color image. The provided screenshot shows the side of the point cloud. I have used the global registration method to combine two point clouds into one. It does look good, as you said, and I agree with that. I am wondering why the sides are struggling to form.

              jakaskerl
              The issue I am running into is that, with both filtration and just displaying the point cloud as-is, I am still experiencing lingering points, versus getting the pcd to look like this. So again, the point cloud that I have is cleaned, yet I am trying to get it to form like the example below, which I took from this link: https://www.open3d.org/docs/latest/tutorial/Advanced/pointcloud_outlier_removal.html
              Also, I have applied global and ICP registration to the pcd, and the outcome has been nearly the same as without the filters. Could this be a calculation issue, or maybe something I am not doing right?

              gdeanrexroth

              gdeanrexroth I have 287685 points within the pcd file. Is this too much to capture at one time?

              As long as your PC is capable enough, no.

              gdeanrexroth I have used the global registration method to combine two point clouds into one.

              The pointclouds don't look fully aligned. I know you don't need alignment to perform global registration, but this seems off.
              What do the stereo and ToF images look like? I would suspect that if the perspectives are too different, the registration won't work.

              Thanks,
              Jaka

                jakaskerl Yes, the points are not aligning. I have tried both ICP and global registration. I will try to find a way to give you both the stereo and ToF images; I have already given you the ToF in the image above. The registration process works: it removes the outlier points and removes back points. My last goal is simply to have it combine into one, with the objects' points aligned together as one.
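
                For the final merge, a hedged suggestion: use the global registration result only as the initial guess, refine it with ICP, and then concatenate. A minimal sketch (the voxel size and correspondence distance are illustrative and depend on the scene scale):

                    import open3d as o3d

                    # source, target: the two clouds; init_T: 4x4 transform from global registration
                    def refine_and_merge(source, target, init_T, voxel=0.01):
                        src = source.voxel_down_sample(voxel)
                        tgt = target.voxel_down_sample(voxel)
                        src.estimate_normals()
                        tgt.estimate_normals()
                        result = o3d.pipelines.registration.registration_icp(
                            src, tgt, max_correspondence_distance=voxel * 2, init=init_T,
                            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
                        # Apply the refined transform to the full-resolution source and merge
                        return source.transform(result.transformation) + target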