I believe I mistakenly calibrated my camera with another process. I have the OAK-D-SR-POE camera and have cloned the luxonis/depthai repo (calibrate.py). Now I am trying to calibrate the camera correctly based on this link: Calibration (luxonis.com)

So far I have run the script and came upon this:
PS S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\depthai> python calibrate.py -s 3.0 -ms 3.5 -brd OAK-D-SR-POE -m process

Cam: rgb and focus: False

Cam: left and focus: False

Cam: right and focus: False

Traceback (most recent call last):

File "S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\depthai\calibrate.py", line 1183, in <module>

Main().run()

^^^^^^

File "S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\depthai\calibrate.py", line 453, in init

self.charuco_board = cv2.aruco.CharucoBoard_create(

                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

cv2.error: OpenCV(4.5.5) D:\a\opencv-python\opencv-python\opencv_contrib\modules\aruco\src\charuco.cpp:126: error: (-215:Assertion failed) squaresX > 1 && squaresY > 1 && markerLength > 0 && squareLength > markerLength in function 'cv::aruco::CharucoBoard::create'

The camera does light up green, but then it crashes and displays the message above. Any suggestions? I also believe this could be the reason some of my point cloud scripts produce sparse output, due to an uncalibrated camera. I thought I did it correctly the first time; however, I mistook it for another process.
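The assertion message itself names the constraint: squareLength must be strictly greater than markerLength, while the command above passes -s 3.0 (square size) with -ms 3.5 (marker size). A minimal sketch of that constraint with the old OpenCV 4.5.x aruco API; the 4x4_1000 dictionary here is an assumption, not necessarily what calibrate.py uses:

import cv2

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_1000)  # assumed dictionary

# Fails the assertion, mirroring -s 3.0 -ms 3.5 above (marker larger than square):
# cv2.aruco.CharucoBoard_create(13, 7, 3.0, 3.5, dictionary)

# Passes, mirroring the -s 4 -ms 3 used later in the thread:
board = cv2.aruco.CharucoBoard_create(13, 7, 4.0, 3.0, dictionary)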

    gdeanrexroth
    Update: I came across this link: https://luxonis-depthai-hardware.readthedocs-hosted.com/en/latest/pages/guides/calibration/#tof-calibration
    I followed the git command "git checkout new_tof_calib"; the terminal then output this: "warning: unable to rmdir 'depthai_calibration': Directory not empty

    Updating files: 100% (316/316), done.

    M resources/depthai_boards

    branch 'new_tof_calib' set up to track 'origin/new_tof_calib'."

    I ran the calibrate.py file and got new output in the terminal. Here is the result:
    "Starting image processing

    Starting image processingxxccx

    11

    squareX is 11

    <------------Calibrating left ------------>

    Device closed in exception..

    ERROR: Images not read correctly, check directory

    Traceback (most recent call last):

    File "S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\depthai\calibrate.py", line 968, in calibrate

    status, result_config = stereo_calib.calibrate(
    
                            ^^^^^^^^^^^^^^^^^^^^^^^

    File "S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\depthai\depthai_helpers\calibration_utils.py", line 121, in calibrate

    ret, intrinsics, dist_coeff, _, _, size = self.calibrate_intrinsics(
    
                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\depthai\depthai_helpers\calibration_utils.py", line 258, in calibrate_intrinsics

    assert len(
    
           ^^^^

    AssertionError: ERROR: Images not read correctly, check directory"
    Besides switching to the new branch, nothing has changed within the code.

    Hi @gdeanrexroth

    git checkout new_tof_calib
    git pull --recurse-submodules

    LMK if this is enough to solve the issues.

    Thanks,
    Jaka

    11 days later

    Brandon
    After running the commands that you mentioned, the traceback changed:

    python calibrate.py -db -nx 13 -ny 7 -c 1 -cd 0 -s 4 -ms 3 -brd OAK-D-SR-POE (command I entered in the terminal)

    [{socket: CAM_A, sensorName: S5K33D, width: 640, height: 480, orientation: ROTATE_180_DEG, supportedTypes: [TOF], hasAutofocus: 0, hasAutofocusIC: 0, name: tof}, {socket: CAM_B, sensorName: OV9782, width: 1280, height: 800, orientation: ROTATE_180_DEG, supportedTypes: [COLOR], hasAutofocus: 0, hasAutofocusIC: 0, name: left}, {socket: CAM_C, sensorName: OV9782, width: 1280, height: 800, orientation: ROTATE_180_DEG, supportedTypes: [COLOR], hasAutofocus: 0, hasAutofocusIC: 0, name: right}]

    Cam: rgb and focus: False

    Cam: left and focus: False

    Cam: right and focus: False

    Camera type ToF

    Traceback (most recent call last):

    File "S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\depthai\calibrate.py", line 1207, in <module>

    File "S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\depthai\calibrate.py", line 333, in init

    pipeline = self.create_pipeline()

               ^^^^^^^^^^^^^^^^^^^^^^

    File "S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\depthai\calibrate.py", line 445, in create_pipeline

    tof_config.depthParams.freqModUsed = dai.RawToFConfig.DepthParams.TypeFMod.MIN

                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    AttributeError: type object 'depthai.RawToFConfig' has no attribute 'DepthParams'

      gdeanrexroth
      However, when I commented out line 445 and ran the same command, the interface to capture the images popped up on the screen. But the results are different:
      Cam: rgb and focus: False

      Cam: left and focus: False

      Cam: right and focus: False

      Camera type ToF

      Starting image capture. Press the [ESC] key to abort.

      Will take 13 total images, 1 per each polygon.

      left 0:01:25.766316

      new minimum: {'ts': 0.0, 'indicies': {'left': 0}} min required: 0.2

      synced frames: None

      right 0:01:25.766330

      new minimum: {'ts': 1.3999999993075107e-05, 'indicies': {'left': 0, 'right': 0}} min required: 0.2

      synced frames: None

      rgb 0:01:25.667397

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      synced frames: None

      left 0:01:25.866318

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      synced frames: None

      right 0:01:25.866340

      rgb 0:01:25.767397

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      new minimum: {'ts': 0.0010949999999922966, 'indicies': {'left': 0, 'right': 0, 'rgb': 1}} min required: 0.2

      synced frames: None

      left 0:01:25.966316

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      new minimum: {'ts': 0.0010949999999922966, 'indicies': {'left': 0, 'right': 0, 'rgb': 1}} min required: 0.2

      synced frames: None

      right 0:01:25.966328

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      new minimum: {'ts': 0.0010949999999922966, 'indicies': {'left': 0, 'right': 0, 'rgb': 1}} min required: 0.2

      synced frames: None

      rgb 0:01:25.867395

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      new minimum: {'ts': 0.0010949999999922966, 'indicies': {'left': 0, 'right': 0, 'rgb': 1}} min required: 0.2

      synced frames: None

      rgb 0:01:25.967395

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      new minimum: {'ts': 0.0010949999999922966, 'indicies': {'left': 0, 'right': 0, 'rgb': 1}} min required: 0.2

      synced frames: None

      right 0:01:26.066337

      left 0:01:26.066318

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      new minimum: {'ts': 0.0010949999999922966, 'indicies': {'left': 0, 'right': 0, 'rgb': 1}} min required: 0.2

      synced frames: None

      left 0:01:26.166316

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      new minimum: {'ts': 0.0010949999999922966, 'indicies': {'left': 0, 'right': 0, 'rgb': 1}} min required: 0.2

      synced frames: None

      right 0:01:26.166329

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      new minimum: {'ts': 0.0010949999999922966, 'indicies': {'left': 0, 'right': 0, 'rgb': 1}} min required: 0.2

      synced frames: None

      rgb 0:01:26.067393

      new minimum: {'ts': 0.09893300000000238, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

      new minimum: {'ts': 0.0010949999999922966, 'indicies': {'left': 0, 'right': 0, 'rgb': 1}} min required: 0.2

      Returning synced messages with error: 0.0010949999999922966 {'left': 0, 'right': 0, 'rgb': 1}

      synced frames: {'left': <depthai.ImgFrame object at 0x0000019B3839E830>, 'right': <depthai.ImgFrame object at 0x0000019B59F61530>, 'rgb': <depthai.ImgFrame object at 0x0000019B59F03BF0>}

      Timestamp of left is 0:01:25.766316

      Timestamp of right is 0:01:25.766330

      Timestamp of rgb is 0:01:25.767397

      Traceback (most recent call last):

      File "S:\test_run\depthai\calibrate.py", line 1207, in <module>

      Main().run()

      File "S:\test_run\depthai\calibrate.py", line 1196, in run

      self.capture_images_sync()

      File "S:\DEPT\SVM4\Shared\Crossfunctional_Work\Projects\DepthCameras\LuxonisDepthAI\test_run\depthai\calibrate.py", line 590, in capture_images_sync

      gray_frame = cv2.cvtColor(frameMsg.getCvFrame(), cv2.COLOR_BGR2GRAY)

                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

      cv2.error: OpenCV(4.5.5) d:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function '__cdecl cv::impl::`anonymous-namespace'::CvtHelper<struct cv::impl::`anonymous namespace'::Set<3,4,-1>,struct cv::impl::A0xa96199bf::Set<1,-1,-1>,struct cv::impl::A0xa96199bf::Set<0,2,5>,2>::CvtHelper(const class cv::InputArray &,const class cv::OutputArray &,int)'

      > Invalid number of channels in input image:

      > 'VScn::contains(scn)'

      > where

      > 'scn' is 1
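      The "'scn' is 1" part means the frame handed to cvtColor already has a single channel, so cv2.COLOR_BGR2GRAY (which expects 3 channels) cannot apply. A hedged guard around the failing line, keeping the variable names from the traceback:

      frame = frameMsg.getCvFrame()
      # Mono/ToF frames arrive single-channel; only convert 3-channel frames.
      if frame.ndim == 3 and frame.shape[2] == 3:
          gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      else:
          gray_frame = frame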

        gdeanrexroth
        Update: Disregard the above comments. I followed the git submodule command that you mentioned; it created the "new_tof_calib" branch on my local computer. As I was going through the code and debugging it, I found that most of the methods and functions depend on Python library versions that are not installed on my computer. After going through various documentation, I updated my conda environment to match the updated script. Now that it is updated, I was able to run the code and display the interface as intended. But whenever I press the space bar to capture the images, it prompts this: "python calibrate.py -db charuco_24inch_13x7 -nx 13 -ny 7 -c 1 -cd 0 -s 4 -ms 3 -brd OAK-D-SR-POE

        Cam: rgb and focus: False

        Cam: left and focus: False

        Cam: right and focus: False

        Saving dataset to: dataset

        Sensor name for left is OV9782

        Sensor name for right is OV9782

        Camera type ToF

        Starting image capture. Press the [ESC] key to abort.

        Will take 13 total images.

        Start capturing...

        new minimum: {'ts': 0.09893999999999892, 'indicies': {'left': 0, 'right': 0, 'rgb': 0}} min required: 0.2

        new minimum: {'ts': 0.0010839999999987526, 'indicies': {'left': 0, 'right': 0, 'rgb': 1}} min required: 0.2

        Time stamp of left is 3:23:16.393060

        Markers count ... 39

        Total markers needed -> 18

        py: Saved image as: dataset\left\p0_0.png

        Status of left is True

        Time stamp of right is 3:23:16.393072

        Markers count ... 25

        Total markers needed -> 18

        py: Saved image as: dataset\right\p0_0.png

        Status of right is True

        Time stamp of rgb is 3:23:16.394132

        Markers count ... 0

        Total markers needed -> 18

        Status of rgb is False

        py: Capture failed, unable to find chessboard! Fix position and press spacebar again"

          jakaskerl
          In previous documentation, users have had issues with this error:
          import depthai_calibration.calibration_utils as calibUtils

          ModuleNotFoundError: No module named 'depthai_calibration.calibration_utils'

          I am having the same error. In one post you recommended this: "Change .git/config under a given submodule from ssh path to https path:

          active = true
          - url = git@github.com:luxonis/depthai-calibration.git
          + url = https://github.com/luxonis/depthai-calibration.git
          "
          I have tried this method, but it prompted me this message: fatal: no submodule mapping found in .gitmodules for path 'depthai-calibration'

          To add onto this, I have changed the submodule url to test whether it would do anything different. I then ran the command "git submodule update --init --recursive" and it resulted in "fatal: No url found for submodule path 'resources/depthai-boards' in .gitmodules". My main error is the module not being found.

            gdeanrexroth
            Go to main branch, init the submodules, then switch to tof_calib. Should work then. Just tried by purging the whole clone.

            Thanks,
            Jaka
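            A hedged spelling of that sequence, using the branch names mentioned in the thread:

            git checkout main
            git submodule update --init --recursive
            git checkout tof_calib
            git pull --recurse-submodules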

              jakaskerl
              Before you made your comment, I had tried to update my submodule through depthai_calibration, and then I ran the command "python calibrate.py -db 24_charuco13x7 -nx 13 -ny 7 -c 1 -cd 0 -s 4 -ms 3 -brd OAK-D-SR-POE". But my results are below. I am on the tof_calib branch. The interface which displays the info popped up and it allowed me to press space, but the errors appeared at the bottom. Is there a loop within the code that could be causing this?

              Cam: rgb and focus: False

              Cam: left and focus: False

              Cam: right and focus: False

              Saving dataset to: dataset

              Camera type ToF

              Sensor name for right is OV9782

              Camera type ToF

              Starting image capture. Press the [ESC] key to abort.

              Will take 13 total images.

              [14442C10A1C6AECF00] [169.254.1.222] [7.072] [ToF(1)] [error] Unexpected input image size 1280 x 800, maybe connected camera is not ToF?

                gdeanrexroth
                I think you might still be using an outdated submodule for the boards, where there are two ToF sensors inside SR-POE.json.
                Correct config below. When calibrating the SR-POE, make sure to use a physical board, not a monitor. Monitors have IR filters and will not reflect the light, so the charuco pattern will not be seen.

                {
                    "board_config":
                    {
                        "name": "OAK-D-SR-POE",
                        "revision": "R0M0E0",
                        "cameras":{
                            "CAM_B": {
                                "name": "left",
                                "hfov": 71.86,
                                "type": "color",
                                "extrinsics": {
                                    "to_cam": "CAM_C",
                                    "specTranslation": {
                                        "x": -2.0,
                                        "y": 0,
                                        "z": 0
                                    },
                                    "rotation":{
                                        "r": 0,
                                        "p": 0,
                                        "y": 0
                                    }
                                }
                            },
                            "CAM_C": {
                                "name": "right",
                                "hfov": 71.86,
                                "type": "color",
                                "extrinsics": {
                                    "to_cam": "CAM_A",
                                    "specTranslation": {
                                        "x": -1.7382,
                                        "y": 0,
                                        "z": 0
                                    },
                                    "rotation":{
                                        "r": 0,
                                        "p": 0,
                                        "y": 0
                                    }
                                }
                            },
                            "CAM_A": {
                                "name": "tof",
                                "hfov": 71.86,
                                "type": "tof"
                            }
                        },
                        "stereo_config":{
                            "left_cam": "CAM_B",
                            "right_cam": "CAM_C"
                        },
                        "imuExtrinsics":
                        {   
                            "sensors":{ 
                                "BNO": {
                                    "name" : "BNO086",
                                    "extrinsics": {
                                        "to_cam": "LEFT",
                                        "specTranslation": {
                                            "x": 0.2915,
                                            "y": -0.069,
                                            "z": 0.2832
                                            },
                                        "rotation":{
                                            "r": 180,
                                            "p": 0,
                                            "y": 270
                                            }
                                    }
                                }
                            }
                        }
                    }
                }

                  jakaskerl Update
                  I will check my OAK-D-SR-POE json file. I will edit this to confirm what I see.
                  Below is my previous OAK-SR-POE.json. I updated my json to match yours; the camera now turns on and doesn't throw an error. However, I do not have a physical board. If I use my computer screen to display the board, how would that impact my calibration? Again, one of my main goals is to use the camera to capture/generate point cloud data, displaying the clearest pcd possible.

                  Update: I ran the script as prompted and pressed the space bar to start capturing the images, but got the response below. The board is displayed on my computer monitor; it identifies the corners as expected but can't recognize the board. I am using the 24 inch 13x7 board, initiating the code with this command: {python calibrate.py -db charuco_24inch_13x7 -nx 13 -ny 7 -c 1 -cd 0 -s 4 -ms 3 -brd OAK-D-SR-POE}. Can I print out the board and try calibration?
                  "py: Saved image as: dataset\right\p0_0.png

                  Status of right is True

                  Time stamp of tof is 2 days, 20:36:45.198781

                  Markers count ... 0

                  Total markers needed -> 18

                  Status of tof is False

                  py: Capture failed, unable to find chessboard! Fix position and press spacebar again"

                  {
                      "board_config":
                      {
                          "name": "OAK-D-SR-POE",
                          "revision": "R0M0E0",
                          "cameras":{
                              "CAM_B": {
                                  "name": "left",
                                  "hfov": 71.86,
                                  "type": "tof",
                                  "extrinsics": {
                                      "to_cam": "CAM_C",
                                      "specTranslation": {
                                          "x": -2.0,
                                          "y": 0,
                                          "z": 0
                                      },
                                      "rotation":{
                                          "r": 0,
                                          "p": 0,
                                          "y": 0
                                      }
                                  }
                              },
                              "CAM_C": {
                                  "name": "right",
                                  "hfov": 71.86,
                                  "type": "color",
                                  "extrinsics": {
                                      "to_cam": "CAM_A",
                                      "specTranslation": {
                                          "x": -1.7382,
                                          "y": 0,
                                          "z": 0
                                      },
                                      "rotation":{
                                          "r": 0,
                                          "p": 0,
                                          "y": 0
                                      }
                                  }
                              },
                              "CAM_A": {
                                  "name": "rgb",
                                  "hfov": 71.86,
                                  "type": "tof"
                              }
                          },
                          "stereo_config":{
                              "left_cam": "CAM_B",
                              "right_cam": "CAM_C"
                          }
                      }
                  }

                  I am testing it out right now. My camera is capturing 13 images instead of 39. Is that okay?
                  From the 13 images, it captured all 13 and did exactly what the example video did. Here are my results:
                  Using dataset path: dataset

                  Starting image processing

                  <------------Calibrating left ------------>

                  INTRINSIC CALIBRATION

                  Reprojection error of left: 0.8412142644266325

                  <------------Calibrating right ------------>

                  INTRINSIC CALIBRATION

                  Reprojection error of right: 0.8775394613415113

                  <------------Calibrating tof ------------>

                  INTRINSIC CALIBRATION

                  Reprojection error of tof: 0.5492520139036353

                  <-------------Extrinsics calibration of left and right ------------>

                  Reprojection error is 0.8740851772773527

                  <-------------Epipolar error of left and right ------------>

                  Original intrinsics ....

                  L [[842.68376755 0. 673.13407279]

                  [ 0. 851.86743982 412.48182057]

                  [ 0. 0. 1. ]]

                  R: [[836.24616608 0. 656.428313 ]

                  [ 0. 845.62659357 439.05911096]

                  [ 0. 0. 1. ]]

                  Intrinsics from the getOptimalNewCameraMatrix/Original ....

                  L: [[836.24616608 0. 656.428313 ]

                  [ 0. 845.62659357 439.05911096]

                  [ 0. 0. 1. ]]

                  R: [[836.24616608 0. 656.428313 ]

                  [ 0. 845.62659357 439.05911096]

                  [ 0. 0. 1. ]]

                  Average Epipolar Error is : 0.20598603925134382

                  Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.

                  <-------------Extrinsics calibration of right and tof ------------>

                  Reprojection error is 1.1929608204958633

                  <-------------Epipolar error of right and tof ------------>

                  Original intrinsics ....

                  L [[418.12308304 0. 328.2141565 ]

                  [ 0. 422.81329678 219.52955548]

                  [ 0. 0. 1. ]]

                  R: [[494.35192765 0. 321.84779556]

                  [ 0. 499.48351759 218.30442303]

                  [ 0. 0. 1. ]]

                  Intrinsics from the getOptimalNewCameraMatrix/Original ....

                  L: [[494.35192765 0. 321.84779556]

                  [ 0. 499.48351759 218.30442303]

                  [ 0. 0. 1. ]]

                  R: [[494.35192765 0. 321.84779556]

                  [ 0. 499.48351759 218.30442303]

                  S:test_run\depthai\calibrate.py:1066: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)

                  calibration_handler.setDistortionCoefficients(stringToCam[camera], cam_info['dist_coeff'])

                  S:test_run\depthai\calibrate.py:1105: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)

                  calibration_handler.setCameraExtrinsics(stringToCam[camera], stringToCam[cam_info['extrinsics']['to_cam']], cam_info['extrinsics']['rotation_matrix'], cam_info['extrinsics']['translation'], specTranslation)

                  Reprojection error threshold -> 1.1111111111111112

                  right Reprojection Error: 0.877539

                  Reprojection error threshold -> 1.0

                  tof Reprojection Error: 0.549252

                  Flashing Calibration data into

                  EEPROM VERSION being flashed is -> 7

                  EEPROM VERSION being flashed is -> 7

                  This screen was prompted at the end.

                  I printed the board out and fixed it to a flat surface, so as of right now I would assume my question about the effect of using a printed version of the board is answered. Correct me if I am wrong.

                  If the steps that I completed are correct, then I am able to move forward with point cloud configurations, correct? Now that I have the camera calibrated, I can add the correct extrinsic values to my code?

                    jakaskerl
                    Thank you for your help. Now that I am working with point clouds, calibration has helped and the values are correct. I have created my own script that uses the ToF and color camera nodes; I am using the ToF to capture depth and the color camera to display the pcd in color. Here are some of the things I have in my code (everything is in the order it appears in my code):

                    # Create ToF node
                    tof = pipeline.create(dai.node.ToF)
                    tofConfig = tof.initialConfig.get()
                    tofConfig.enableOpticalCorrection = True
                    tofConfig.enablePhaseShuffleTemporalFilter = True
                    tofConfig.phaseUnwrappingLevel = 5
                    tofConfig.phaseUnwrapErrorThreshold = 300
                    tof.initialConfig.set(tofConfig)

                    # Camera intrinsic parameters (ensure I am using the correct calibration values)
                    fx = 494.35192765  # Update with my calibrated value
                    fy = 499.48351759  # Update with my calibrated value
                    cx = 321.84779556  # Update with my calibrated value
                    cy = 218.30442303  # Update with my calibrated value
                    intrinsic = o3d.camera.PinholeCameraIntrinsic(width=640, height=480, fx=fx, fy=fy, cx=cx, cy=cy)
                    I am using this functionality:
                    # Convert depth image to Open3D format
                    depth_o3d = o3d.geometry.Image(depth_map)
                    color_o3d = o3d.geometry.Image(cv2.cvtColor(color_frame_resized, cv2.COLOR_BGR2RGB))

                    # Generate and save colored point cloud
                    rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
                        color_o3d, depth_o3d, depth_scale=1000.0, depth_trunc=3.0, convert_rgb_to_intensity=False
                    )
                    color_pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd_image, intrinsic)
                    color_pcd_filename = os.path.join(output_directory, f'color_pcd_{capture_count}.pcd')
                    o3d.io.write_point_cloud(color_pcd_filename, color_pcd)
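                    A hedged alternative to hardcoding fx, fy, cx, cy: read the values that calibration flashed to the device (this assumes the ToF sits on CAM_A, as in the board config above):

                    import depthai as dai

                    with dai.Device() as device:
                        calib = device.readCalibration()
                        # Intrinsic matrix scaled to the 640x480 ToF resolution
                        M = calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_A, 640, 480)
                        fx, fy = M[0][0], M[1][1]
                        cx, cy = M[0][2], M[1][2]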

                    I capture the images and save them to a file path. Then I run a Python script that uses open3d; it loads the point cloud from that file path and visualizes it. My question is still about the sparseness: I have tried downsampling, ICP and global registration, changing the voxel size, and much more, but I am still getting unnecessary noise when I am viewing the pcd. Yes, it looks better with calibration, but I am a tad lost on why it is still doing this:

                    Anything before the red should not be there. It seems to be displaying the camera position and just distributing the points. Does the FOV also have an effect on this? In the second picture, anything after the red line should not be there. The points are trying to piece everything together but seem to be having an issue with it.
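                    As a hedged sketch of one more thing to try on the saved cloud (statistical outlier removal, a technique not among the steps listed above; the path is hypothetical):

                    import open3d as o3d

                    pcd = o3d.io.read_point_cloud(r'color_pcd_1.pcd')  # hypothetical path
                    # Drop points whose mean distance to their 20 nearest neighbors is
                    # more than 2 standard deviations above the global average.
                    filtered, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
                    o3d.visualization.draw_geometries([filtered], window_name="Filtered ToF Point Cloud")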


                      jakaskerl
                      The baseline for the camera is 20 mm and the ToF range is 20 cm - 5 m. How does this affect the AOC and FOV? Do the FOV and AOC affect the point cloud and pcd distribution? If I could account for them within the code that captures the pcd, would that potentially help?

                      gdeanrexroth
                      I suggest you modify the ToF config according to the docs. The phase unwrapping level, for example, seems a bit high and will introduce a bunch of unnecessary noise.

                      I can't say for the host-side o3d drawing; could you use the same approach as this example:
                      luxonis/depthai-experiments/blob/master/gen2-pointcloud/rgbd-pointcloud/main.py

                      or create pointcloud on device and use:
                      https://docs.luxonis.com/software/depthai/examples/pointcloud_visualization/

                      Thanks,
                      Jaka
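                      A hedged sketch of what lowering the unwrapping level could look like, reusing the ToF node setup from the snippet earlier in the thread (the exact values are assumptions, not Luxonis recommendations):

                      import depthai as dai

                      pipeline = dai.Pipeline()
                      tof = pipeline.create(dai.node.ToF)
                      tofConfig = tof.initialConfig.get()
                      tofConfig.enableOpticalCorrection = True
                      tofConfig.enablePhaseShuffleTemporalFilter = True
                      tofConfig.phaseUnwrappingLevel = 0  # was 5; a short range needs little unwrapping
                      tofConfig.phaseUnwrapErrorThreshold = 300
                      tof.initialConfig.set(tofConfig)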

                        jakaskerl
                        1. I have tried changing the value based on the distance between the camera and the object. I am seeing what gives me the best result.
                        2. The main.py method/usage of open3d is a good reference, but the script I have to visualize the pcd is this:

                        import open3d as o3d

                        # Path to the .pcd file
                        pcd_file_path = r'S:gcolor_pcd_1.pcd'
                        # Load the point cloud
                        pcd = o3d.io.read_point_cloud(pcd_file_path)
                        # Visualize the point cloud
                        o3d.visualization.draw_geometries([pcd], window_name="ToF Point Cloud")

                        3. The on-device point cloud method from that link does not work. I believe you sent me a link to an updated version of the script and it worked. I have modified the script a few times to test, but this error:
                        inMessage = q.get()
                        always pops up in my terminal.

                        Could it possibly be the way I am utilizing open3d? I have tried methods for visualization which stitch two point clouds into one, but I am still left with unnecessary noise. Adjusting the depth_scale and depth_trunc does somewhat change the output of my pcd whenever I visualize it. The snippet below is part of my code; I am using open3d here to convert the depth from the ToF sensor and the color camera. Currently the object is roughly 170 cm from the camera. For testing purposes I specifically want to capture only that distance, but I am still getting some noise in the top view and unnecessary noise in the side views:
                        # Convert depth image to Open3D format
                        depth_o3d = o3d.geometry.Image(depth_map)
                        color_o3d = o3d.geometry.Image(cv2.cvtColor(color_frame_resized, cv2.COLOR_BGR2RGB))

                        # Generate and save colored point cloud
                        rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
                            color_o3d, depth_o3d, depth_scale=1700.0, depth_trunc=3.7, convert_rgb_to_intensity=False
                        )
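                        One possible source of the leftover points: in Open3D, raw depth is divided by depth_scale to get meters, and depth_trunc then cuts in meters. DepthAI depth frames are in millimeters, so a hedged variant that keeps only the ~1.7 m working distance would be:

                        rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
                            color_o3d, depth_o3d,
                            depth_scale=1000.0,  # raw depth in mm -> meters
                            depth_trunc=1.8,     # drop points beyond ~1.8 m (object sits at ~1.7 m)
                            convert_rgb_to_intensity=False
                        )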

                        jakaskerl
                        The pcd that I captured using the ToF sensor and color camera was tested another way: I placed the captured pcd into an online website that converts it to a point cloud, and I noticed that it displayed the same as in my code.
                        I will look at the ToF configuration again along with the point cloud code. I am somewhat close to figuring this out, but I am still lost in some areas. Here is the screenshot of the website pcd.

                        The json below is the saved depth information of the captured pcd from my script.
                        "
                        {

                        **"class_name" : "PinholeCameraParameters",**
                        
                        **"extrinsic" :** 
                        
                        **[**
                        
                        	**0.97137093405355701,**
                        
                        	**0.080079400112124194,**
                        
                        	**0.22366447673603115,**
                        
                        	**0.0,**
                        
                        	**-0.17295492005240018,**
                        
                        	**0.88380774499819115,**
                        
                        	**0.43470733316897259,**
                        
                        	**0.0,**
                        
                        	**-0.16286529435575947,**
                        
                        	**-0.46094593995271788,**
                        
                        	**0.87235528102689741,**
                        
                        	**0.0,**
                        
                        	**-0.019386541654994344,**
                        
                        	**-0.24191574413614514,**
                        
                        	**1.0362066902315488,**
                        
                        	**1.0**
                        
                        **],**
                        
                        **"intrinsic" :** 
                        
                        **{**
                        
                        	**"height" : 1009,**
                        
                        	**"intrinsic_matrix" :** 
                        
                        	**[**
                        
                        		**873.8196324184986,**
                        
                        		**0.0,**
                        
                        		**0.0,**
                        
                        		**0.0,**
                        
                        		**873.8196324184986,**
                        
                        		**0.0,**
                        
                        		**959.5,**
                        
                        		**504.0,**
                        
                        		**1.0**
                        
                        	**],**
                        
                        	**"width" : 1920**
                        
                        **},**
                        
                        **"version_major" : 1,**
                        
                        **"version_minor" : 0**

                        }
                        "

                        jakaskerl
                        The object is 167 cm away from the front of the camera. Do I apply this to the phaseUnwrappingLevel? Would mine be set to 0, since my distance is less than 1.87 meters? I have tried that, but it still doesn't remove the unnecessary noise. I have looked at most of the links you have recommended and modified my code accordingly. Calibrating the camera and getting both the intrinsic and extrinsic values helped out a lot, as did setting the ToF configuration values based on the Luxonis documentation. Simple modifications based on the documentation have helped out a lot, but the one issue I keep running into is the unnecessary noise.

                        Could my background (the reflective ceiling lights, the glossy floor and cabinets, etc.) be the cause?
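                        If the background is the cause, a hedged way to test is to crop the cloud to a box around the ~1.67 m working volume and see whether the stray points disappear (the bounds below are hypothetical, in meters):

                        import numpy as np
                        import open3d as o3d

                        pcd = o3d.io.read_point_cloud(r'color_pcd_1.pcd')  # hypothetical path
                        box = o3d.geometry.AxisAlignedBoundingBox(
                            min_bound=np.array([-1.0, -1.0, 0.2]),
                            max_bound=np.array([1.0, 1.0, 1.8]),
                        )
                        cropped = pcd.crop(box)
                        o3d.visualization.draw_geometries([cropped], window_name="Cropped ToF Point Cloud")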