• DepthAI
  • How to send out a 32-bit ToF depth map through Spout

Dear Luxonis Group,

I am trying to send out a depth map from my ToF sensor through Spout in 16 or 32 bit
and would like to convert the depth map stream into a point cloud in TouchDesigner.

Somehow, changing the uint8 to 32 bit or something similar kills my code.
Can anybody have a look and maybe tell me at which code lines I could make the change to send out a 32-bit Spout stream?

Link to my code

Thank you very much for your help 🙂

    BonkoKaradjov Apparently Spout typically expects 8-bit RGBA textures, so pushing 32-bit values through will not work.

    If you are using a specific framework that is capable of 32 bit - FROM GPT:


    What changes are needed

    To output a 16-bit or 32-bit depth map (instead of an 8-bit colorized image), you need to remove or bypass the colorizeDepth step and ensure that your Syphon texture is created/handled with a high-bit-depth format. Below is a conceptual outline:

    1. Remove the colorizeDepth call:

         # alignedDepth = colorizeDepth(alignedDepth)
    2. Convert your raw depth data (in millimeters) to float:

      alignedDepth_32 = alignedDepth.astype(np.float32)

      Now you have a single-channel float array with shape (height, width).

    3. Expand to 4 channels (RGBA) if your Syphon/Metal functions require it:

      h, w = alignedDepth_32.shape
      alignedDepth_32_rgba = np.zeros((h, w, 4), dtype=np.float32)
      alignedDepth_32_rgba[..., 0] = alignedDepth_32  # R
      alignedDepth_32_rgba[..., 1] = alignedDepth_32  # G
      alignedDepth_32_rgba[..., 2] = alignedDepth_32  # B
      alignedDepth_32_rgba[..., 3] = 1.0             # A
    4. Create the Metal texture in a float format if possible. For example:

      texture_tof = create_mtl_texture(
          syphon_server_tof.device,
          640,
          480,
          pixel_format="RGBA32Float"  # or a similar float-friendly format
      )

      (Exact format strings may vary based on your Syphon library.)

    5. Copy the float data to that texture:

      copy_image_to_mtl_texture(alignedDepth_32_rgba, texture_tof)
    6. Publish via Syphon:

      syphon_server_tof.publish_frame_texture(texture_tof)

    This way, you’re sending a raw float texture rather than an 8-bit color map, which allows TouchDesigner (or another application) to interpret the depth data for building a point cloud.
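
    Putting the six steps together, a minimal sketch of the publishing path could look like the snippet below. It reuses the helper names already shown above (create_mtl_texture, copy_image_to_mtl_texture, syphon_server_tof) and assumes your Syphon binding accepts a float pixel format such as "RGBA32Float"; both points are assumptions that may need adjusting for your library version.

      import numpy as np

      # Assumed helpers from the snippets above (Syphon/Metal binding specific):
      #   create_mtl_texture(device, width, height, pixel_format=...)
      #   copy_image_to_mtl_texture(array, texture)
      #   syphon_server_tof.publish_frame_texture(texture)

      # Create the float texture once (the format string is an assumption, see step 4)
      texture_tof = create_mtl_texture(
          syphon_server_tof.device, 640, 480, pixel_format="RGBA32Float"
      )

      def publish_depth_float(alignedDepth):
          """Publish a raw depth frame (uint16, millimeters) as a float RGBA texture."""
          depth_f32 = alignedDepth.astype(np.float32)    # step 2: raw depth, no colorization
          h, w = depth_f32.shape
          rgba = np.zeros((h, w, 4), dtype=np.float32)   # step 3: expand to RGBA
          rgba[..., 0] = depth_f32                       # R carries depth in mm
          rgba[..., 1] = depth_f32                       # G (duplicate)
          rgba[..., 2] = depth_f32                       # B (duplicate)
          rgba[..., 3] = 1.0                             # opaque alpha
          copy_image_to_mtl_texture(rgba, texture_tof)   # step 5: copy float data
          syphon_server_tof.publish_frame_texture(texture_tof)  # step 6: publish

    If the float format survives the Syphon transfer, the R channel received in TouchDesigner should hold depth in millimeters, which can then be remapped into a point cloud.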


    Thanks,
    Jaka

    Dear @jakaskerl ,

    thank you very much for your help.
    Mostly I also use GPT to look up some answers; sometimes the code gets confused because of libraries GPT makes up 😃

    In this case, I could somehow achieve an in-between send, but this is my outcome,
    and I decided to go through all possible Metal color spaces:
    Sadly, my stream does not seem to be 32 bit and gives out some interesting video streams, and I
    don't know which one could be the right one to create a point cloud in TD or if I am doing something else wrong.

    Link to a video preview switching through all possible color spaces from the MTLPixelFormat website

    (( Website with MTLPixelFormats ))

    Link to the code I worked on

    Thank you again very much for your help 🙂

      BonkoKaradjov
      Was this device calibrated? In the video, it looks like you are manually setting the offset. Could you send the calibration dump so we can check?
      Also the code you sent looks different from the one used to create the video.

      Thanks,
      Jaka

      Dear @jakaskerl ,

      I had sent my 1MP ToF sensor to Luxonis and paid an extra 300 € with the goal of getting the 12MP camera aligned to the ToF sensor and sending out, as in my last post, a (minimum) 1352x1014 RGB stream plus a (minimum) 640x480 ToF stream via NDI or Syphon.
      If I got it right, Luxonis answered that the sensor should have been calibrated.
      Therefore, because my production is already running, I am scared of doing something wrong if I run the calibration script from the Luxonis website.

      My hope was to get as perfect an alignment as possible between both cameras (when a person gets near, left, right, and far from the cam, etc.) with minimum latency via PoE.
      Everything except the alignment (and the point cloud data for TouchDesigner) I could get with these two scripts, which I worked on for a long time because I am slow at coding:

      I really tried to solve the alignment problem with so many scripts and changes, and I also exchanged emails with Erik, who really tried to help me.
      I have maybe 30-40 tries and scripts that I modified.

      The following links show videos where I used the original scripts you asked me to implement, without the custom transform:

      ToF-Alignment Script Video from Luxonis Website (original)
      Link to Script

      ToF-Alignment Script Video from Luxonis Website (rotated 180 degrees after alignment)
      Link to Script

      ToF-Alignment Script sent from Luxonis by Mail (original)
      Link to Script
      - one unknown error: often freezes after a few seconds, for 30-60 seconds

      ToF-Alignment Script sent from Luxonis by Mail (rotated 180 degrees after alignment)
      Link to Script
      - one unknown error: often freezes after a few seconds, for 30-60 seconds

      I will try for 3 or 4 more days to find at least a workaround for aligning the cameras in TouchDesigner and somehow sending out the NDI/Syphon point cloud stream data to create a point cloud in TD. If I cannot use the sensor for the production, I hope Luxonis can give me a refund for the sensor and the custom build so that I can switch to my first attempt with the Orbbec Femto Mega, because my skills seem to be too bad to work with Luxonis sensors at the moment. It was quite fun to start understanding them and to have small successes, but it feels wrong to pay 750 € for the OAK ToF 12MP and work almost 30 days on trying to get the alignment right.

      Thank you very much for your help, and sorry that I am so complicated; it is just my lack of knowledge in coding with DepthAI and OpenCV etc. I am disappointed in myself that I will probably not achieve, with this sensor, the simple goal I could reach much more easily with a Kinect or Femto Mega, and that I invested so much money, almost all the money I will get for the art project I need this sensor for.

      @jakaskerl And here the calibration dump 🙂

      ....main/examples/calibration/calibration_dump.py

      Is EEPROM available: True
      User calibration: {
        "batchName": "",
        "batchTime": 1724403929,
        "boardConf": "IR-C80M00-00",
        "boardCustom": "",
        "boardName": "EL2086",
        "boardOptions": 6401,
        "boardRev": "R3M2E3",
        "cameraData": [
          [
            2,
            {
              "cameraType": 0,
              "distortionCoeff": [
                -1.906318187713623,
                2.000298500061035,
                -0.002747641410678625,
                -0.0008416209020651877,
                -0.512755811214447,
                -2.004528284072876,
                2.276949644088745,
                -0.7380069494247437,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0
              ],
              "extrinsics": {
                "rotationMatrix": [
                  [
                    0.9999335408210754,
                    -0.008848309516906738,
                    0.007388767320662737
                  ],
                  [
                    0.008809356018900871,
                    0.999947190284729,
                    0.00528801279142499
                  ],
                  [
                    -0.007435167208313942,
                    -0.0052225710824131966,
                    0.9999586939811707
                  ]
                ],
                "specTranslation": {
                  "x": 1.7381999492645264,
                  "y": -0.0,
                  "z": -0.0
                },
                "toCameraSocket": 0,
                "translation": {
                  "x": 1.9038344621658325,
                  "y": 0.16774022579193115,
                  "z": 0.18684668838977814
                }
              },
              "height": 3040,
              "intrinsicMatrix": [
                [
                  3114.211669921875,
                  0.0,
                  2067.387939453125
                ],
                [
                  0.0,
                  3120.09423828125,
                  1523.5755615234375
                ],
                [
                  0.0,
                  0.0,
                  1.0
                ]
              ],
              "lensPosition": 0,
              "specHfovDeg": 71.86000061035156,
              "width": 4056
            }
          ],
          [
            1,
            {
              "cameraType": 0,
              "distortionCoeff": [
                22.9776611328125,
                -35.2191276550293,
                -0.0008518987451680005,
                -0.0005938760004937649,
                13.29187297821045,
                22.328569412231445,
                -33.41155242919922,
                11.978973388671875,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0
              ],
              "extrinsics": {
                "rotationMatrix": [
                  [
                    0.999860405921936,
                    0.007616722956299782,
                    -0.014872560277581215
                  ],
                  [
                    -0.007512797601521015,
                    0.9999470710754395,
                    0.007031152956187725
                  ],
                  [
                    0.014925327152013779,
                    -0.006918436847627163,
                    0.9998646974563599
                  ]
                ],
                "specTranslation": {
                  "x": 2.0,
                  "y": -0.0,
                  "z": -0.0
                },
                "toCameraSocket": 2,
                "translation": {
                  "x": 1.988970160484314,
                  "y": -0.0013319612480700016,
                  "z": 0.0029759984463453293
                }
              },
              "height": 800,
              "intrinsicMatrix": [
                [
                  801.0270385742188,
                  0.0,
                  609.644287109375
                ],
                [
                  0.0,
                  802.9794921875,
                  392.1715393066406
                ],
                [
                  0.0,
                  0.0,
                  1.0
                ]
              ],
              "lensPosition": 0,
              "specHfovDeg": 71.86000061035156,
              "width": 1280
            }
          ],
          [
            0,
            {
              "cameraType": 0,
              "distortionCoeff": [
                -1.9164514541625977,
                -29.196409225463867,
                -0.0034538470208644867,
                -0.0005991361103951931,
                87.63078308105469,
                -1.9397814273834229,
                -29.01816177368164,
                87.28975677490234,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0
              ],
              "extrinsics": {
                "rotationMatrix": [
                  [
                    0.0,
                    0.0,
                    0.0
                  ],
                  [
                    0.0,
                    0.0,
                    0.0
                  ],
                  [
                    0.0,
                    0.0,
                    0.0
                  ]
                ],
                "specTranslation": {
                  "x": -0.0,
                  "y": -0.0,
                  "z": -0.0
                },
                "toCameraSocket": -1,
                "translation": {
                  "x": 0.0,
                  "y": 0.0,
                  "z": 0.0
                }
              },
              "height": 480,
              "intrinsicMatrix": [
                [
                  473.2501525878906,
                  0.0,
                  319.0169982910156
                ],
                [
                  0.0,
                  473.8380432128906,
                  228.83583068847656
                ],
                [
                  0.0,
                  0.0,
                  1.0
                ]
              ],
              "lensPosition": 0,
              "specHfovDeg": 71.86000061035156,
              "width": 640
            }
          ]
        ],
        "deviceName": "",
        "hardwareConf": "F1-FV00-BC000",
        "housingExtrinsics": {
          "rotationMatrix": [
            [
              0.9998946189880371,
              0.0140096265822649,
              0.00380001706071198
            ],
            [
              -0.014019245281815529,
              0.9998985528945923,
              0.0025167076382786036
            ],
            [
              -0.0037643732503056526,
              -0.0025697157252579927,
              0.9999896287918091
            ]
          ],
          "specTranslation": {
            "x": 0.0,
            "y": 0.0,
            "z": 0.0
          },
          "toCameraSocket": 0,
          "translation": {
            "x": 0.0,
            "y": 0.0,
            "z": 0.0
          }
        },
        "imuExtrinsics": {
          "rotationMatrix": [
            [
              0.0,
              0.0,
              0.0
            ],
            [
              0.0,
              0.0,
              0.0
            ],
            [
              0.0,
              0.0,
              0.0
            ]
          ],
          "specTranslation": {
            "x": 0.0,
            "y": 0.0,
            "z": 0.0
          },
          "toCameraSocket": -1,
          "translation": {
            "x": 0.0,
            "y": 0.0,
            "z": 0.0
          }
        },
        "miscellaneousData": [],
        "productName": "FFC-4P",
        "stereoEnableDistortionCorrection": true,
        "stereoRectificationData": {
          "leftCameraSocket": 1,
          "rectifiedRotationLeft": [
            [
              0.9998863935470581,
              0.006936723832041025,
              -0.013381202705204487
            ],
            [
              -0.006890007760375738,
              0.999970018863678,
              0.003534123534336686
            ],
            [
              0.01340531650930643,
              -0.0034415253903716803,
              0.999904215335846
            ]
          ],
          "rectifiedRotationRight": [
            [
              0.9999986290931702,
              -0.0006696729105897248,
              0.0014962489949539304
            ],
            [
              0.0006748876767233014,
              0.9999936819076538,
              -0.0034874198026955128
            ],
            [
              -0.0014939040411263704,
              0.0034884249325841665,
              0.9999927878379822
            ]
          ],
          "rightCameraSocket": 2
        },
        "stereoUseSpecTranslation": false,
        "version": 7,
        "verticalCameraSocket": -1
      }
      Factory calibration: {
        "batchName": "",
        "batchTime": 1724403929,
        "boardConf": "IR-C80M00-00",
        "boardCustom": "",
        "boardName": "EL2086",
        "boardOptions": 6401,
        "boardRev": "R3M2E3",
        "cameraData": [
          [
            2,
            {
              "cameraType": 0,
              "distortionCoeff": [
                -1.906318187713623,
                2.000298500061035,
                -0.002747641410678625,
                -0.0008416209020651877,
                -0.512755811214447,
                -2.004528284072876,
                2.276949644088745,
                -0.7380069494247437,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0
              ],
              "extrinsics": {
                "rotationMatrix": [
                  [
                    0.9999335408210754,
                    -0.008848309516906738,
                    0.007388767320662737
                  ],
                  [
                    0.008809356018900871,
                    0.999947190284729,
                    0.00528801279142499
                  ],
                  [
                    -0.007435167208313942,
                    -0.0052225710824131966,
                    0.9999586939811707
                  ]
                ],
                "specTranslation": {
                  "x": 1.7381999492645264,
                  "y": -0.0,
                  "z": -0.0
                },
                "toCameraSocket": 0,
                "translation": {
                  "x": 1.9038344621658325,
                  "y": 0.16774022579193115,
                  "z": 0.18684668838977814
                }
              },
              "height": 3040,
              "intrinsicMatrix": [
                [
                  3114.211669921875,
                  0.0,
                  2067.387939453125
                ],
                [
                  0.0,
                  3120.09423828125,
                  1523.5755615234375
                ],
                [
                  0.0,
                  0.0,
                  1.0
                ]
              ],
              "lensPosition": 0,
              "specHfovDeg": 71.86000061035156,
              "width": 4056
            }
          ],
          [
            1,
            {
              "cameraType": 0,
              "distortionCoeff": [
                22.9776611328125,
                -35.2191276550293,
                -0.0008518987451680005,
                -0.0005938760004937649,
                13.29187297821045,
                22.328569412231445,
                -33.41155242919922,
                11.978973388671875,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0
              ],
              "extrinsics": {
                "rotationMatrix": [
                  [
                    0.999860405921936,
                    0.007616722956299782,
                    -0.014872560277581215
                  ],
                  [
                    -0.007512797601521015,
                    0.9999470710754395,
                    0.007031152956187725
                  ],
                  [
                    0.014925327152013779,
                    -0.006918436847627163,
                    0.9998646974563599
                  ]
                ],
                "specTranslation": {
                  "x": 2.0,
                  "y": -0.0,
                  "z": -0.0
                },
                "toCameraSocket": 2,
                "translation": {
                  "x": 1.988970160484314,
                  "y": -0.0013319612480700016,
                  "z": 0.0029759984463453293
                }
              },
              "height": 800,
              "intrinsicMatrix": [
                [
                  801.0270385742188,
                  0.0,
                  609.644287109375
                ],
                [
                  0.0,
                  802.9794921875,
                  392.1715393066406
                ],
                [
                  0.0,
                  0.0,
                  1.0
                ]
              ],
              "lensPosition": 0,
              "specHfovDeg": 71.86000061035156,
              "width": 1280
            }
          ],
          [
            0,
            {
              "cameraType": 0,
              "distortionCoeff": [
                -1.9164514541625977,
                -29.196409225463867,
                -0.0034538470208644867,
                -0.0005991361103951931,
                87.63078308105469,
                -1.9397814273834229,
                -29.01816177368164,
                87.28975677490234,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0,
                0.0
              ],
              "extrinsics": {
                "rotationMatrix": [
                  [
                    0.0,
                    0.0,
                    0.0
                  ],
                  [
                    0.0,
                    0.0,
                    0.0
                  ],
                  [
                    0.0,
                    0.0,
                    0.0
                  ]
                ],
                "specTranslation": {
                  "x": -0.0,
                  "y": -0.0,
                  "z": -0.0
                },
                "toCameraSocket": -1,
                "translation": {
                  "x": 0.0,
                  "y": 0.0,
                  "z": 0.0
                }
              },
              "height": 480,
              "intrinsicMatrix": [
                [
                  473.2501525878906,
                  0.0,
                  319.0169982910156
                ],
                [
                  0.0,
                  473.8380432128906,
                  228.83583068847656
                ],
                [
                  0.0,
                  0.0,
                  1.0
                ]
              ],
              "lensPosition": 0,
              "specHfovDeg": 71.86000061035156,
              "width": 640
            }
          ]
        ],
        "deviceName": "",
        "hardwareConf": "F1-FV00-BC000",
        "housingExtrinsics": {
          "rotationMatrix": [
            [
              0.9998946189880371,
              0.0140096265822649,
              0.00380001706071198
            ],
            [
              -0.014019245281815529,
              0.9998985528945923,
              0.0025167076382786036
            ],
            [
              -0.0037643732503056526,
              -0.0025697157252579927,
              0.9999896287918091
            ]
          ],
          "specTranslation": {
            "x": 0.0,
            "y": 0.0,
            "z": 0.0
          },
          "toCameraSocket": 0,
          "translation": {
            "x": 0.0,
            "y": 0.0,
            "z": 0.0
          }
        },
        "imuExtrinsics": {
          "rotationMatrix": [
            [
              0.0,
              0.0,
              0.0
            ],
            [
              0.0,
              0.0,
              0.0
            ],
            [
              0.0,
              0.0,
              0.0
            ]
          ],
          "specTranslation": {
            "x": 0.0,
            "y": 0.0,
            "z": 0.0
          },
          "toCameraSocket": -1,
          "translation": {
            "x": 0.0,
            "y": 0.0,
            "z": 0.0
          }
        },
        "miscellaneousData": [],
        "productName": "FFC-4P",
        "stereoEnableDistortionCorrection": true,
        "stereoRectificationData": {
          "leftCameraSocket": 1,
          "rectifiedRotationLeft": [
            [
              0.9998863935470581,
              0.006936723832041025,
              -0.013381202705204487
            ],
            [
              -0.006890007760375738,
              0.999970018863678,
              0.003534123534336686
            ],
            [
              0.01340531650930643,
              -0.0034415253903716803,
              0.999904215335846
            ]
          ],
          "rectifiedRotationRight": [
            [
              0.9999986290931702,
              -0.0006696729105897248,
              0.0014962489949539304
            ],
            [
              0.0006748876767233014,
              0.9999936819076538,
              -0.0034874198026955128
            ],
            [
              -0.0014939040411263704,
              0.0034884249325841665,
              0.9999927878379822
            ]
          ],
          "rightCameraSocket": 2
        },
        "stereoUseSpecTranslation": false,
        "version": 7,
        "verticalCameraSocket": -1
      }
      User calibration raw: [7, 0, 170, 85, 3, 0, 0, 0, 0, 69, 76, 50, 48, 56, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 82, 51, 77, 50, 69, 51, 0, 0, 0, 0, 142, 248, 127, 63, 117, 77, 227, 59, 213, 60, 91, 188, 147, 197, 225, 187, 9, 254, 127, 63, 193, 156, 103, 59, 249, 161, 91, 60, 55, 139, 97, 187, 185, 249, 127, 63, 233, 255, 127, 63, 253, 140, 47, 186, 201, 29, 196, 58, 242, 234, 48, 58, 150, 255, 127, 63, 50, 141, 100, 187, 26, 207, 195, 186, 15, 158, 100, 59, 135, 255, 127, 63, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 32, 3, 0, 5, 0, 187, 65, 72, 68, 0, 0, 0, 0, 60, 105, 24, 68, 0, 0, 0, 0, 176, 190, 72, 68, 245, 21, 196, 67, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 63, 64, 210, 183, 65, 99, 224, 12, 194, 245, 81, 95, 186, 88, 174, 27, 186, 131, 171, 84, 65, 233, 160, 178, 65, 110, 165, 5, 194, 224, 169, 63, 65, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 82, 184, 143, 66, 0, 1, 218, 246, 127, 63, 180, 149, 249, 59, 10, 172, 115, 188, 234, 45, 246, 187, 136, 252, 127, 63, 150, 101, 230, 59, 92, 137, 116, 60, 14, 180, 226, 187, 34, 247, 127, 63, 147, 150, 254, 63, 52, 149, 174, 186, 248, 8, 67, 59, 0, 0, 0, 192, 0, 0, 0, 0, 0, 0, 0, 0, 2, 224, 1, 128, 2, 0, 5, 160, 236, 67, 0, 0, 0, 0, 45, 130, 159, 67, 0, 0, 0, 0, 69, 235, 236, 67, 249, 213, 100, 67, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 63, 72, 78, 245, 191, 63, 146, 233, 193, 240, 89, 98, 187, 88, 15, 29, 186, 246, 66, 175, 66, 194, 74, 248, 191, 50, 37, 232, 193, 91, 148, 174, 66, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 82, 184, 143, 66, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 224, 11, 216, 15, 0, 99, 163, 66, 69, 0, 0, 0, 0, 53, 54, 1, 69, 0, 0, 0, 0, 130, 1, 67, 69, 107, 114, 190, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 63, 60, 2, 244, 191, 228, 4, 0, 64, 198, 17, 52, 187, 57, 160, 92, 186, 247, 67, 3, 191, 49, 74, 0, 192, 139, 185, 17, 64, 6, 238, 60, 191, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 82, 184, 143, 66, 0, 2, 165, 251, 127, 63, 128, 248, 16, 188, 121, 29, 242, 59, 30, 85, 16, 60, 138, 252, 127, 63, 17, 71, 173, 59, 180, 162, 243, 187, 26, 34, 171, 187, 75, 253, 127, 63, 217, 176, 243, 63, 24, 196, 43, 62, 189, 84, 63, 62, 86, 125, 222, 191, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 70, 70, 67, 45, 52, 80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 73, 82, 45, 67, 56, 48, 77, 48, 48, 45, 48, 48, 0, 0, 0, 0, 70, 49, 45, 70, 86, 48, 48, 45, 66, 67, 48, 48, 48, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 217, 80, 200, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 249, 127, 63, 162, 136, 101, 60, 181, 9, 121, 59, 250, 176, 101, 188, 90, 249, 127, 63, 89, 239, 36, 59, 180, 179, 118, 187, 173, 104, 40, 187, 82, 255, 127, 63, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 3, 255]
      Factory calibration raw: [7, 0, 170, 85, 3, 0, 0, 0, 0, 69, 76, 50, 48, 56, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 82, 51, 77, 50, 69, 51, 0, 0, 0, 0, 142, 248, 127, 63, 117, 77, 227, 59, 213, 60, 91, 188, 147, 197, 225, 187, 9, 254, 127, 63, 193, 156, 103, 59, 249, 161, 91, 60, 55, 139, 97, 187, 185, 249, 127, 63, 233, 255, 127, 63, 253, 140, 47, 186, 201, 29, 196, 58, 242, 234, 48, 58, 150, 255, 127, 63, 50, 141, 100, 187, 26, 207, 195, 186, 15, 158, 100, 59, 135, 255, 127, 63, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 32, 3, 0, 5, 0, 187, 65, 72, 68, 0, 0, 0, 0, 60, 105, 24, 68, 0, 0, 0, 0, 176, 190, 72, 68, 245, 21, 196, 67, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 63, 64, 210, 183, 65, 99, 224, 12, 194, 245, 81, 95, 186, 88, 174, 27, 186, 131, 171, 84, 65, 233, 160, 178, 65, 110, 165, 5, 194, 224, 169, 63, 65, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 82, 184, 143, 66, 0, 1, 218, 246, 127, 63, 180, 149, 249, 59, 10, 172, 115, 188, 234, 45, 246, 187, 136, 252, 127, 63, 150, 101, 230, 59, 92, 137, 116, 60, 14, 180, 226, 187, 34, 247, 127, 63, 147, 150, 254, 63, 52, 149, 174, 186, 248, 8, 67, 59, 0, 0, 0, 192, 0, 0, 0, 0, 0, 0, 0, 0, 2, 224, 1, 128, 2, 0, 5, 160, 236, 67, 0, 0, 0, 0, 45, 130, 159, 67, 0, 0, 0, 0, 69, 235, 236, 67, 249, 213, 100, 67, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 63, 72, 78, 245, 191, 63, 146, 233, 193, 240, 89, 98, 187, 88, 15, 29, 186, 246, 66, 175, 66, 194, 74, 248, 191, 50, 37, 232, 193, 91, 148, 174, 66, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 82, 184, 143, 66, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 224, 11, 216, 15, 0, 99, 163, 66, 69, 0, 0, 0, 0, 53, 54, 1, 69, 0, 0, 0, 0, 130, 1, 67, 69, 107, 114, 190, 68, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 63, 60, 2, 244, 191, 228, 4, 0, 64, 198, 17, 52, 187, 57, 160, 92, 186, 247, 67, 3, 191, 49, 74, 0, 192, 139, 185, 17, 64, 6, 238, 60, 191, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 82, 184, 143, 66, 0, 2, 165, 251, 127, 63, 128, 248, 16, 188, 121, 29, 242, 59, 30, 85, 16, 60, 138, 252, 127, 63, 17, 71, 173, 59, 180, 162, 243, 187, 26, 34, 171, 187, 75, 253, 127, 63, 217, 176, 243, 63, 24, 196, 43, 62, 189, 84, 63, 62, 86, 125, 222, 191, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 70, 70, 67, 45, 52, 80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 73, 82, 45, 67, 56, 48, 77, 48, 48, 45, 48, 48, 0, 0, 0, 0, 70, 49, 45, 70, 86, 48, 48, 45, 66, 67, 48, 48, 48, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 217, 80, 200, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 249, 127, 63, 162, 136, 101, 60, 181, 9, 121, 59, 250, 176, 101, 188, 90, 249, 127, 63, 89, 239, 36, 59, 180, 179, 118, 187, 173, 104, 40, 187, 82, 255, 127, 63, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 3, 255]
      
      Process finished with exit code 0

        BonkoKaradjov
        Thanks, can you also send a left camera frame and ToF depth frame so we can test your calibration on our side?
        edit: images should be taken at the same time, of course
        edit2: I see I'm mixing the threads a bit, sorry for that; let's first fix the alignment and then move on to the depth streaming

        Thanks,
        Jaka

        Dear @jakaskerl,

        thank you very much for your help.
        To get better quality maps, I could record the images in TouchDesigner by sending NDI, for example, from a third additional stream (cam_a, cam_b, and cam_c).
        Or record them directly to the hard disk; this would probably give you the most exact output.
        But I am not sure how to implement the third camera synced to the other two to produce the output.

        Is there any example script out there that I can use with the ToF sensor to record or send out the
        three unaligned video streams through Syphon or NDI, or to record them to disk?
        I am slow at coding such things, and it would be really helpful to have a code example instead of implementing this into the two scripts above and probably sending you the wrong data.

        I would also change the space where I record the video so that you get more angles and depth data.

        Thank you very much for your help.
        With kind regards
        Bonko

          BonkoKaradjov
          Should be good:

          import os
          import time
          import cv2
          import depthai as dai
          import numpy as np
          from datetime import timedelta
          
          def create_pipeline():
              pipeline = dai.Pipeline()
          
              # Create ToF node
              tof = pipeline.create(dai.node.ToF)
              tof.setNumShaves(4)
          
              # Configure ToF
              tofConfig = tof.initialConfig.get()
              tofConfig.enableFPPNCorrection = True
              tofConfig.enableOpticalCorrection = True
              tofConfig.enableWiggleCorrection = True
              tofConfig.enableTemperatureCorrection = True
              tofConfig.phaseUnwrappingLevel = 4
              tof.initialConfig.set(tofConfig)
          
              # Create ToF camera node
              cam_tof = pipeline.create(dai.node.Camera)
              cam_tof.setFps(20)
              cam_tof.setImageOrientation(dai.CameraImageOrientation.ROTATE_180_DEG)
              cam_tof.setBoardSocket(dai.CameraBoardSocket.CAM_A)
              cam_tof.raw.link(tof.input)
          
              # Create RGB camera nodes
              colorLeft = pipeline.create(dai.node.ColorCamera)
              colorRight = pipeline.create(dai.node.ColorCamera)
          
              # Configure RGB cameras
              colorLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
              colorRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
              colorLeft.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
              colorRight.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
              colorLeft.setFps(20)
              colorRight.setFps(20)
              colorLeft.setInterleaved(False)
              colorRight.setInterleaved(False)
          
              # Create Sync node
              sync = pipeline.create(dai.node.Sync)
              sync.setSyncThreshold(timedelta(milliseconds=50))
          
              # Link outputs to Sync
              tof.depth.link(sync.inputs["tof"])
              colorLeft.isp.link(sync.inputs["left"])
              colorRight.isp.link(sync.inputs["right"])
          
              # Create output
              xout = pipeline.create(dai.node.XLinkOut)
              xout.setStreamName("sync_out")
              sync.out.link(xout.input)
          
              return pipeline
          
          def colorize_depth_for_display(depth_frame):
              depth_normalized = cv2.normalize(depth_frame, None, 0, 255, cv2.NORM_MINMAX)
              depth_colored = cv2.applyColorMap(np.uint8(depth_normalized), cv2.COLORMAP_JET)
              return depth_colored
          
          if __name__ == '__main__':
              pipeline = create_pipeline()
          
              with dai.Device(pipeline) as device:
                  q_sync = device.getOutputQueue(name="sync_out", maxSize=4, blocking=False)
                  frame_counter = 0
                  save_dir = None
          
                  while True:
                      msgGrp = q_sync.get()
                      
                      frames = {}
                      for name, msg in msgGrp:
                          frames[name] = msg.getCvFrame()
          
                      if len(frames) == 3:  # We expect 3 frames (ToF, left RGB, right RGB)
                          # Get raw frames
                          tof_depth = frames['tof']
                          left_rgb = frames['left']
                          right_rgb = frames['right']
          
                          # Display frames (colorize depth only for display)
                          cv2.imshow("ToF Depth", colorize_depth_for_display(tof_depth))
                          cv2.imshow("Left RGB", left_rgb)
                          cv2.imshow("Right RGB", right_rgb)
          
                          key = cv2.waitKey(1)
                          if key == ord('q'):
                              break
                          elif key == ord('c'):
                              # Create new directory for saving if it doesn't exist
                              if save_dir is None:
                                  timestamp = time.strftime("%Y%m%d_%H%M%S")
                                  save_dir = os.path.join("data", timestamp)
                                  os.makedirs(save_dir, exist_ok=True)
                                  print(f"Created directory: {save_dir}")
          
                              # Save raw frames
                              tof_filename = os.path.join(save_dir, f"tof_{frame_counter:06d}.npy")
                              left_filename = os.path.join(save_dir, f"left_{frame_counter:06d}.png")
                              right_filename = os.path.join(save_dir, f"right_{frame_counter:06d}.png")
          
                              # Save ToF as raw numpy array, RGB images as PNG
                              np.save(tof_filename, tof_depth)
                              cv2.imwrite(left_filename, left_rgb)
                              cv2.imwrite(right_filename, right_rgb)
                              
                              print(f"Saved frame {frame_counter}")
                              frame_counter += 1
          
              cv2.destroyAllWindows()
              print("Program finished")

          Thanks,
          Jaka

          Dear @jakaskerl ,

          Thank you very much for your help.
          Here are the files I got in a bigger room:
          Link to Photofiles

          As mentioned, I am trying to achieve a send-out of color, depth and point cloud with as little latency and as good RGB quality as possible, while keeping the OSC RGB and ToF camera control I implemented in my last script.
          Link to last script

          In parallel, unfortunately, I had to pay the price because my production is already running => my approach with the Orbbec Femto Mega.

          To clarify what I am trying to do: if the quality and latency are OK and the alignment is as perfect as the Femto Mega's, I would slice the point cloud as in the video example and use the combination of depth map and sliced point cloud as an alpha channel to cut out the parts that should become visible.

          Link to my Femto Mega approach

          Thank you again very much for your help.
          As an information:

          Luxonis could become a serious competitor to the Femto Mega, or even the Femto Mega 1, with the 12MP ToF sensor, or even with the OAK 4 Pro (with ToF),
          because the Femto's color cameras do not offer such easy live control or such good image quality.

          They are only more practical at the moment because of their direct connection through the IP address in TouchDesigner and their pretty exact point cloud.

          For visual arts, a good OAK 4 Pro with a well-aligned ToF option could beat Orbbec in speed (2.5 Gbps) and quality, especially if you add another ToF sensor.

          That's why I am not sure whether, if I don't achieve the quality with my custom build, it would be better to exchange my sensor for another OAK camera.

          Anyway, I am very thankful for all your help. Please be honest: do you think I could match the Femto Mega with my custom OAK ToF, or should I redirect my money to something new like the OAK 4 Pro (Wide) or the interesting OAK Thermal sensor?

            Hi BonkoKaradjov
            The team has had a chance to look at your scripts.
            The issue was that your calibration was done using the rotated ToF frame, not the RGB frame, so the alignment also has to be done on the rotated ToF frame rather than the rotated RGB frame; otherwise the rotation matrix is incorrect.

            Code for properly aligning the streams:

            import os 
            import pathlib
            import numpy as np
            import cv2
            import depthai as dai
            from numba import jit, prange
            
            @jit(nopython=True, parallel=True)
            def reprojection(depth_image, depth_camera_intrinsics, camera_extrinsics, color_camera_intrinsics, depth_image_show = None):
                height = len(depth_image)
                width = len(depth_image[0])
                if depth_image_show is not None:
                    image = np.zeros((height, width), np.uint8)
                else:
                    image = np.zeros((height, width), np.uint16)
                if(camera_extrinsics[0][3] > 0):
                    sign = 1
                else:
                    sign = -1
                for i in prange(0, height):
                    for j in prange(0, width):
                        if sign == 1:
                            # Reverse the order of the pixels
                            j = width - j - 1
                        d = depth_image[i][j]
                        if(d == 0):
                            continue
                        # Convert pixel to 3d point
                        x = (j - depth_camera_intrinsics[0][2]) * d / depth_camera_intrinsics[0][0]
                        y = (i - depth_camera_intrinsics[1][2]) * d / depth_camera_intrinsics[1][1]
                        z = d
            
                        # Move the point to the camera frame
                        x1 = camera_extrinsics[0][0] * x + camera_extrinsics[0][1] * y + camera_extrinsics[0][2] * z + camera_extrinsics[0][3]
                        y1 = camera_extrinsics[1][0] * x + camera_extrinsics[1][1] * y + camera_extrinsics[1][2] * z + camera_extrinsics[1][3]
                        z1 = camera_extrinsics[2][0] * x + camera_extrinsics[2][1] * y + camera_extrinsics[2][2] * z + camera_extrinsics[2][3]
            
                        u = color_camera_intrinsics[0][0] * (x1  / z1) + color_camera_intrinsics[0][2]
                        v = color_camera_intrinsics[1][1] * (y1  / z1) + color_camera_intrinsics[1][2]
                        int_u = round(u)
                        int_v = round(v)
                        if int_u >= 0 and int_u < (len(image[0]) - 1) and int_v >= 0 and int_v < len(image):
                            if depth_image_show is not None:
                                image[int_v][int_u] = depth_image_show[i][j][0]
                                image[int_v][int_u + sign] = depth_image_show[i][j][0]
                            else:
                                image[int_v][int_u] = z1
                                image[int_v][int_u + sign] = z1
                return image
            
            def colorizeDepth(frameDepth):
                invalidMask = frameDepth == 0
                # Log the depth, minDepth and maxDepth
                try:
                    minDepth = np.percentile(frameDepth[frameDepth != 0], 3)
                    maxDepth = np.percentile(frameDepth[frameDepth != 0], 95)
                    logDepth = np.log(frameDepth, where=frameDepth != 0)
                    logMinDepth = np.log(minDepth)
                    logMaxDepth = np.log(maxDepth)
                    np.nan_to_num(logDepth, copy=False, nan=logMinDepth)
                    # Clip the values to be in the 0-255 range
                    logDepth = np.clip(logDepth, logMinDepth, logMaxDepth)
            
                    # Interpolate only valid logDepth values, setting the rest based on the mask
                    depthFrameColor = np.interp(logDepth, (logMinDepth, logMaxDepth), (0, 255))
                    depthFrameColor = np.nan_to_num(depthFrameColor)
                    depthFrameColor = depthFrameColor.astype(np.uint8)
                    depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_JET)
                    # Set invalid depth pixels to black
                    depthFrameColor[invalidMask] = 0
                except IndexError:
                    # Frame is likely empty
                    depthFrameColor = np.zeros((frameDepth.shape[0], frameDepth.shape[1], 3), dtype=np.uint8)
                except Exception as e:
                    raise e
                return depthFrameColor
            
            def get_Calibration_Data(calibData: dai.CalibrationHandler, TOF_SOCKET, ALIGN_SOCKET, depthSize = (640, 480), rgbSize = (1920, 1080)):
                M1 = np.array(calibData.getCameraIntrinsics(TOF_SOCKET, *depthSize))
                D1 = np.array(calibData.getDistortionCoefficients(TOF_SOCKET))
                M2 = np.array(calibData.getCameraIntrinsics(ALIGN_SOCKET, *rgbSize))
                D2 = np.array(calibData.getDistortionCoefficients(ALIGN_SOCKET))
                T = (
                    np.array(calibData.getCameraTranslationVector(TOF_SOCKET, ALIGN_SOCKET, False))
                    * 10
                )  # to mm for matching the depth
                R = np.array(calibData.getCameraExtrinsics(TOF_SOCKET, ALIGN_SOCKET, False))[
                    0:3, 0:3
                ]
                TARGET_MATRIX = M1
                return M1, D1, M2, D2, TARGET_MATRIX, R, T
            
            def getAlignedDepth(frameDepth, TARGET_MATRIX, M2, R, T, rgbSize = (1920, 1080)):
                R_180 = np.array([[-1., 0., 0.],
                                  [0., -1., 0.],
                                  [0., 0., 1.]])
                combinedExtrinsics = np.eye(4)
                combinedExtrinsics[0:3, 0:3] =  R
                combinedExtrinsics[0:3, 3] =  T
                depthAligned = reprojection(frameDepth, TARGET_MATRIX, combinedExtrinsics, TARGET_MATRIX)
                mapX, mapY = cv2.initUndistortRectifyMap(TARGET_MATRIX, None, np.eye(3), M2, rgbSize, cv2.CV_32FC1)
                outputAligned = cv2.remap(depthAligned, mapX, mapY, cv2.INTER_NEAREST)
                return outputAligned
            import argparse
            parser = argparse.ArgumentParser()
            parser.add_argument("-path", help="Path to the dataset where all files are.")
            args = parser.parse_args()
            
            ### Set the sockets you want to align to ###
            ALIGN_SOCKET = dai.CameraBoardSocket.CAM_C
            TOF_SOCKET = dai.CameraBoardSocket.CAM_A
            
            path = args.path
            calibData = dai.CalibrationHandler(path + "/calib.json")
            for files in os.listdir(path):
                ### LOADING ALL images which are used ###
                if files.split(".")[-1] == "json":
                    continue
                index = files.split(".")[0].split("_")[1]
                depth = np.load(path + f"/tof_{index}.npy")
                left = cv2.imread(path + f"/left_{index}.png")
                right = cv2.imread(path + f"/right_{index}.png")
            
            
                ### LOGIC ####
                # The logic is the following:

                # Since the device was calibrated with the rotated depth, the depth frame MUST be rotated before it is aligned
                depth = cv2.rotate(depth, cv2.ROTATE_180)
            
                # From data you have, get all needed matrices
                M1, D1, M2, D2, TARGET_MATRIX, R, T = get_Calibration_Data(calibData, TOF_SOCKET, ALIGN_SOCKET, (depth.shape[1], depth.shape[0]), (right.shape[1], right.shape[0]))
            
                #Align rotated depth to the rotated RGB image
                alignedDepth = getAlignedDepth(depth, TARGET_MATRIX, M2, R, T, (right.shape[1], right.shape[0]))
            
                #Rotate back both depth and RGB
                right = cv2.rotate(right, cv2.ROTATE_180)
                alignedDepth = cv2.rotate(alignedDepth, cv2.ROTATE_180)
            
            
                blended = cv2.addWeighted(right, 0.5, colorizeDepth(alignedDepth), 0.5, 0)
                cv2.imshow("Blended", blended)
                cv2.waitKey(0)
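
            For reference, the script is run against a captured dataset folder, roughly like this (the script file name here is just a placeholder):

                python align_tof_to_rgb.py -path /path/to/dataset

            The folder has to contain the calib.json dump plus matching tof_<index>.npy, left_<index>.png and right_<index>.png files, which is what the loop above iterates over.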

            Thanks,
            Jaka

            21 days later

            Dear Jaka, thank you very much for your help.

            To be honest, I lost the job the camera was planned for, and all the money, and Luxonis does not want to take back this custom-built brick and even ignores my last email... but OK.
            At least I paid an extra 300 € for the calibrated sensor I asked for and received a ......

            But I don't want to feel so bad about possessing an 800 € brick that works as the most expensive 12MP camera ever sold!!

            I am afraid I will never calibrate that sensor correctly. Your script, if I understood it correctly, needs the calibration pictures of a ChArUco board first, which I tried.....

            After following the calibration instructions here:
            LINK Luxonis Website

            I first completely broke my PyCharm virtual environment. Now it either says "no sensor detected" or does not recognize certain parts such as FPPN_ENABLE etc.; none of my own scripts work and not even the examples from Luxonis... so the sensor has now been promoted to an even more expensive paperweight.

            My calibration problems:

            • I constantly get different errors.
              The newest one is (I tried a clean environment in Python 3.9)
              ==> self.aruco_dictionary = cv2.aruco.Dictionary_get(
              AttributeError: module 'cv2.aruco' has no attribute 'Dictionary_get'

            A forum post says that the aruco API changed between certain OpenCV versions.
            So do I have to rewrite the Luxonis calibrate.py ???
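
            For what it's worth, it looks like a small compatibility fix would be enough instead of rewriting all of calibrate.py: in OpenCV >= 4.7 the old cv2.aruco.Dictionary_get factory was removed, and cv2.aruco.getPredefinedDictionary is the replacement (it also exists in older 4.x releases). A minimal sketch (the dictionary constant is only an example, not necessarily the one calibrate.py uses):

                import cv2

                # OpenCV >= 4.7 removed cv2.aruco.Dictionary_get; getPredefinedDictionary
                # is the replacement and is also available in older 4.x releases.
                aruco_dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_1000)

            Alternatively, pinning opencv-contrib-python to a pre-4.7 release in the calibration environment should let the unmodified script run.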

            I am afraid that even if I solve this issue, the calibration still will not start and new errors will appear.

            Is it right of Luxonis that, after receiving a wrongly calibrated sensor I paid for, I have to invest a huge amount of time finding out how to calibrate it myself, and live with the problem that all future scripts for it will not work and will cost me even more time, on top of the extra 300 € I paid?

            I was at least hoping that they would take back the order and send me a standard, well-calibrated version (which has the same components... I cannot even see why they think this sensor would be unsellable - it has the same components 😃) and refund my 300 € for the custom build.

            To be honest: buying the 12MP camera somewhere else and having a friend solder it onto the board would have been much cheaper... and that was not the reason why I sent the sensor back to Luxonis for the "custom build".

            I was very thankful for the great support in the beginning, but now I am seriously disappointed, knowing that support was not aware that my sensor was wrongly calibrated and that so much time was wasted, my time and the support team's time.
            Which I am very sorry for.

            I wish I could have used the support and this forum for more productive questions than "please help me use my broken device and repair it with me......"

            I will of course have to write in my test review of ToF sensors for artists never to send Luxonis a sensor for a custom build when you are working on a job, or even a future job, even if it has the exact same components as an industrial build from them.

            But thank you very, very much for your help.

            5 days later

            Any suggestions @jakaskerl @erik on what is wrong, so that I can at least try to calibrate that sensor myself somehow?

            I was getting errors before downgrading to NumPy 1.x, for whatever reason, even though there is a requirements installer.

            I am only trying to capture the pictures needed by the script from @jakaskerl above to get the correct calibration.

              If I write it as in the first example, I get this beautiful error:

              jakaskerl Thank you very much for your help.

              Maybe I misunderstood the workflow.
              I get errors because your script is looking for a calib.json and the photo files.

              That led me to the assumption that I first have to take pictures of a ChArUco board from different angles so that this script can do the alignment?
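
              From the script above it looks like the inputs are just a calib.json dump of the device calibration plus the raw tof_<i>.npy / left_<i>.png / right_<i>.png frames, so maybe no ChArUco pictures are needed for the alignment itself. Here is a rough capture sketch I could try, assuming the standard DepthAI v2 ToF / ColorCamera nodes; the socket assignments (CAM_A for ToF, CAM_C for the 12MP colour sensor) are guesses for this custom board and may need adjusting:

                  import os
                  import cv2
                  import numpy as np
                  import depthai as dai

                  OUT_DIR = "dataset"
                  os.makedirs(OUT_DIR, exist_ok=True)

                  pipeline = dai.Pipeline()

                  # ToF on CAM_A: the raw sensor output feeds the ToF node, which outputs depth in mm
                  cam_tof = pipeline.create(dai.node.Camera)
                  cam_tof.setBoardSocket(dai.CameraBoardSocket.CAM_A)
                  tof = pipeline.create(dai.node.ToF)
                  cam_tof.raw.link(tof.input)
                  xout_tof = pipeline.create(dai.node.XLinkOut)
                  xout_tof.setStreamName("depth")
                  tof.depth.link(xout_tof.input)

                  # Colour camera on CAM_C (the ALIGN_SOCKET in the script above) -> right_<i>.png
                  cam_rgb = pipeline.create(dai.node.ColorCamera)
                  cam_rgb.setBoardSocket(dai.CameraBoardSocket.CAM_C)
                  xout_rgb = pipeline.create(dai.node.XLinkOut)
                  xout_rgb.setStreamName("rgb")
                  cam_rgb.isp.link(xout_rgb.input)

                  with dai.Device(pipeline) as device:
                      # Dump the calibration stored on the device into the calib.json the script reads
                      device.readCalibration().eepromToJsonFile(os.path.join(OUT_DIR, "calib.json"))

                      q_depth = device.getOutputQueue("depth", maxSize=4, blocking=False)
                      q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)

                      for i in range(10):  # grab a handful of frame pairs
                          depth = q_depth.get().getFrame()   # uint16 depth in millimetres
                          rgb = q_rgb.get().getCvFrame()
                          np.save(os.path.join(OUT_DIR, f"tof_{i}.npy"), depth)
                          cv2.imwrite(os.path.join(OUT_DIR, f"right_{i}.png"), rgb)

              The left_<i>.png files could be grabbed the same way from another socket if needed; as far as I can see the alignment loop above only loads them and does not use them further.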

                So for example: Could I use this script of mine and modify it?

                https://www.dropbox.com/scl/fi/gdlxmh68vattwt6uvusv9/test_reversedAlign_old_NDI.py?rlkey=3ehmn3oz5yl1dd3d8bvbb260o&dl=0

                @jakaskerl Have a look at my final approach with the Femto Mega and TouchDesigner:

                If I could get a perfectly aligned color stream + depth stream + point cloud stream out through NDI or Spout, in combination with my script that controls everything through OSC, this could beat the Femto Mega immediately, because the Femto Mega's image quality is far worse than the image quality of your 12MP ToF sensor.

                Here is what I get in TouchDesigner from the Orbbec TOP over the network:
                https://gyazo.com/96490f9625ddef7833c3e4dbb4352896

                Dear @jakaskerl, dear @erik, thank you very much for all your help.

                I want to apologize for my emotional words above. Somehow I was totally confused, completely misunderstood the situation, and thought that the sensor had not been calibrated; I was upset because, due to my lack of knowledge (so my fault), I could not prepare the sensor for the one arts job I had planned.

                Now the custom OAK ToF is on the shelf, I have time to research its possibilities, and I hope to reach the goal one day.

                I just wanted to say thank you for all your help and patience.
                I really appreciate what Luxonis support does and hope that more people get to know your company and buy your sensors.

                With kind regards
                Bonko