Luxonis-Alex So I installed depthai-python on develop in a fresh virtual environment. I still get the error

And garbage in the ToF output. Did I do something wrong? I just cloned depthai-python and followed the README to build it.

    apirrone Sorry, I posted too fast; I forgot to check out develop 🙂 I'm reinstalling it and testing again.

    OK, so now I'm really on develop; the hash matches the one you sent: 2.28.0.0.dev0+76294530fd0bf91b09469490a1d14871c374630a

    And I still have the same error

    I reinstalled everything from scratch and I don't get the error anymore; I must have made a mistake the first time.

    Now I have a different problem: the script freezes after a few frames if I try to get an RGB cam and the ToF at the same time. If I comment out the RGB queue get part, the ToF works fine. But I have to comment out everything related to the ToF (starting at the ToF node creation) for the RGB to work.

    Here is an MRE:

    #!/usr/bin/env python3
    
    import time
    import cv2
    import depthai as dai
    import numpy as np
    
    print(dai.__version__)
    
    cvColorMap = cv2.applyColorMap(np.arange(256, dtype=np.uint8), cv2.COLORMAP_JET)
    cvColorMap[0] = [0, 0, 0]
    
    
    def create_pipeline():
        pipeline = dai.Pipeline()
    
        tof = pipeline.create(dai.node.ToF)
    
        # Configure the ToF node
        tofConfig = tof.initialConfig.get()
    
        # Optional. Best accuracy, but adds motion blur.
        # see ToF node docs on how to reduce/eliminate motion blur.
        tofConfig.enableOpticalCorrection = False
        tofConfig.enablePhaseShuffleTemporalFilter = True
        tofConfig.phaseUnwrappingLevel = 4
        tofConfig.phaseUnwrapErrorThreshold = 300
        tofConfig.enableTemperatureCorrection = False  # Not yet supported
    
        xinTofConfig = pipeline.create(dai.node.XLinkIn)
        xinTofConfig.setStreamName("tofConfig")
        xinTofConfig.out.link(tof.inputConfig)
    
        tof.initialConfig.set(tofConfig)
    
        cam_tof = pipeline.create(dai.node.Camera)
        cam_tof.setFps(30)  # ToF node will produce depth frames at /2 of this rate
        cam_tof.setBoardSocket(dai.CameraBoardSocket.CAM_D)
        cam_tof.raw.link(tof.input)
    
        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("depth")
        tof.depth.link(xout.input)
    
        tofConfig = tof.initialConfig.get()
    
        left = pipeline.create(dai.node.ColorCamera)
        left.setBoardSocket(dai.CameraBoardSocket.CAM_C)
        left.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1440X1080)
        left.setFps(30)
        left.setIspScale(2, 3)
    
        left_xout = pipeline.create(dai.node.XLinkOut)
        left_xout.setStreamName("left")
        left.video.link(left_xout.input)
    
        return pipeline, tofConfig
    
    
    if __name__ == "__main__":
        pipeline, tofConfig = create_pipeline()
    
        with dai.Device(pipeline) as device:
            print("Connected cameras:", device.getConnectedCameraFeatures())
    
            # qDepth = device.getOutputQueue(name="depth")
            qDepth = device.getOutputQueue(name="depth", maxSize=8, blocking=False)
    
            # left_q = device.getOutputQueue(name="left")
            left_q = device.getOutputQueue(name="left", maxSize=8, blocking=False)
    
            while True:
                start = time.time()
    
                imgFrame = qDepth.get()  # blocking call, waits until new data has arrived
                depth_map = imgFrame.getFrame()
    
                max_depth = (tofConfig.phaseUnwrappingLevel + 1) * 1500  # 100MHz modulation freq.
                depth_colorized = np.interp(depth_map, (0, max_depth), (0, 255)).astype(np.uint8)
                depth_colorized = cv2.applyColorMap(depth_colorized, cvColorMap)
    
                # If I comment out the next 3 lines, the ToF works fine
                in_left = left_q.get()
                left_im = in_left.getCvFrame()
                cv2.imshow("left", left_im)
                # Until here
    
                cv2.imshow("Colorized depth", depth_colorized)
                key = cv2.waitKey(1)
                if key == ord("q"):
                    break

    @apirrone That's great that the I/O error is not showing up anymore; it must have been something with the build (maybe submodules not updated). Just mentioning, a way to auto-update submodules and install our CI prebuilts (if available) is to run, from depthai-python:
    python3 examples/install_requirements.py
    Or to install a specific version:
    python3 -m pip install --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-snapshot-local depthai==2.28.0.0.dev0+76294530fd0bf91b09469490a1d14871c374630a

    Now for the freeze issue you're observing: I noticed the same myself when the extra 5V power was not applied. The IMX296 started streaming and froze after a few frames, when the ToF started. It was fixed with the DC jack, but I also have only a single IMX296 RPi GS Cam module locally to test with (so I'm not sure if it could still be a problem with 2x IMX296 + ToF):

    With all 3 cameras attached, can you check whether enabling just a subset of them works fine, for example with cam_test:
    python3 utilities/cam_test.py -cams camb,c camc,c
    python3 utilities/cam_test.py -cams camb,c camd,t
    python3 utilities/cam_test.py -cams camc,c camd,t
    python3 utilities/cam_test.py -cams camb,c camc,c camd,t

    Hi @Luxonis-Alex

    Plugging in the 5V external power supply did not change anything. I also tried unplugging one of the IMX296s; same behavior, the whole pipeline freezes after a few frames (around 5).

    No problem getting the two IMX296s alone, or the ToF alone.

    @apirrone We can try to replicate the problem. Can you check the revisions of all boards (the FFC-4P DD2090 RxMx.., RPi camera connector adapter, ToF module...)?
    A picture of your setup would also be helpful. Thanks!

    can you check the revisions of all boards (the FFC-4P DD2090 RxMx.., RPi camera connector adapter, ToF module...)?

    Can you guide me on how to get all this information? Using device_manager.py I get this:

    Meanwhile, here are some photos of my setup


    Also, a not exactly related question: in order to use the align node, I know the ToF needs to be aligned with the RGB camera horizontally, but what about in the z direction (away from the camera)? Is it critical that their focal planes are aligned, for example? If so, how do I know how to align them in this direction?
    Thanks!

    @apirrone We'll follow up on this question about alignment.

    For the issue with the IMX296 RPi GS Cam not working together with the ToF, would you be able to try a rework on the SL6996 R1M1E1 cable adapter boards? It would be replacing this 10K resistor (the populated one) with a 1K resistor, or, if easier, just soldering a 1K resistor in parallel with the existing 10K one (on top of it); the final value isn't that important:

    Design files of that adapter: luxonis/depthai-hardware/tree/master/SL6996_OAK-FFC_15pin-RPi

      It was hard, but I managed to solder a 1k resistor this way 🙂

      And... it works! I no longer have the freeze! I only tested with one camera for now; I'll try soldering another resistor onto the other board to see how it behaves with the two cameras.

      Thanks @Luxonis-Alex! Can you explain what was going on?

        jakaskerl

        So if I understand correctly, I should set the camera extrinsics as follows, for example?

        right->left
        left->tof
        tof->-1

        I get an error if I set -1 or "-1" for toCameraSocket:

        Traceback (most recent call last):
          File "/home/antoine/Pollen/pollen-vision/pollen_vision/pollen_vision/camera_wrappers/depthai/calibration/flash.py", line 31, in <module>
            w.flash(args.calib_json_file, tof=args.tof)
          File "/home/antoine/Pollen/pollen-vision/pollen_vision/pollen_vision/camera_wrappers/depthai/wrapper.py", line 312, in flash
            ch.setCameraExtrinsics(
        TypeError: setCameraExtrinsics(): incompatible function arguments. The following argument types are supported:
            1. (self: depthai.CalibrationHandler, srcCameraId: depthai.CameraBoardSocket, destCameraId: depthai.CameraBoardSocket, rotationMatrix: list[list[float]], translation: list[float], specTranslation: list[float] = [0.0, 0.0, 0.0]) -> None
        
        Invoked with: <depthai.CalibrationHandler object at 0x7e4bb400a5f0>, <CameraBoardSocket.CAM_D: 3>, '-1', [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [-3.2509, 0.0, 0.0]; kwargs: specTranslation=[-3.2509, 0.0, 0.0]

        So, should it look like this?

        ch.setCameraExtrinsics(
            left_socket,
            tof_socket,
            R_left_to_tof,
            T_left_to_tof,
            specTranslation=T_left_to_tof,
        )
        
        ch.setCameraExtrinsics(
            right_socket,
            left_socket,
            R_right_to_left,
            T_right_to_left,
            specTranslation=T_right_to_left,
        )
        
        # Should I do this?
        ch.setCameraExtrinsics(
            tof_socket,
            tof_socket,
            np.eye(3).tolist(),
            [0, 0, 0],
            specTranslation=[0, 0, 0],
        )

          jakaskerl Make sure the .json is properly flashed. I found that, using your config, the camera on socket 3 would not get flashed if I didn't specify the width and height.
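
          Something like this rough sketch (CAM_D and the 640x480 size are just placeholders for your actual ToF socket and resolution):

          import depthai as dai
          import numpy as np

          with dai.Device() as device:
              ch = device.readCalibration()  # start from the calibration already on the device
              K_tof = np.eye(3).tolist()     # placeholder intrinsics, use your real values
              # The width/height have to be given here for the socket to get flashed
              ch.setCameraIntrinsics(dai.CameraBoardSocket.CAM_D, K_tof, (640, 480))
              device.flashCalibration2(ch)   # raises if flashing fails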

          Indeed, I had to set the camera intrinsics with width and height; now the align node works!

          tof-2024-08-27-144757.mp4 (6MB)

          As you can see, it's not very well aligned for now; that's because I set the ToF's K matrix to this:

                  if tof:
                      K = np.eye(3)
                      K[0][0] = 640  # fx
                      K[1][1] = 640  # fy
                      K[0][2] = 320  # cx
                      K[1][2] = 240  # cy
                      tof_socket = get_socket_from_name("tof", self.cam_config.name_to_socket)
                      ch.setCameraIntrinsics(tof_socket, K.tolist(), (640, 480))

          How can I get the real intrinsic matrix of the ToF? I guess it's flashed in its own eeprom?

            Also, we are using very wide FOV lenses, as you can see below

            Is there an easy way to get the depth in full resolution and the relevant aligned RGB section cropped? So that I have two 640x480 frames (depth and RGB) that are aligned?

              apirrone

              apirrone I get an error if I set -1 or "-1" for toCameraSocket

              You have to set it to CameraBoardSocket.AUTO instead of CameraBoardSocket.CAM_X. The enum value is -1.
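
              For example, a rough sketch of what the terminal link could look like (CAM_D is assumed to be the ToF socket; the identity rotation and zero translation are just placeholders):

              import depthai as dai
              import numpy as np

              ch = dai.CalibrationHandler()
              # tof -> AUTO terminates the extrinsics chain; AUTO has enum value -1
              ch.setCameraExtrinsics(
                  dai.CameraBoardSocket.CAM_D,   # srcCameraId (ToF)
                  dai.CameraBoardSocket.AUTO,    # destCameraId: end of the chain
                  np.eye(3).tolist(),            # rotationMatrix (placeholder)
                  [0.0, 0.0, 0.0],               # translation (placeholder)
                  specTranslation=[0.0, 0.0, 0.0],
              )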

              apirrone How can I get the real intrinsic matrix of the ToF? I guess it's flashed in its own eeprom?

              Yes, the intrinsics are flashed in the module eeprom.
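
              For example, a minimal sketch of reading them back from a connected device (assuming the ToF is on CAM_D and a 640x480 output):

              import depthai as dai

              with dai.Device() as device:
                  calib = device.readCalibration()
                  # Intrinsics are returned scaled to the requested resolution
                  K_tof = calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_D, 640, 480)
                  dist_tof = calib.getDistortionCoefficients(dai.CameraBoardSocket.CAM_D)
                  print("ToF intrinsics:", K_tof)
                  print("ToF distortion coefficients:", dist_tof)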

              apirrone Is there an easy way to get the depth in full resolution and the relevant aligned RGB section cropped? So that I have two 640x480 frames (depth and RGB) that are aligned?

              Undistort the RGB image first. More on this here.
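
              Roughly along these lines (a sketch only; it assumes the RGB camera is on CAM_C with a 960x720 ISP output, and frame stands in for an image grabbed from your RGB queue):

              import cv2
              import depthai as dai
              import numpy as np

              with dai.Device() as device:
                  calib = device.readCalibration()
                  w, h = 960, 720  # 1440x1080 with the 2/3 ISP scale from your MRE
                  K = np.array(calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_C, w, h))
                  dist = np.array(calib.getDistortionCoefficients(dai.CameraBoardSocket.CAM_C))

                  frame = np.zeros((h, w, 3), dtype=np.uint8)  # stand-in for a frame from the RGB queue
                  new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)
                  undistorted = cv2.undistort(frame, K, dist, None, new_K)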

              Thanks,
              Jaka

                jakaskerl Yes, the intrinsics are flashed in the module eeprom.

                So there's no way to get them? Do I need them to align the ToF to the RGB?

                I needed to set the camera intrinsics in the calibration handler for it to flash properly, but you said I had to set the size of the image. Is there a way to set the image size without setting the intrinsics?

                How do you flash the calibration for the SR-PoE-ToF?

                jakaskerl You have to set it to CameraBoardSocket.AUTO instead of CameraBoardSocket.CAM_X. The enum value is -1.

                In the end I set tof->left, left->tof and left->right, which satisfies this:

                jakaskerl No two sockets can have the same toCameraSocket value

                And it seems to be enough.

                jakaskerl Undistort the RGB image first. More on this here.

                If I undistort the RGB image first, this is what I get:

                No undistort:

                Undistort: