• DepthAI-v2
  • dai.node.Sync: Aligning and syncing 800p depth with 12mp RGB

Hi,

I'm currently working with three OAK-D Pro PoE cameras and I was wondering whether it is possible to:

  1. Capture high(er) resolution RGB frames (e.g. 4k or 12mp) and
  2. Sync those RGB frames with lower resolution stereo depth (e.g. 720p, 800p) using dai.node.Sync and
  3. Align the stereo depth frames with the RGB frames using stereo.setDepthAlign().

As I understand it, for RGB alignment the lower-resolution depth frames are upscaled to the RGB image size, up to a maximum of 1080p. There is a Gist for aligning 12 MP RGB with depth by limiting the output size of the stereo depth node; however, that script does not use dai.node.Sync for syncing RGB and depth. In my experiments, dai.node.Sync does not seem to produce MessageGroups of (rgb, depth) for RGB images larger than 1080p, and I'm unsure whether the issue lies in the alignment or the syncing.
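For context on what I expect: dai.node.Sync pairs messages whose timestamps fall within a configurable threshold (e.g. 20 ms). The matching rule can be sketched on the host in plain Python; this is only a simplified illustration of the rule, not the device implementation:

```python
from datetime import timedelta

def pair_by_timestamp(rgb_stamps, depth_stamps,
                      threshold=timedelta(milliseconds=20)):
    """Pair each RGB timestamp with the closest depth timestamp,
    keeping the pair only if the difference is within `threshold`.
    Unmatched frames are dropped (simplified sync-node behavior)."""
    pairs = []
    for rgb_ts in rgb_stamps:
        closest = min(depth_stamps, key=lambda d: abs(d - rgb_ts),
                      default=None)
        if closest is not None and abs(closest - rgb_ts) <= threshold:
            pairs.append((rgb_ts, closest))
    return pairs
```

With a 20 ms threshold, frames arriving 30 ms apart would be dropped rather than grouped, which is why matching FPS on all cameras matters.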

Any help would be highly appreciated!

Thanks!

    aschlieb In my experiments, dai.node.Sync does not seem to produce MessageGroups of (rgb, depth) for RGB images larger than 1080p

    What code are you using to sync? I'm pretty sure it should work regardless of the resolution or the alignment.

    Thanks,
    Jaka

    Hi,

    this is the code I use to create the individual Nodes and the Pipeline itself:

    def _create_color_camera_node(
            self, pipeline: dai.Pipeline, mx_id: str
        ) -> tuple[dai.node.ColorCamera, dai.node.XLinkOut]:
            rgb = pipeline.createColorCamera()
            rgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
            if self.rgb_resolution == dai.ColorCameraProperties.SensorResolution.THE_720_P:
                rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
                rgb.setIspScale(2, 3)
            else:
                rgb.setResolution(self.rgb_resolution)
            rgb.setVideoSize(*self.rgb_size)
            rgb.setImageOrientation(self.camera_image_orientation)
            rgb.setInterleaved(False)
    
            if self.fps is not None:
                logging.info(f"Setting FPS of RGB camera: {self.fps} fps")
                rgb.setFps(self.fps)
            else:
                self.fps = rgb.getFps()
    
            self._apply_camera_control(rgb, mx_id)
    
            output_rgb = pipeline.createXLinkOut()
            output_rgb.setStreamName("rgb")
    
            max_size, blocking = (30, True) if self._is_video_capture() else (1, False)
    
            output_rgb.input.setQueueSize(max_size)
            output_rgb.input.setBlocking(blocking)
    
            return rgb, output_rgb
    
        def _create_mono_camera_node(
            self,
            pipeline: dai.Pipeline,
            mx_id: str,
            board_socket: dai.CameraBoardSocket,
            output_name: str,
        ) -> tuple[dai.node.MonoCamera, dai.node.XLinkOut]:
            mono = pipeline.create(dai.node.MonoCamera)
            mono.setBoardSocket(board_socket)
            mono.setResolution(self.mono_resolution)
            if self.fps is not None:
                logging.info(f"Setting FPS of mono {output_name} camera: {self.fps} fps")
                mono.setFps(self.fps)
    
            self._apply_camera_control(mono, mx_id)
    
            output_mono = pipeline.createXLinkOut()
            output_mono.setStreamName(output_name)
    
            return mono, output_mono
    
        def _create_stereo_depth_node(
            self, pipeline: dai.Pipeline, mx_id: str
        ) -> tuple[dai.node.StereoDepth, dai.node.XLinkOut]:
            stereo_depth = pipeline.create(dai.node.StereoDepth)
    
            if self.depth_confidence_threshold is not None:
                stereo_depth.initialConfig.setConfidenceThreshold(
                    self.depth_confidence_threshold
                )
            else:
                stereo_depth.setDefaultProfilePreset(self.depth_preset_mode)
    
            stereo_depth.setLeftRightCheck(not self.no_left_right_check)
            stereo_depth.setExtendedDisparity(not self.no_extended_disparity)
            stereo_depth.setSubpixel(self.depth_subpixel)
            stereo_depth.setDepthAlign(self.depth_align)
            stereo_depth.initialConfig.setMedianFilter(self.depth_median_filter)
    
            # Stereo depth postprocessing configuration
            config = stereo_depth.initialConfig.get()
    
            if not self.no_depth_processing:
                max_background_depth = self.cam_configs["height"][mx_id]
                min_object_depth = max_background_depth - self.max_object_height
    
                config.postProcessing.thresholdFilter.minRange = min_object_depth
                config.postProcessing.thresholdFilter.maxRange = max_background_depth
    
            logging.info(
                f"Depth threshold range (min. object / max. background): {min_object_depth} - {max_background_depth} mm"
            )
    
            stereo_depth.initialConfig.set(config)
    
            stereo_depth.setOutputSize(*self.mono_size)
            logging.info(f"Setting stereo depth output size: {self.mono_size}")
    
            output_depth = pipeline.createXLinkOut()
            output_depth.setStreamName("depth")
    
            return stereo_depth, output_depth
    
        def _get_color_camera_stream(
            self, color_camera: dai.node.ColorCamera
        ) -> dai.Node.Output:
            if self.rgb_raw:
                stream = color_camera.raw
            elif self.rgb_isp:
                stream = color_camera.isp
            else:
                stream = color_camera.video
            return stream
    
        def _create_video_encoder_node(
            self, pipeline: dai.Pipeline, color_camera_node: dai.node.ColorCamera
        ) -> dai.node.VideoEncoder:
            video_encoder = pipeline.createVideoEncoder()
            source_fps = color_camera_node.getFps()
    
            logging.info(f"VideoEncoder frameSize: {self.rgb_size}")
            logging.info(f"VideoEncoder FPS: {color_camera_node.getFps()}")
    
            video_encoder.setDefaultProfilePreset(source_fps, self.encoder)
            return video_encoder
    
        def _create_depth_rgb_sync_node(
            self,
            pipeline: dai.Pipeline,
        ) -> tuple[dai.node.Sync, dai.node.XLinkOut]:
            depth_rgb_sync = pipeline.create(dai.node.Sync)
    
            output_depth_rgb_sync = pipeline.createXLinkOut()
            output_depth_rgb_sync.setStreamName("rgbd")
    
            return depth_rgb_sync, output_depth_rgb_sync
    
        def _create_pipeline(
            self, device_info: dai.DeviceInfo
        ) -> tuple[dai.Pipeline, list[str]]:
            mx_id = device_info.getMxId()
    
            pipeline = dai.Pipeline()
            output_queue_names = []
    
            # ========================================================
    
            # RGB camera node
            if self._is_rgb_capture():
                rgb, output_rgb = self._create_color_camera_node(pipeline, mx_id)
    
            # ========================================================
    
            # Video Encoder node
            if self.encoder:
                video_encoder = self._create_video_encoder_node(
                    pipeline, color_camera_node=rgb
                )
    
            # ========================================================
    
            # Sync node
            if self.sync_rgb_depth:
                depth_rgb_sync, output_depth_rgb_sync = self._create_depth_rgb_sync_node(
                    pipeline
                )
    
            # ========================================================
    
            if self._is_depth_capture():
                # Left mono camera node with output
                mono_left, output_mono_left = self._create_mono_camera_node(
                    pipeline=pipeline,
                    mx_id=mx_id,
                    board_socket=dai.CameraBoardSocket.CAM_B,
                    output_name="left",
                )
    
                # Right mono camera node with output
                mono_right, output_mono_right = self._create_mono_camera_node(
                    pipeline=pipeline,
                    mx_id=mx_id,
                    board_socket=dai.CameraBoardSocket.CAM_C,
                    output_name="right",
                )
    
                # ========================================================
    
                # Stereo depth node
                stereo_depth, output_depth = self._create_stereo_depth_node(pipeline, mx_id)
    
                # ========================================================
    
                # Link left and right mono output to stereo input
                mono_left.out.link(stereo_depth.left)
                mono_right.out.link(stereo_depth.right)
    
                if self.mono_left_right:
                    # Link synced left/right output to input of mono left/right output
                    stereo_depth.syncedLeft.link(output_mono_left.input)
                    stereo_depth.syncedRight.link(output_mono_right.input)
                    output_queue_names += ["left", "right"]
    
                if self.sync_rgb_depth:
                    # Link depth output to depth input of sync node
                    stereo_depth.depth.link(depth_rgb_sync.inputs["depth"])
                else:
                    # Link depth output to input of depth output
                    stereo_depth.depth.link(output_depth.input)
                    output_queue_names.append("depth")
    
                # ========================================================
    
                # Disparity linking
                if self.disparity_depth:
                    output_disparity = pipeline.createXLinkOut()
                    output_disparity.setStreamName("disparity")
    
                    # Link disparity output to input of disparity output
                    stereo_depth.disparity.link(output_disparity.input)
    
                    output_queue_names.append("disparity")
    
                # Rectified Left/Right linking
                if self.rectified_depth:
                    output_rectified_left = pipeline.createXLinkOut()
                    output_rectified_left.setStreamName("rectifiedLeft")
    
                    output_rectified_right = pipeline.createXLinkOut()
                    output_rectified_right.setStreamName("rectifiedRight")
    
                    output_queue_names += ["rectifiedLeft", "rectifiedRight"]
    
                    # Link rectified left/right outputs to their output's inputs
                    stereo_depth.rectifiedLeft.link(output_rectified_left.input)
                    stereo_depth.rectifiedRight.link(output_rectified_right.input)
    
            # ========================================================
    
            # RGB linking
            if self._is_rgb_capture():
    
                def _link_rgbd_sync(rgb_stream: dai.Node.Output) -> None:
                    # Link the RGB data stream to the 'rgb' input of sync node
                    rgb_stream.link(depth_rgb_sync.inputs["rgb"])
                    depth_rgb_sync.out.link(output_depth_rgb_sync.input)
                    output_queue_names.append("rgbd")
    
                def _link_rgb(
                    rgb_stream: dai.Node.Output, output_rgb: dai.node.XLinkOut
                ) -> None:
                    if self.encoder:
                        # Link the RGB data stream to the input of the Encoder
                        rgb_stream.link(video_encoder.input)
                        # Link the encoded bitstream to the RGB output
                        video_encoder.bitstream.link(output_rgb.input)
                    else:
                        # If no encoder is present, link the RGB stream directly to the output
                        rgb_stream.link(output_rgb.input)
                    output_queue_names.append("rgb")
    
                # Get the RGB data stream from the color camera
                rgb_stream = self._get_color_camera_stream(rgb)
    
                if self.sync_rgb_depth:
                    _link_rgbd_sync(rgb_stream)
                else:
                    _link_rgb(rgb_stream, output_rgb)
    
            # ========================================================
    
            return pipeline, output_queue_names

      aschlieb Wah, way too much code.

      Here is how I changed the rgb_align example:

      #!/usr/bin/env python3
      
      import cv2
      import numpy as np
      import depthai as dai
      from datetime import timedelta
      
      # Weights to use when blending depth/rgb image (should equal 1.0)
      rgbWeight = 0.4
      depthWeight = 0.6
      
      
      def updateBlendWeights(percent_rgb):
          """
          Update the rgb and depth weights used to blend depth/rgb image
          @param[in] percent_rgb The rgb weight expressed as a percentage (0..100)
          """
          global depthWeight
          global rgbWeight
          rgbWeight = float(percent_rgb)/100.0
          depthWeight = 1.0 - rgbWeight
      
      
      # Optional. If set (True), the ColorCamera is downscaled from 1080p to 720p.
      # Otherwise (False), the aligned depth is automatically upscaled to 1080p
      downscaleColor = True
      fps = 20
      # The disparity is computed at this resolution, then upscaled to RGB resolution
      monoResolution = dai.MonoCameraProperties.SensorResolution.THE_800_P
      
      # Create pipeline
      pipeline = dai.Pipeline()
      device = dai.Device()
      queueNames = []
      
      # Define sources and outputs
      camRgb = pipeline.create(dai.node.ColorCamera)
      left = pipeline.create(dai.node.MonoCamera)
      right = pipeline.create(dai.node.MonoCamera)
      stereo = pipeline.create(dai.node.StereoDepth)
      
      rgbOut = pipeline.create(dai.node.XLinkOut)
      disparityOut = pipeline.create(dai.node.XLinkOut)
      
      rgbOut.setStreamName("rgb")
      queueNames.append("rgb")
      disparityOut.setStreamName("disp")
      queueNames.append("disp")
      
      #Properties
      camRgb.setBoardSocket(dai.CameraBoardSocket.RGB)
      camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP) # 4056x3040
      
      try:
          calibData = device.readCalibration2()
          lensPosition = calibData.getLensPosition(dai.CameraBoardSocket.RGB)
          if lensPosition:
              camRgb.initialControl.setManualFocus(lensPosition)
      except:
          raise
      left.setResolution(monoResolution)
      left.setBoardSocket(dai.CameraBoardSocket.LEFT)
      left.setFps(fps)
      right.setResolution(monoResolution)
      right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
      right.setFps(fps)
      
      stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
      # LR-check is required for depth alignment
      stereo.setLeftRightCheck(True)
      stereo.setDepthAlign(dai.CameraBoardSocket.RGB)
      # 12 MP RGB is 4056x3040; limit the aligned depth output size
      stereo.setOutputSize(1248, 936)
      
      sync = pipeline.create(dai.node.Sync)
      sync.setSyncThreshold(timedelta(milliseconds=20))
      
      # Linking
      camRgb.isp.link(sync.inputs["rgb"])
      
      left.out.link(stereo.left)
      right.out.link(stereo.right)
      stereo.disparity.link(sync.inputs["disparity"])
      
      demux = pipeline.create(dai.node.MessageDemux)
      sync.out.link(demux.input)
      
      demux.outputs["rgb"].link(rgbOut.input)
      demux.outputs["disparity"].link(disparityOut.input)
      
      # Connect to device and start pipeline
      with device:
          device.startPipeline(pipeline)
      
          frameRgb = None
          frameDisp = None
      
          # Configure windows; trackbar adjusts blending ratio of rgb/depth
          rgbWindowName = "rgb"
          depthWindowName = "depth"
          blendedWindowName = "rgb-depth"
          cv2.namedWindow(rgbWindowName)
          cv2.namedWindow(depthWindowName)
          cv2.namedWindow(blendedWindowName)
          cv2.createTrackbar('RGB Weight %', blendedWindowName, int(rgbWeight*100), 100, updateBlendWeights)
      
          while True:
              latestPacket = {}
              latestPacket["rgb"] = None
              latestPacket["disp"] = None
      
              queueEvents = device.getQueueEvents(("rgb", "disp"))
              for queueName in queueEvents:
                  packets = device.getOutputQueue(queueName).tryGetAll()
                  if len(packets) > 0:
                      latestPacket[queueName] = packets[-1]
      
              if latestPacket["rgb"] is not None:
                  frameRgb = latestPacket["rgb"]
                  print(f"RGB device timestamp: {frameRgb.getTimestampDevice()}")
                  print(f"RGB host timestamp: {frameRgb.getTimestamp()}")
                  frameRgb = cv2.resize(frameRgb.getCvFrame(), (1248, 936), interpolation=cv2.INTER_NEAREST)
                  cv2.imshow(rgbWindowName, frameRgb)
      
              if latestPacket["disp"] is not None:
                  frameDisp = latestPacket["disp"]
                  print(f"Disparity device timestamp: {frameDisp.getTimestampDevice()}")
                  print(f"Disparity host timestamp: {frameDisp.getTimestamp()}")
                  frameDisp = latestPacket["disp"].getFrame()
                  maxDisparity = stereo.initialConfig.getMaxDisparity()
                  # Optional, extend range 0..95 -> 0..255, for a better visualisation
                  if 1: frameDisp = (frameDisp * 255. / maxDisparity).astype(np.uint8)
                  # Optional, apply false colorization
                  if 1: frameDisp = cv2.applyColorMap(frameDisp, cv2.COLORMAP_HOT)
                  frameDisp = np.ascontiguousarray(frameDisp)
                  cv2.imshow(depthWindowName, frameDisp)
      
              # Blend when both received
              if frameRgb is not None and frameDisp is not None:
                  # Need to have both frames in BGR format before blending
                  if len(frameDisp.shape) < 3:
                      frameDisp = cv2.cvtColor(frameDisp, cv2.COLOR_GRAY2BGR)
                  blended = cv2.addWeighted(frameRgb, rgbWeight, frameDisp, depthWeight, 0)
                  cv2.imshow(blendedWindowName, blended)
                  frameRgb = None
                  frameDisp = None
      
              if cv2.waitKey(1) == ord('q'):
                  break

      Thanks,
      Jaka

      Hi,

      thanks for your response! Really appreciate your help.

      I'd rather not demux the messages; I really like getting the MessageGroup object as the output of the Sync node. This is the pattern used in this example from the depthai-python repo.

      Below you'll find two code blocks, based on your modified example code, which illustrate the behavior I currently observe in my own code. As a side note: if we don't set the FPS on the ColorCamera, the sequence numbers won't match within the MessageGroup.

      The working example prints out the (matching) sequence numbers of the depth and RGB frames inside the MessageGroup coming from the Sync node. The non-working example never returns any message, i.e. tryGetAll always returns an empty list.

      This code block contains the working example with 1080p on the RGB and 720p on the Monos:

      #!/usr/bin/env python3
      
      import depthai as dai
      from datetime import timedelta
      
      fps = 20
      # The disparity is computed at this resolution, then upscaled to RGB resolution
      monoResolution = dai.MonoCameraProperties.SensorResolution.THE_720_P
      #monoResolution = dai.MonoCameraProperties.SensorResolution.THE_800_P
      
      # Create pipeline
      pipeline = dai.Pipeline()
      device = dai.Device()
      
      # Define sources and outputs
      camRgb = pipeline.create(dai.node.ColorCamera)
      left = pipeline.create(dai.node.MonoCamera)
      right = pipeline.create(dai.node.MonoCamera)
      stereo = pipeline.create(dai.node.StereoDepth)
      
      rgbOut = pipeline.create(dai.node.XLinkOut)
      rgbOut.setStreamName("rgb")
      
      #Properties
      camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
      camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
      #camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP) # 4056x3040
      
      camRgb.setFps(fps)
      
      try:
          calibData = device.readCalibration2()
          lensPosition = calibData.getLensPosition(dai.CameraBoardSocket.CAM_A)
          if lensPosition:
              camRgb.initialControl.setManualFocus(lensPosition)
      except:
          raise
      left.setResolution(monoResolution)
      left.setBoardSocket(dai.CameraBoardSocket.CAM_B)
      left.setFps(fps)
      right.setResolution(monoResolution)
      right.setBoardSocket(dai.CameraBoardSocket.CAM_C)
      right.setFps(fps)
      
      stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
      # LR-check is required for depth alignment
      stereo.setLeftRightCheck(True)
      stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)
      
      # # 4056x3040
      #stereo.setOutputSize(1248, 936)
      stereo.setOutputSize(1280, 720)
      
      sync = pipeline.create(dai.node.Sync)
      sync.setSyncThreshold(timedelta(milliseconds=20))
      
      # Linking
      camRgb.isp.link(sync.inputs["rgb"])
      
      left.out.link(stereo.left)
      right.out.link(stereo.right)
      stereo.depth.link(sync.inputs["depth"])
      
      sync_out = pipeline.createXLinkOut()
      sync_out.setStreamName("rgbd")
      sync.out.link(sync_out.input)
      
      # Connect to device and start pipeline
      with device:
          device.startPipeline(pipeline)
      
          while True:
              messages = device.getOutputQueue("rgbd").tryGetAll()
              if not messages:
                  continue
              for message_group in messages:
                  for name, frame in message_group:
                      print(f"{name}: {frame.getSequenceNum()}")
                  print("================= End MessageGroup ================")

      The non-working example below uses 12 MP on the RGB and 800p on the Monos:

      #!/usr/bin/env python3
      
      import depthai as dai
      from datetime import timedelta
      
      fps = 20
      # The disparity is computed at this resolution, then upscaled to RGB resolution
      #monoResolution = dai.MonoCameraProperties.SensorResolution.THE_720_P
      monoResolution = dai.MonoCameraProperties.SensorResolution.THE_800_P
      
      # Create pipeline
      pipeline = dai.Pipeline()
      device = dai.Device()
      
      # Define sources and outputs
      camRgb = pipeline.create(dai.node.ColorCamera)
      left = pipeline.create(dai.node.MonoCamera)
      right = pipeline.create(dai.node.MonoCamera)
      stereo = pipeline.create(dai.node.StereoDepth)
      
      rgbOut = pipeline.create(dai.node.XLinkOut)
      rgbOut.setStreamName("rgb")
      
      #Properties
      camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
      #camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
      camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP) # 4056x3040
      
      camRgb.setFps(fps)
      
      try:
          calibData = device.readCalibration2()
          lensPosition = calibData.getLensPosition(dai.CameraBoardSocket.CAM_A)
          if lensPosition:
              camRgb.initialControl.setManualFocus(lensPosition)
      except:
          raise
      left.setResolution(monoResolution)
      left.setBoardSocket(dai.CameraBoardSocket.CAM_B)
      left.setFps(fps)
      right.setResolution(monoResolution)
      right.setBoardSocket(dai.CameraBoardSocket.CAM_C)
      right.setFps(fps)
      
      stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
      # LR-check is required for depth alignment
      stereo.setLeftRightCheck(True)
      stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)
      
      # # 4056x3040
      stereo.setOutputSize(1248, 936)
      #stereo.setOutputSize(1280, 720)
      
      sync = pipeline.create(dai.node.Sync)
      sync.setSyncThreshold(timedelta(milliseconds=20))
      
      # Linking
      camRgb.isp.link(sync.inputs["rgb"])
      
      left.out.link(stereo.left)
      right.out.link(stereo.right)
      stereo.depth.link(sync.inputs["depth"])
      
      sync_out = pipeline.createXLinkOut()
      sync_out.setStreamName("rgbd")
      sync.out.link(sync_out.input)
      
      # Connect to device and start pipeline
      with device:
          device.startPipeline(pipeline)
      
          while True:
              messages = device.getOutputQueue("rgbd").tryGetAll()
              if not messages:
                  continue
              for message_group in messages:
                  for name, frame in message_group:
                      print(f"{name}: {frame.getSequenceNum()}")
                  print("================= End MessageGroup ================")
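      In the meantime, as a stopgap, the two streams can also be paired on the host by sequence number instead of using the device-side Sync node. A minimal sketch of such a fallback (plain Python; in a real loop, `seq` would come from `frame.getSequenceNum()` on messages read from separate `rgb`/`depth` XLinkOut queues):

```python
from collections import OrderedDict

class HostSync:
    """Buffer messages per stream; emit a group as soon as every
    stream has a message with the same sequence number."""

    def __init__(self, streams=("rgb", "depth"), max_buffer=30):
        self.streams = streams
        self.buffers = {name: OrderedDict() for name in streams}
        self.max_buffer = max_buffer

    def add(self, stream, seq, msg):
        buf = self.buffers[stream]
        buf[seq] = msg
        while len(buf) > self.max_buffer:  # drop oldest unmatched entries
            buf.popitem(last=False)
        if all(seq in self.buffers[s] for s in self.streams):
            # Complete group: pop the matched messages and return them
            return {s: self.buffers[s].pop(seq) for s in self.streams}
        return None
```

      This only works when the sequence numbers of both streams actually line up, i.e. when the FPS is set explicitly on all cameras as noted above.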

        Hi aschlieb
        Thanks for pointing it out. Looks like there is some library issue at play since the script works fine when adding a demux node.

        Could you open an issue at luxonis/depthai-python? I will assign one of our devs to it.

        Thanks,
        Jaka

        2 months later

        Dear all,
        I'm facing the same issue while working with an OAK-D W PoE and an OAK-D W on DepthAI Python Library v2.24. In particular, I want to receive the left and center camera frames (the latter is converted to GRAY8, and then both are rectified), along with IMU data. The script works perfectly when the center camera is set to 1080p, but I can't receive any frames when it is set to 12 MP: the thread waits forever on DataOutputQueue.get(). This happens while receiving the MessageGroup stream over XLink.

        On the other hand, if I try to publish the same frames over HTTP with the PoE camera by connecting the Sync node to a Script node, I can receive only one MessageGroup successfully; the second HTTP request then halts forever at DataOutputQueue.get().

        Do you have any hints?

        Here is the debug log with the schema dump:

        Connected to pydev debugger (build 233.14475.56)
        [1944301061195C2700] [169.254.1.222] [1709114955.659] [host] [debug] Schema dump: {"connections":[{"node1Id":1,"node1Output":"isp","node1OutputGroup":"","node2Id":2,"node2Input":"inputImage","node2InputGroup":""},{"node1Id":0,"node1Output":"out","node1OutputGroup":"","node2Id":4,"node2Input":"inputImage","node2InputGroup":""},{"node1Id":2,"node1Output":"out","node1OutputGroup":"","node2Id":5,"node2Input":"inputImage","node2InputGroup":""},{"node1Id":3,"node1Output":"out","node1OutputGroup":"","node2Id":6,"node2Input":"inputImage","node2InputGroup":""},{"node1Id":6,"node1Output":"out","node1OutputGroup":"","node2Id":8,"node2Input":"right","node2InputGroup":"inputs"},{"node1Id":4,"node1Output":"out","node1OutputGroup":"","node2Id":8,"node2Input":"left","node2InputGroup":"inputs"},{"node1Id":5,"node1Output":"out","node1OutputGroup":"","node2Id":8,"node2Input":"center","node2InputGroup":"inputs"},{"node1Id":7,"node1Output":"out","node1OutputGroup":"","node2Id":8,"node2Input":"imu","node2InputGroup":"inputs"},{"node1Id":8,"node1Output":"out","node1OutputGroup":"","node2Id":9,"node2Input":"camera","node2InputGroup":"io"},{"node1Id":7,"node1Output":"out","node1OutputGroup":"","node2Id":9,"node2Input":"imu","node2InputGroup":"io"}],"globalProperties":{"calibData":null,"cameraTuningBlobSize":null,"cameraTuningBlobUri":"","leonCssFrequencyHz":700000000.0,"leonMssFrequencyHz":700000000.0,"pipelineName":null,"pipelineVersion":null,"sippBufferSize":18432,"sippDmaBufferSize":16384,"xlinkChunkSize":-1},"nodes":[[0,{"id":0,"ioInfo":[[["","inputControl"],{"blocking":true,"group":"","id":1,"name":"inputControl","queueSize":8,"type":3,"waitForMessage":false}],[["","out"],{"blocking":false,"group":"","id":2,"name":"out","queueSize":8,"type":0,"waitForMessage":false}],[["","raw"],{"blocking":false,"group":"","id":3,"name":"raw","queueSize":8,"type":0,"waitForMessage":false}],[["","frameEvent"],{"blocking":false,"group":"","id":4,"name":"frameEvent","queueSize":8,"type":0,"waitFo
rMessage":false}]],"name":"MonoCamera","properties":[185,10,185,30,129,0,8,3,0,0,0,185,3,0,0,0,185,5,0,0,0,0,0,185,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,185,3,0,0,0,185,3,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,189,0,255,1,136,0,0,192,65,0,3,3,190]}],[1,{"id":1,"ioInfo":[[["","inputConfig"],{"blocking":false,"group":"","id":5,"name":"inputConfig","queueSize":8,"type":3,"waitForMessage":false}],[["","raw"],{"blocking":false,"group":"","id":10,"name":"raw","queueSize":8,"type":0,"waitForMessage":false}],[["","still"],{"blocking":false,"group":"","id":11,"name":"still","queueSize":8,"type":0,"waitForMessage":false}],[["","inputControl"],{"blocking":true,"group":"","id":6,"name":"inputControl","queueSize":8,"type":3,"waitForMessage":false}],[["","video"],{"blocking":false,"group":"","id":7,"name":"video","queueSize":8,"type":0,"waitForMessage":false}],[["","isp"],{"blocking":false,"group":"","id":8,"name":"isp","queueSize":8,"type":0,"waitForMessage":false}],[["","preview"],{"blocking":false,"group":"","id":9,"name":"preview","queueSize":8,"type":0,"waitForMessage":false}],[["","frameEvent"],{"blocking":false,"group":"","id":12,"name":"frameEvent","queueSize":8,"type":0,"waitForMessage":false}]],"name":"ColorCamera","properties":[185,26,185,30,129,0,8,3,0,0,0,185,3,0,0,0,185,5,0,0,0,0,0,185,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,185,3,0,0,0,185,3,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,189,0,255,0,1,0,129,44,1,129,44,1,255,255,255,255,2,136,0,0,192,65,0,136,0,0,128,191,136,0,0,128,191,1,185,4,0,0,0,0,3,3,4,4,4,190]}],[2,{"id":2,"ioInfo":[[["","inputConfig"],{"blocking":true,"group":"","id":13,"name":"inputConfig","queueSize":8,"type":3,"waitForMessage":false}],[["","inputImage"],{"blocking":true,"group":"","id":14,"name":"inputImage","queueSize":8,"type":3,"waitForMessage":true}],[["","out"],{"blocking":false,"group":"","id":15,"name":"out","queueSize":8,"type":0,"waitForMessage":false}]],"name":"ImageManip","properties":[185,6,185,9,185,7,185,4,136,40,67,129,58,136,0,0,0,0,136,94,191,127,63,136,0,0,128,63
,185,3,185,2,136,0,0,0,0,136,0,0,0,0,185,2,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,0,136,0,0,128,63,136,0,0,128,63,0,1,185,15,0,0,0,0,0,0,186,0,1,0,186,0,0,0,136,0,0,0,0,0,1,185,6,30,0,0,0,0,133,255,0,1,0,1,0,0,255,134,0,198,187,0,4,0,0,189,0]}],[3,{"id":3,"ioInfo":[[["","inputControl"],{"blocking":true,"group":"","id":16,"name":"inputControl","queueSize":8,"type":3,"waitForMessage":false}],[["","out"],{"blocking":false,"group":"","id":17,"name":"out","queueSize":8,"type":0,"waitForMessage":false}],[["","raw"],{"blocking":false,"group":"","id":18,"name":"raw","queueSize":8,"type":0,"waitForMessage":false}],[["","frameEvent"],{"blocking":false,"group":"","id":19,"name":"frameEvent","queueSize":8,"type":0,"waitForMessage":false}]],"name":"MonoCamera","properties":[185,10,185,30,129,0,8,3,0,0,0,185,3,0,0,0,185,5,0,0,0,0,0,185,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,185,3,0,0,0,185,3,0,0,0,0,2,0,0,0,0,0,0,0,0,0,2,189,0,255,1,136,0,0,192,65,0,3,3,190]}],[4,{"id":4,"ioInfo":[[["","inputImage"],{"blocking":true,"group":"","id":20,"name":"inputImage","queueSize":8,"type":3,"waitForMessage":true}],[["","out"],{"blocking":false,"group":"","id":21,"name":"out","queueSize":8,"type":0,"waitForMessage":false}]],"name":"Warp","properties":[185,9,0,0,134,0,160,15,0,4,80,50,189,10,97,115,115,101,116,58,109,101,115,104,188,0,255]}],[5,{"id":5,"ioInfo":[[["","inputImage"],{"blocking":true,"group":"","id":22,"name":"inputImage","queueSize":8,"type":3,"waitForMessage":true}],[["","out"],{"blocking":false,"group":"","id":23,"name":"out","queueSize":8,"type":0,"waitForMessage":false}]],"name":"Warp","properties":[185,9,0,0,134,0,198,187,0,4,133,253,0,133,190,0,189,10,97,115,115,101,116,58,109,101,115,104,188,0,255]}],[6,{"id":6,"ioInfo":[[["","inputImage"],{"blocking":true,"group":"","id":24,"name":"inputImage","queueSize":8,"type":3,"waitForMessage":true}],[["","out"],{"blocking":false,"group":"","id":25,"name":"out","queueSize":8,"type":0,"waitForMessage":false}]],"name":"Warp","properties":[185,9,0,
0,134,0,160,15,0,4,80,50,189,10,97,115,115,101,116,58,109,101,115,104,188,0,255]}],[7,{"id":7,"ioInfo":[[["","out"],{"blocking":false,"group":"","id":26,"name":"out","queueSize":8,"type":0,"waitForMessage":false}]],"name":"IMU","properties":[185,4,186,4,185,5,0,0,0,50,4,185,5,0,0,0,50,5,185,5,0,0,0,50,2,185,5,0,0,0,50,3,1,5,0]}],[8,{"id":8,"ioInfo":[[["inputs","left"],{"blocking":true,"group":"inputs","id":27,"name":"left","queueSize":8,"type":3,"waitForMessage":false}],[["inputs","center"],{"blocking":true,"group":"inputs","id":28,"name":"center","queueSize":8,"type":3,"waitForMessage":false}],[["inputs","right"],{"blocking":true,"group":"inputs","id":29,"name":"right","queueSize":8,"type":3,"waitForMessage":false}],[["inputs","imu"],{"blocking":true,"group":"inputs","id":30,"name":"imu","queueSize":8,"type":3,"waitForMessage":false}],[["","out"],{"blocking":false,"group":"","id":31,"name":"out","queueSize":8,"type":0,"waitForMessage":false}]],"name":"Sync","properties":[185,2,130,128,240,250,2,255]}],[9,{"id":9,"ioInfo":[[["io","camera"],{"blocking":false,"group":"io","id":32,"name":"camera","queueSize":1,"type":3,"waitForMessage":true}],[["io","imu"],{"blocking":true,"group":"io","id":33,"name":"imu","queueSize":128,"type":3,"waitForMessage":true}]],"name":"Script","properties":[185,3,189,14,97,115,115,101,116,58,95,95,115,99,114,105,112,116,189,8,60,115,99,114,105,112,116,62,0]}]]}
        [1944301061195C2700] [169.254.1.222] [1709114955.659] [host] [debug] Asset map dump: {"map":{"/node/4/mesh":{"alignment":64,"offset":0,"size":32000},"/node/5/mesh":{"alignment":64,"offset":32000,"size":386080},"/node/6/mesh":{"alignment":64,"offset":418112,"size":32000},"/node/9/__script":{"alignment":64,"offset":450112,"size":10908}}}
        [1944301061195C2700] [169.254.1.222] [5.365] [MonoCamera(0)] [info] Using board socket: 1, id: 1
        [1944301061195C2700] [169.254.1.222] [5.365] [MonoCamera(3)] [info] Using board socket: 2, id: 2
        [1944301061195C2700] [169.254.1.222] [5.367] [system] [info] SIPP (Signal Image Processing Pipeline) internal buffer size '18432'B, DMA buffer size: '16384'B
        [1944301061195C2700] [169.254.1.222] [5.402] [system] [info] ImageManip internal buffer size '381952'B, shave buffer size '35840'B
        [1944301061195C2700] [169.254.1.222] [5.403] [system] [info] ColorCamera allocated resources: no shaves; cmx slices: [10-15] 
        MonoCamera allocated resources: no shaves; cmx slices: [13-15] 
        ImageManip allocated resources: shaves: [15-15] no cmx slices. 
        
        [1944301061195C2700] [169.254.1.222] [5.854] [IMU(7)] [info] IMU product ID:
        [1944301061195C2700] [169.254.1.222] [5.854] [IMU(7)] [info] Part 10004563 : Version 3.9.9 Build 2
        [1944301061195C2700] [169.254.1.222] [5.854] [IMU(7)] [info] Part 10003606 : Version 1.8.0 Build 338
        [1944301061195C2700] [169.254.1.222] [5.854] [IMU(7)] [info] Part 10004135 : Version 5.5.3 Build 162
        [1944301061195C2700] [169.254.1.222] [5.854] [IMU(7)] [info] Part 10004149 : Version 5.1.12 Build 183
        Pipeline started.
        [1944301061195C2700] [169.254.1.222] [6.055] [system] [info] Memory Usage - DDR: 212.53 / 333.46 MiB, CMX: 2.50 / 2.50 MiB, LeonOS Heap: 67.05 / 82.56 MiB, LeonRT Heap: 4.42 / 40.68 MiB / NOC ddr: 41 MB/s
        [1944301061195C2700] [169.254.1.222] [6.055] [system] [info] Temperatures - Average: 47.29C, CSS: 48.92C, MSS 46.90C, UPA: 45.77C, DSS: 47.58C
        [1944301061195C2700] [169.254.1.222] [6.055] [system] [info] Cpu Usage - LeonOS 58.70%, LeonRT: 6.39%
        [1944301061195C2700] [169.254.1.222] [6.327] [IMU(7)] [info] IMU product ID:
        [1944301061195C2700] [169.254.1.222] [6.327] [IMU(7)] [info] Part 10004563 : Version 3.9.9 Build 2
        [1944301061195C2700] [169.254.1.222] [6.327] [IMU(7)] [info] Part 10003606 : Version 1.8.0 Build 338
        [1944301061195C2700] [169.254.1.222] [6.327] [IMU(7)] [info] Part 10004135 : Version 5.5.3 Build 162
        [1944301061195C2700] [169.254.1.222] [6.327] [IMU(7)] [info] Part 10004149 : Version 5.1.12 Build 183
        [1944301061195C2700] [169.254.1.222] [6.525] [Warp(5)] [error] Stereo alignment error: 1, trying to recover.
        [1944301061195C2700] [169.254.1.222] [6.583] [Warp(5)] [error] Stereo alignment error: 2, trying to recover.
        [1944301061195C2700] [169.254.1.222] [6.648] [Warp(5)] [error] Stereo alignment error: 3, trying to recover.
        [1944301061195C2700] [169.254.1.222] [6.701] [Warp(5)] [error] Stereo alignment error: 4, trying to recover.
        [1944301061195C2700] [169.254.1.222] [6.768] [Warp(4)] [error] Stereo alignment error: 5, trying to recover.
        [1944301061195C2700] [169.254.1.222] [6.801] [Warp(4)] [error] Stereo alignment error: 6, trying to recover.
        [1944301061195C2700] [169.254.1.222] [6.834] [Warp(4)] [error] Stereo alignment error: 7, trying to recover.
        [1944301061195C2700] [169.254.1.222] [6.885] [Warp(4)] [error] Stereo alignment error: 8, trying to recover.
        [1944301061195C2700] [169.254.1.222] [6.918] [Warp(4)] [error] Stereo alignment error: 9, trying to recover.
        [1944301061195C2700] [169.254.1.222] [6.952] [Warp(6)] [error] Stereo alignment error: 10, trying to recover.
        [1944301061195C2700] [169.254.1.222] [7.051] [Warp(5)] [error] Stereo alignment error: 11, trying to recover.
        [1944301061195C2700] [169.254.1.222] [7.056] [system] [info] Memory Usage - DDR: 212.53 / 333.46 MiB, CMX: 2.50 / 2.50 MiB, LeonOS Heap: 68.93 / 82.56 MiB, LeonRT Heap: 10.57 / 40.68 MiB / NOC ddr: 586 MB/s
        [1944301061195C2700] [169.254.1.222] [7.056] [system] [info] Temperatures - Average: 49.70C, CSS: 51.58C, MSS 48.25C, UPA: 49.37C, DSS: 49.59C
        [1944301061195C2700] [169.254.1.222] [7.056] [system] [info] Cpu Usage - LeonOS 100.00%, LeonRT: 34.27%
        [1944301061195C2700] [169.254.1.222] [7.107] [Warp(5)] [error] Stereo alignment error: 12, trying to recover.
        [1944301061195C2700] [169.254.1.222] [7.153] [Warp(5)] [error] Stereo alignment error: 13, trying to recover.
        [1944301061195C2700] [169.254.1.222] [7.193] [Warp(4)] [error] Stereo alignment error: 14, trying to recover.
        [1944301061195C2700] [169.254.1.222] [7.226] [Warp(4)] [error] Stereo alignment error: 15, trying to recover.
        [1944301061195C2700] [169.254.1.222] [7.260] [Warp(6)] [error] Stereo alignment error: 16, trying to recover.
        [1944301061195C2700] [169.254.1.222] [7.293] [Warp(6)] [error] Stereo alignment error: 17, trying to recover.
        [1944301061195C2700] [169.254.1.222] [8.059] [system] [info] Memory Usage - DDR: 212.53 / 333.46 MiB, CMX: 2.50 / 2.50 MiB, LeonOS Heap: 70.45 / 82.56 MiB, LeonRT Heap: 10.57 / 40.68 MiB / NOC ddr: 2079 MB/s
        [1944301061195C2700] [169.254.1.222] [8.059] [system] [info] Temperatures - Average: 51.02C, CSS: 52.90C, MSS 50.70C, UPA: 49.59C, DSS: 50.92C
        [1944301061195C2700] [169.254.1.222] [8.059] [system] [info] Cpu Usage - LeonOS 100.00%, LeonRT: 12.01%
        [1944301061195C2700] [169.254.1.222] [8.407] [Script(9)] [warning] Camera server: binding to http://:25000.
        [1944301061195C2700] [169.254.1.222] [9.060] [system] [info] Memory Usage - DDR: 212.53 / 333.46 MiB, CMX: 2.50 / 2.50 MiB, LeonOS Heap: 71.12 / 82.56 MiB, LeonRT Heap: 10.57 / 40.68 MiB / NOC ddr: 1689 MB/s
        [1944301061195C2700] [169.254.1.222] [9.060] [system] [info] Temperatures - Average: 48.41C, CSS: 50.47C, MSS 47.58C, UPA: 47.80C, DSS: 47.80C
        [1944301061195C2700] [169.254.1.222] [9.060] [system] [info] Cpu Usage - LeonOS 55.13%, LeonRT: 2.80%
        [1944301061195C2700] [169.254.1.222] [10.061] [system] [info] Memory Usage - DDR: 212.53 / 333.46 MiB, CMX: 2.50 / 2.50 MiB, LeonOS Heap: 71.12 / 82.56 MiB, LeonRT Heap: 10.57 / 40.68 MiB / NOC ddr: 464 MB/s
        [1944301061195C2700] [169.254.1.222] [10.061] [system] [info] Temperatures - Average: 48.13C, CSS: 50.25C, MSS 47.80C, UPA: 47.35C, DSS: 47.13C
        [1944301061195C2700] [169.254.1.222] [10.061] [system] [info] Cpu Usage - LeonOS 26.05%, LeonRT: 1.10%
        [1944301061195C2700] [169.254.1.222] [10.801] [Script(9)] [warning] Camera server: Waiting... (/frame)
        [1944301061195C2700] [169.254.1.222] [10.802] [Script(9)] [warning] Camera server: Ok!
        [1944301061195C2700] [169.254.1.222] [14.760] [Script(9)] [warning] Camera server: Waiting... (/frame)
        [1944301061195C2700] [169.254.1.222] [1709114972.369] [host] [debug] Device about to be closed...
        [1944301061195C2700] [169.254.1.222] [1709114972.379] [host] [debug] Shutdown OK
        [1944301061195C2700] [169.254.1.222] [1709114972.381] [host] [debug] Timesync thread exception caught: Couldn't read data from stream: '__timesync' (X_LINK_ERROR)
        [1944301061195C2700] [169.254.1.222] [1709114972.381] [host] [debug] Log thread exception caught: Couldn't read data from stream: '__log' (X_LINK_ERROR)
        [1944301061195C2700] [169.254.1.222] [1709114975.552] [host] [debug] Device closed, 3183

        Hi @MatteoL
        Perhaps the threshold is set too low? Try setting setSyncAttempts to 0 so the node forwards frames as soon as they are received.

        Thanks,
        Jaka

        No MessageGroups arrive even if I link only the central camera to the Sync node, i.e. when I sync just a single stream. However, the same setup does work if I reduce the resolution to 1080p.

        I wish I could share more info on the problem, but nothing helpful is logged by the device, even at trace level.
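        As a stopgap while the on-device Sync issue is open, frames can also be paired on the host by timestamp. A minimal sketch (pure Python; `pair_by_timestamp` is a hypothetical helper, and messages are modeled as `(timestamp, payload)` tuples rather than real ImgFrame objects):

```python
from datetime import timedelta

def pair_by_timestamp(rgb_msgs, depth_msgs, threshold=timedelta(milliseconds=17)):
    """Greedily pair RGB and depth messages whose timestamps differ by
    at most `threshold`. Each message is a (timestamp, payload) tuple;
    with real DepthAI frames you would compare frame.getTimestamp()."""
    pairs = []
    i = j = 0
    while i < len(rgb_msgs) and j < len(depth_msgs):
        t_rgb, t_depth = rgb_msgs[i][0], depth_msgs[j][0]
        if abs(t_rgb - t_depth) <= threshold:
            pairs.append((rgb_msgs[i], depth_msgs[j]))
            i += 1
            j += 1
        elif t_rgb < t_depth:
            i += 1  # RGB frame too old to match, drop it
        else:
            j += 1  # depth frame too old to match, drop it
    return pairs
```

        The threshold here (17 ms, about half a frame period at 30 FPS) is an assumption; tune it to your FPS. This trades some device-side convenience for full control over how strict the pairing is.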

        Hi @MatteoL
        Can you create an MRE of the issue in a separate discussion, please?

        Thanks,
        Jaka

        As you wish, but I would simply be copying aschlieb's code into a new thread, since the issue is exactly the same. I am actually replying negatively to jakaskerl because the problem has not been solved as of DepthAI Python Library v2.24.

        Have you tried receiving a MessageGroup holding a 12MP frame from an OAK-D camera via XLink, or getting more than one of them in a Script node running on an OAK-D PoE? If you actually believe the issue has been solved, could you share some code that works?

        @MatteoL
        I can confirm that the code sent above (the one that didn't work when I tried it on Jan 7th) now works.
        I'm using the latest develop branch of depthai (luxonis/depthai-python, tree/develop).

        It hasn't been merged into main afaik, but should be in the next release (this week).

        Thanks,
        Jaka