Hello there,

I'm currently trying to set up the OAK-D SR PoE in a basic configuration and am running into some issues;
maybe someone here can help me out.

When running the following example, I do receive some image data, but there seems to be some misalignment.
Can someone point out what the problem might be? The image looks fine when using depthai-viewer.

Thanks in advance!

depthai version 2.29.0.0
Bootloader 0.0.28


import blobconverter
import cv2
import depthai as dai
import numpy as np

if __name__ == '__main__':

    pipeline = dai.Pipeline()

    # First, we want the Color camera as the output
    cam_rgb = pipeline.createColorCamera()

    # 300x300 will be the preview frame size, available as 'preview' output of the node
    cam_rgb.setPreviewSize(300, 300)
    cam_rgb.setInterleaved(False)
    cam_rgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)

    detection_nn = pipeline.createMobileNetDetectionNetwork()
    # Blob is the Neural Network file, compiled for MyriadX.
    # It contains both the definition and weights of the model
    # We're using a blobconverter tool to retrieve the MobileNetSSD blob
    # automatically from OpenVINO Model Zoo
    detection_nn.setBlobPath(
        blobconverter.from_zoo(name='mobilenet-ssd', shaves=6),
    )

    # Next, we filter out the detections that are below a confidence threshold.
    # Confidence can be anywhere between <0..1>
    detection_nn.setConfidenceThreshold(0.5)

    # XLinkOut is a "way out" from the device. Any data you want to transfer
    # to the host needs to be sent via XLink
    x_out_rgb = pipeline.createXLinkOut()
    x_out_rgb.setStreamName("rgb")

    x_out_nn = pipeline.createXLinkOut()
    x_out_nn.setStreamName("nn")

    cam_rgb.preview.link(x_out_rgb.input)
    cam_rgb.preview.link(detection_nn.input)
    detection_nn.out.link(x_out_nn.input)

    # Pipeline is now finished, and we need to find an available device to run
    # our pipeline. We are using context manager here that will dispose the
    # device after we stop using it
    with dai.Device(pipeline) as device:
        # From this point, the Device will be in "running" mode and will start
        # sending data via XLink

        # To consume the device results, we get two output queues from the
        # device, with stream names we assigned earlier
        q_rgb = device.getOutputQueue(name="rgb")
        q_nn = device.getOutputQueue(name="nn")
        # Here, some of the default values are defined. Frame will be an image
        # from "rgb" stream, detections will contain nn results
        frame = None
        detections = []

        # Since the detections returned by nn have values from <0..1> range,
        # they need to be multiplied by frame width/height to receive the
        # actual position of the bounding box on the image
        def frameNorm(frame, bbox):
            normVals = np.full(len(bbox), frame.shape[0])
            normVals[::2] = frame.shape[1]
            return (np.clip(np.array(bbox), 0, 1) * normVals).astype(int)

        while True:
            # we try to fetch the data from nn/rgb queues. tryGet will return
            # either the data packet or None if there isn't any
            in_rgb = q_rgb.tryGet()
            in_nn = q_nn.tryGet()

            if in_rgb is not None:
                # If the packet from RGB camera is present, we're retrieving
                # the frame in OpenCV format using getCvFrame
                frame = in_rgb.getCvFrame()  # type: ignore
                cv2.imshow("preview", frame)

            if in_nn is not None:
                # when data from nn is received, we take the detections array
                # that contains mobilenet-ssd results
                detections = in_nn.detections  # type: ignore

            if frame is not None:
                for detection in detections:
                    # for each bounding box, we first normalize it to match
                    # the frame size
                    bbox = frameNorm(
                        frame,
                        (
                            detection.xmin,
                            detection.ymin,
                            detection.xmax,
                            detection.ymax,
                        ),
                    )

                    # and then draw a rectangle on the frame to show the
                    # actual result
                    cv2.rectangle(
                        frame,
                        (bbox[0], bbox[1]),
                        (bbox[2], bbox[3]),
                        (255, 0, 0), 2,
                    )
                # After all the drawing is finished, we show the frame on the
                # screen
                cv2.imshow("preview", frame)
                cv2.imwrite("log\\test_frame.jpg", frame)

            # at any time, you can press "q" and exit the main loop,
            # therefore exiting the program itself
            if cv2.waitKey(1) == ord('q'):
                break

wLahn
I suspect it's either because you are not manually selecting the camera socket (the SR PoE has its RGB cameras connected to CAM_B and CAM_C, while the default is CAM_A), or because your OpenCV version is broken.

Thanks,
Jaka

jakaskerl Thanks for your help!

As you pointed out, the reason was that I sampled CAM_A (the default).
It now works as expected when selecting the correct socket (CAM_B/CAM_C).