• DepthAI-v2
  • OAK-D SR YOLOv8 spatial detection network

Hi yjenkim
Use left and right ColorCameras and link them to StereoDepth; also link the left camera to the YOLO network. Depth is aligned to the left color camera by default, so no additional calls are needed.
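[Editor's note] That wiring can be sketched as a minimal pipeline-configuration fragment. This is an untested outline, not a complete script: it assumes the DepthAI v2 API, that CAM_B/CAM_C are the OAK-D SR's left/right color sockets, and that `nnPath` points at your YOLOv8 blob.

```python
import depthai as dai

pipeline = dai.Pipeline()

# The OAK-D SR has two color sensors: left on CAM_B, right on CAM_C.
left = pipeline.create(dai.node.ColorCamera)
right = pipeline.create(dai.node.ColorCamera)
left.setBoardSocket(dai.CameraBoardSocket.CAM_B)
right.setBoardSocket(dai.CameraBoardSocket.CAM_C)

stereo = pipeline.create(dai.node.StereoDepth)
nn = pipeline.create(dai.node.YoloSpatialDetectionNetwork)
nn.setBlobPath(nnPath)  # nnPath: your model blob (assumed defined elsewhere)

# Feed stereo from both cameras; depth is aligned to the left camera by default,
# so no explicit setDepthAlign() call is needed.
left.isp.link(stereo.left)
right.isp.link(stereo.right)

# The left camera also feeds the detection network, with the preview sized
# to the model's input resolution.
left.setPreviewSize(640, 640)
left.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)
```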

Thanks,
Jaka

    When I use the modified links, I get an error that says "Specified camera board socket already used." I suspect this is due to the same CAM_B being linked to multiple things. Am I understanding this right? How can I fix this?

    jakaskerl

    @yjenkim you are trying to create CAM_B twice. Just remove all lines related to camRgb and use colorLeft instead (as the color stream source).

      erik Aha, thanks.

      [19443010B17B772700] [1.5.2] [2.814] [StereoDepth(3)] [error] Left input image stride ('900') should be equal to its width ('300'). Skipping frame!

      What does this error mean? Is it referring to the stride of the model, or is it FPS-related?

      I do set the output size correctly for stereo, so I'm a bit confused.
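[Editor's note] The stride is the number of bytes per image row, which is a hint about what the error means: a 300-pixel-wide interleaved 3-channel frame occupies 300 × 3 = 900 bytes per row, matching the numbers in the message. Assuming an interleaved BGR frame is being sent, StereoDepth appears to be receiving a color-interleaved image where it expects one whose stride equals its width (a single grayscale/luma plane). A quick check of that arithmetic with a hypothetical frame:

```python
import numpy as np

# A 300x300 interleaved BGR frame: each row is 300 pixels x 3 bytes = 900 bytes.
frame = np.zeros((300, 300, 3), dtype=np.uint8)
row_stride = frame.strides[0]
print(row_stride)  # 900 -> matches the stride in the error message

# A single-plane (grayscale) frame of the same width has stride == width.
gray = np.zeros((300, 300), dtype=np.uint8)
print(gray.strides[0])  # 300 -> what StereoDepth expects for width 300
```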

    @yjenkim can you prepare an MRE (minimal reproducible example)? Sending over a screenshot of the code isn't the most user-friendly.

        erik

        pipeline = dai.Pipeline()
        
        spatialDetectionNetwork = pipeline.create(dai.node.YoloSpatialDetectionNetwork)
        colorLeft = pipeline.create(dai.node.ColorCamera)
        colorRight = pipeline.create(dai.node.ColorCamera)
        stereo = pipeline.create(dai.node.StereoDepth)
        
        xoutRgb = pipeline.create(dai.node.XLinkOut)
        nnOut = pipeline.create(dai.node.XLinkOut)
        xoutBoundingBoxDepthMapping = pipeline.create(dai.node.XLinkOut)
        xoutDepth = pipeline.create(dai.node.XLinkOut)
        
        xoutRgb.setStreamName("rgb")
        nnOut.setStreamName("nn")
        xoutBoundingBoxDepthMapping.setStreamName("boundingBoxDepthMapping")
        xoutDepth.setStreamName("depth")
        
        colorLeft.setPreviewSize(640,640)
        colorRight.setPreviewSize(640,640)
        
        colorLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
        colorRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
        
        colorLeft.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
        colorLeft.setVideoSize(W, H)
        colorLeft.setFps(30)
        
        colorRight.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
        colorRight.setVideoSize(W, H)
        colorRight.setFps(30)
        
        stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
        stereo.setDepthAlign(dai.CameraBoardSocket.CAM_B)
        stereo.setOutputSize(W, H)
        stereo.setLeftRightCheck(True)
        stereo.setSubpixel(True)
        
        spatialDetectionNetwork.setBlobPath(nnPath)
        spatialDetectionNetwork.setConfidenceThreshold(confidenceThreshold)
        spatialDetectionNetwork.setNumClasses(classes)
        spatialDetectionNetwork.setCoordinateSize(coordinates)
        spatialDetectionNetwork.setAnchors(anchors)
        spatialDetectionNetwork.setAnchorMasks(anchorMasks)
        spatialDetectionNetwork.setIouThreshold(iouThreshold)
        spatialDetectionNetwork.setNumInferenceThreads(2)
        spatialDetectionNetwork.input.setBlocking(False)
        
        colorLeft.video.link(stereo.left)
        colorRight.video.link(stereo.right)
        
        spatialDetectionNetwork.passthrough.link(xoutRgb.input)
        spatialDetectionNetwork.out.link(nnOut.input)
        spatialDetectionNetwork.boundingBoxMapping.link(xoutBoundingBoxDepthMapping.input)
        stereo.depth.link(spatialDetectionNetwork.inputDepth)
        spatialDetectionNetwork.passthroughDepth.link(xoutDepth.input)

          @yjenkim this is not an MRE, because I can't just run the code and try it out. I'm missing the blob path and the host-side code.

          erik I can't attach the blob and .py files here because it says format not accepted or file too large. Can I have an email?

          @yjenkim this does not work, I get RuntimeError: StereoDepth(5) - No output of StereoDepth is connected/used!. If I uncomment some of the pipeline building at the bottom, I get a different error. Please debug it yourself and make sure that when I run the Python script it will "just work".

            erik Sorry, I sent the wrong version. This one I've checked to make sure it reproduces the error I wish to address. The output queues return None, so when I try to access a result, e.g. frame = inRgb.getCvFrame(), it throws an error; I suspect it's a mistake in the way the pipeline was set up with the cameras.

            @yjenkim tryGet() will return None if there's no message in the queue. Change to .get(), which will block until a message is received, and it should work (it won't throw the same error).

                while True:
                    print("Getting frame")
                    
                    inRgb = qRgb.get()
                    print("Got qRgb frame")
                    # print(inRgb)
                    inDet = qDet.get()
                    print("Got qDet frame")
                    # print(inDet)
                    depth = depthQueue.get()
                    print("Got depthQueue frame")

              erik My issue is that the queue isn't outputting any messages, even when I connect my camera and give it input.

              @yjenkim again, tryGet() is non-blocking: it will return None immediately (as the camera is just starting up) and the program will crash.

                13 days later

                erik The depth values I get seem very off. I hold an object about 30 cm from the camera, and the spatial coordinates emitted for the bounding box are in the >1 m range.
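[Editor's note] One thing worth checking (an assumption, not a confirmed diagnosis): stereo depth follows depth = f · B / disparity, so accuracy at close range depends on the focal length, baseline, and maximum disparity the device can search. The sketch below computes the closest measurable depth from those three numbers; the values are purely illustrative placeholders, not the OAK-D SR's actual calibration.

```python
def min_measurable_depth_cm(focal_px: float, baseline_cm: float,
                            max_disparity: float) -> float:
    """Closest depth a stereo pair can report: depth = f * B / d at max disparity."""
    return focal_px * baseline_cm / max_disparity

# Hypothetical values for illustration only (check your device's calibration):
focal_px = 450.0    # focal length in pixels
baseline_cm = 2.0   # stereo baseline
max_disp = 95 * 8   # e.g. a 95-pixel search range with 3-bit subpixel refinement

print(round(min_measurable_depth_cm(focal_px, baseline_cm, max_disp), 2))  # 1.18
```

Objects closer than this limit, or disparities clipped by the search range, produce unreliable spatial coordinates; comparing the reported disparity against the device's maximum is a quick sanity check.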