• Reopen camera after device.close() throws segmentation fault

My code is quite long, but essentially I am calling device.close() on a depthai.Device. After that I try to reopen the camera, but I hit a hard error when I call device.getConnectedCameras(). I'm not sure if I am doing something wrong here or what the issue is; I would appreciate any suggestions or advice. I am running on a Raspberry Pi Zero (ARMv6). Thank you!

Error details:

Stack trace (most recent call last):
#5    Object "/home/pi/.local/lib/python3.9/site-packages/depthai.cpython-39-arm-linux-gnueabihf.so", at 0xb569ddeb, in  
#4    Object "/home/pi/.local/lib/python3.9/site-packages/depthai.cpython-39-arm-linux-gnueabihf.so", at 0xb56df557, in  
#3    Object "/home/pi/.local/lib/python3.9/site-packages/depthai.cpython-39-arm-linux-gnueabihf.so", at 0xb5972f93, in dai:😃eviceBase::getConnectedCameras()
#2    Object "/home/pi/.local/lib/python3.9/site-packages/depthai.cpython-39-arm-linux-gnueabihf.so", at 0xb59985a7, in nanorpc::core::client<nanorpc::packer::nlohmann_msgpack>::result nanorpc::core::client<nanorpc::packer::nlohmann_msg
pack>::call<>(unsigned long long)
#1    Object "/lib/arm-linux-gnueabihf/libc.so.6", at 0xb6cff90f, in  
#0    Object "/home/pi/.local/lib/python3.9/site-packages/depthai.cpython-39-arm-linux-gnueabihf.so", at 0xb5a434b3, in backward::SignalHandling::sig_handler(int, siginfo_t*, void*)
Segmentation fault (Address not mapped to object [0x8])
Segmentation fault

    Hi bgro82
    You will have to wait some time before connecting to your camera again. Also make sure you are running the latest depthai version.
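
    If the device still shows up as unavailable right after close(), you can also just retry the connection a few times, for example (only a sketch; the open_device_with_retry helper and the retry count/delay are my own, not part of the library):

    import time
    import depthai as dai

    def open_device_with_retry(pipeline, retries=5, delay_s=3):
        # Illustrative helper: the previous connection can take a moment to be
        # released after close(), so retry a few times before giving up.
        for attempt in range(retries):
            try:
                return dai.Device(pipeline)
            except RuntimeError as err:
                print(f"Device not available yet ({err}), retrying in {delay_s}s...")
                time.sleep(delay_s)
        raise RuntimeError("Could not reconnect to the device")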

    Thanks,
    Jaka

    6 days later

    Thanks, I am running depthai 2.22.0.0 and am still hitting the same error, even when waiting up to 4 minutes between closing the camera and attempting to reopen it. Is there a set amount of time I should be waiting? Is there anything else I can/should do before I try to reopen the camera?

      Hi bgro82
      I was thinking more like 10 seconds max, so the camera not being able to connect after 4 minutes is a different problem. Can you shave the code down as much as possible while still reproducing the error and paste it below? Keep as little functionality as you can.

      Thanks,
      Jaka

      Sure thing, I just culled things down and verified this still throws the same error. It wouldn't let me upload the .py file here, but let me know if it's easier to send it to you some other way.

      import time
      import depthai as dai
      import blobconverter

      def create_pipeline(depth):
          syncNN = True

          # Start defining a pipeline
          pipeline = dai.Pipeline()
          # pipeline.setOpenVINOVersion(version=dai.OpenVINO.Version.VERSION_2021_4)
          # Define a source - color camera
          colorCam = pipeline.create(dai.node.ColorCamera)

          # Define source and output for system info (temps/cpu)
          sysLog = pipeline.create(dai.node.SystemLogger)
          linkOut = pipeline.create(dai.node.XLinkOut)

          linkOut.setStreamName("sysinfo")

          # set system info pipeline to 1Hz sample rate
          sysLog.setRate(1)

          # Link
          sysLog.out.link(linkOut.input)

          if depth:
              mobilenet = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
              monoLeft = pipeline.create(dai.node.MonoCamera)
              monoRight = pipeline.create(dai.node.MonoCamera)
              stereo = pipeline.create(dai.node.StereoDepth)
          else:
              mobilenet = pipeline.create(dai.node.MobileNetDetectionNetwork)

          colorCam.setPreviewSize(512, 512)
          colorCam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
          colorCam.setInterleaved(False)
          colorCam.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)

          mobilenet.setBlobPath(blobconverter.from_zoo("person-vehicle-bike-detection-crossroad-1016", shaves=6, version="2022.1"))

          mobilenet.setConfidenceThreshold(0.5)
          mobilenet.input.setBlocking(False)

          if depth:
              monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
              monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
              monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
              monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)

              # Setting node configs
              stereo.initialConfig.setConfidenceThreshold(255)
              stereo.depth.link(mobilenet.inputDepth)
              stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)

              mobilenet.setBoundingBoxScaleFactor(0.5)
              mobilenet.setDepthLowerThreshold(100)
              mobilenet.setDepthUpperThreshold(5000)

              monoLeft.out.link(stereo.left)
              monoRight.out.link(stereo.right)

          xoutRgb = pipeline.create(dai.node.XLinkOut)
          xoutRgb.setStreamName("rgb")
          colorCam.preview.link(mobilenet.input)

          if syncNN:
              mobilenet.passthrough.link(xoutRgb.input)
          else:
              colorCam.preview.link(xoutRgb.input)

          xoutNN = pipeline.create(dai.node.XLinkOut)
          xoutNN.setStreamName("detections")
          mobilenet.out.link(xoutNN.input)

          return pipeline

      with dai.Device() as camera_device:

          while True:
              print("Initiating camera")
              # initiate camera and pipeline
              cams = camera_device.getConnectedCameras()
              depth_enabled = dai.CameraBoardSocket.CAM_B in cams and dai.CameraBoardSocket.CAM_C in cams

              # Start pipeline
              camera_device.startPipeline(create_pipeline(depth_enabled))

              # Output queues will be used to get the rgb frames and nn data from the outputs defined above
              previewQueue = camera_device.getOutputQueue(name="rgb", maxSize=1, blocking=False)
              detectionNNQueue = camera_device.getOutputQueue(name="detections", maxSize=1, blocking=False)
              sysinfo_queue = camera_device.getOutputQueue(name="sysinfo", maxSize=1, blocking=False)

              frame = None
              detections = []
              frame_counter = 0
              color = (255, 255, 255)

              # camera loop
              print("Starting camera loop for 200 iterations")
              while frame_counter < 200:
                  inPreview = previewQueue.get()
                  frame_stereo = inPreview.getCvFrame()

                  inNN = detectionNNQueue.get()
                  detections = inNN.detections

                  frame_counter += 1

              print("Closing camera")
              camera_device.close()

              time.sleep(120)

        Hi bgro82
        What I changed: you had camera_device created outside your loop. When the first iteration ended, close() shut the device down, making it unavailable in the next iteration, so I moved the device creation inside the loop.

        Updated script:

        import time
        import depthai as dai
        import blobconverter
        
        def create_pipeline(depth):
            syncNN = True
        
            # Start defining a pipeline
            pipeline = dai.Pipeline()
            # pipeline.setOpenVINOVersion(version=dai.OpenVINO.Version.VERSION_2021_4)
            # Define a source - color camera
            colorCam = pipeline.create(dai.node.ColorCamera)
        
            # Define source and output for system info (temps/cpu)
            sysLog = pipeline.create(dai.node.SystemLogger)
            linkOut = pipeline.create(dai.node.XLinkOut)
        
            linkOut.setStreamName("sysinfo")
        
            # set system info pipeline to 1Hz sample rate
            sysLog.setRate(1)
        
            # Link
            sysLog.out.link(linkOut.input)
        
            if depth:
                mobilenet = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
                monoLeft = pipeline.create(dai.node.MonoCamera)
                monoRight = pipeline.create(dai.node.MonoCamera)
                stereo = pipeline.create(dai.node.StereoDepth)
            else:
                mobilenet = pipeline.create(dai.node.MobileNetDetectionNetwork)
        
            colorCam.setPreviewSize(512, 512)
            colorCam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
            colorCam.setInterleaved(False)
            colorCam.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
        
            mobilenet.setBlobPath(blobconverter.from_zoo("person-vehicle-bike-detection-crossroad-1016", shaves=6, version="2022.1"))
        
            mobilenet.setConfidenceThreshold(0.5)
            mobilenet.input.setBlocking(False)
        
            if depth:
                monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
                monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
                monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
                monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
        
                # Setting node configs
                stereo.initialConfig.setConfidenceThreshold(255)
                stereo.depth.link(mobilenet.inputDepth)
                stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)
        
                mobilenet.setBoundingBoxScaleFactor(0.5)
                mobilenet.setDepthLowerThreshold(100)
                mobilenet.setDepthUpperThreshold(5000)
        
                monoLeft.out.link(stereo.left)
                monoRight.out.link(stereo.right)
        
            xoutRgb = pipeline.create(dai.node.XLinkOut)
            xoutRgb.setStreamName("rgb")
            colorCam.preview.link(mobilenet.input)
        
            if syncNN:
                mobilenet.passthrough.link(xoutRgb.input)
            else:
                colorCam.preview.link(xoutRgb.input)
        
            xoutNN = pipeline.create(dai.node.XLinkOut)
            xoutNN.setStreamName("detections")
            mobilenet.out.link(xoutNN.input)
        
            return pipeline
        
        if __name__ == "__main__":
        
            while True:
                camera_device = dai.Device(create_pipeline(True))
                print("Initiating camera")
                # initiate camera and pipeline
                
                cams = camera_device.getConnectedCameras()
                
                depth_enabled = dai.CameraBoardSocket.CAM_B in cams and dai.CameraBoardSocket.CAM_C in cams
                # Start pipeline
        
                # Output queues will be used to get the rgb frames and nn data from the outputs defined above
                previewQueue = camera_device.getOutputQueue(name="rgb", maxSize=1, blocking=False)
                detectionNNQueue = camera_device.getOutputQueue(name="detections", maxSize=1, blocking=False)
                sysinfo_queue = camera_device.getOutputQueue(name="sysinfo", maxSize=1, blocking=False)
        
                frame = None
                detections = []
                frame_counter = 0
                color = (255, 255, 255)
        
                # camera loop
                print("Starting camera loop for 200 iterations")
                while frame_counter < 200:
                    inPreview = previewQueue.get()
                    frame_stereo = inPreview.getCvFrame()
        
                    inNN = detectionNNQueue.get()
                    detections = inNN.detections
        
                    frame_counter += 1
        
                print("Closing camera")
                camera_device.close()
        
                time.sleep(10)

        Thanks,
        Jaka

        Oh man, so simple; I should have realized that. I appreciate you taking the time to help me solve it, thank you!

        6 days later

        @jakaskerl this question is not directly related to my earlier one, but is it possible to run the same code in a "passthrough" type of state where only RGB frames are pulled from the camera and no NN is running? Not looking to have you write my code for me, but any suggestions would be appreciated. I've tried excluding everything involving mobilenet and xoutNN, but that still gives an error: "RuntimeError: StereoDepth(5) - No output of StereoDepth is connected/used!" I'm not sure if what I'm trying to do just isn't possible or if I'm going about it in the wrong way.

          Hi bgro82
          Yes, it's possible, but you will have to put everything to do with the NN side of the pipeline under an IF block. If you only want RGB, then you have to make sure you don't create a StereoDepth node at all --> leaving it created but unconnected is what causes the error you are experiencing.

          Remove the red part and directly link the ColorCamera to the XLinkOut. You should be able to achieve this with normal IF statements.
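
          Something along these lines (just a sketch reusing your node names; the use_nn flag is only illustrative):

          import depthai as dai

          def create_pipeline(use_nn):
              pipeline = dai.Pipeline()

              # The color camera and its RGB output are always created
              colorCam = pipeline.create(dai.node.ColorCamera)
              colorCam.setPreviewSize(512, 512)
              colorCam.setInterleaved(False)

              xoutRgb = pipeline.create(dai.node.XLinkOut)
              xoutRgb.setStreamName("rgb")

              if use_nn:
                  # NN / StereoDepth / detection output setup goes here,
                  # exactly as in your current script
                  pass
              else:
                  # passthrough mode: no NN and no StereoDepth node, just RGB frames out
                  colorCam.preview.link(xoutRgb.input)

              return pipeline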

          Thanks,
          Jaka

          Amazing, figured it out, thank you so much!