DepthAI Camera Crashing - Due to USB bandwidth issues?

I am currently working on a project that uses a YOLO v6 detection network to recognize vehicles as they drive through traffic. My OAK-1 Lite camera is positioned on an overhead bridge, so I am looking down on the vehicles. I am using the YoloDetectionNetwork and ObjectTracker nodes, which work extremely well - I can run at 20-21 FPS. However, the camera only works for roughly 3-5 minutes. After that, it crashes with the following message:

RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'preview' (X_LINK_ERROR)'

I am connecting the camera to a Raspberry Pi 3 through a USB 3 to USB Type-C cable. The camera also seems to crash when it has to make many detections at once; it can run for as long as needed when the scene is mostly stationary. What is causing this problem? Thanks.

    Hi zstarr4098
    Could be that the RPi is not capable of processing the info fast enough, which creates a blockage in the queues and kills the pipeline. It could also be that the NN is too demanding when multiple vehicles are present at once.
    I believe you can solve this by making your links/queues non-blocking. That way, if a queue saturates for a moment, the excess frames are dropped instead of blocking the whole pipeline.
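
    For example, something along these lines (just a sketch - adjust the names to match your pipeline):

    # Non-blocking host-side queues: when the host falls behind, the oldest
    # frames are dropped instead of stalling the XLink connection.
    preview = device.getOutputQueue(name="preview", maxSize=4, blocking=False)
    tracklets = device.getOutputQueue(name="tracklets", maxSize=4, blocking=False)

    # Device-side node inputs can be made non-blocking the same way:
    detectionNetwork.input.setBlocking(False)
    detectionNetwork.input.setQueueSize(2)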

    Thanks,
    Jaka

    Hi @jakaskerl ,

    Currently, I have 2 output queues, both set to maxSize=4 and blocking=False. Should my links be non-blocking as well, and how would I do that? Also, could the USB connection have something to do with it? (I would prefer to keep this YOLO v6 NN, as it is the only one that works for my project.) I have attached my code below. Thanks!

    from pathlib import Path
    import cv2
    import depthai as dai
    import numpy as np
    import time as time1
    from time import time, ctime
    import blobconverter
    from pynput import mouse
    import params
    import RPi.GPIO as GPIO
    import os
    import datetime
    
    os.chdir("/home/fyp2022/Desktop/YOLODataCollector/")
    
    GPIO.setmode(GPIO.BCM)
    GPIO.setwarnings(False)
    GPIO.setup(14, GPIO.OUT)
    GPIO.setup(23, GPIO.OUT)
    
    labelMap = [
        "pedestrian",     "bicycle",    "car",           "motorbike",     "aeroplane",   "bus",           "train",
        "truck",          "boat",       "traffic light", "fire hydrant",  "stop sign",   "parking meter", "bench",
        "bird",           "cat",        "dog",           "horse",         "sheep",       "cow",           "elephant",
        "bear",           "zebra",      "giraffe",       "backpack",      "umbrella",    "handbag",       "tie",
        "suitcase",       "frisbee",    "skis",          "snowboard",     "sports ball", "kite",          "baseball bat",
        "baseball glove", "skateboard", "surfboard",     "tennis racket", "bottle",      "wine glass",    "cup",
        "fork",           "knife",      "spoon",         "bowl",          "banana",      "apple",         "sandwich",
        "orange",         "broccoli",   "carrot",        "hot dog",       "pizza",       "donut",         "cake",
        "chair",          "sofa",       "pottedplant",   "bed",           "diningtable", "toilet",        "tvmonitor",
        "laptop",         "mouse",      "remote",        "keyboard",      "cell phone",  "microwave",     "oven",
        "toaster",        "sink",       "refrigerator",  "book",          "clock",       "vase",          "scissors",
        "teddy bear",     "hair drier", "toothbrush"
    ]
    
    allowedLabels = ["bicycle", "car", "motorbike", "bus", "truck"]
    
    def getAllowedItems(items):
        whiteList = []
        for i in items:
            if i in labelMap:
                whiteList.append(labelMap.index(i))
            else:
                print("could not find {} in the label map!".format(i))
        return whiteList
    
    def time_convert(sec):
        mins = sec // 60
        sec = sec % 60
        hours = mins // 60
        mins = mins % 60
        deltaTime = datetime.time(hours, mins, sec)
        deltaTime = str(deltaTime).replace(':', '')
        return deltaTime
    
    # Create pipeline
    pipeline = dai.Pipeline()
    
    nnPath = blobconverter.from_openvino("./YOLOv6t_COCO/yolov6t_coco_416x416.xml", "./YOLOv6t_COCO/yolov6t_coco_416x416.bin", shaves=6)
    
    t = time()
    timeOfCreation = str(ctime(t))
    timeInit = t
    
    # Define sources and outputs
    camRgb = pipeline.create(dai.node.ColorCamera)
    detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
    objectTracker = pipeline.create(dai.node.ObjectTracker)
    
    xlinkOut = pipeline.create(dai.node.XLinkOut)
    trackerOut = pipeline.create(dai.node.XLinkOut)
    
    xlinkOut.setStreamName("preview")
    trackerOut.setStreamName("tracklets")
    
    # Properties
    camRgb.setPreviewSize(416, 416)
    camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
    camRgb.setInterleaved(False)
    camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
    camRgb.setFps(22)
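    # note: a 416x416 BGR preview at 22 fps is roughly 22 * 416 * 416 * 3 bytes ≈ 11 MB/s over the link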
    
    detectionNetwork.setConfidenceThreshold(0.5)
    detectionNetwork.setNumClasses(80)
    detectionNetwork.setCoordinateSize(4)
    detectionNetwork.setAnchors([10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319])
    detectionNetwork.setAnchorMasks({"side26": [1, 2, 3], "side13": [3, 4, 5]})
    detectionNetwork.setIouThreshold(0.5)
    detectionNetwork.setBlobPath(nnPath)
    detectionNetwork.setNumInferenceThreads(2)
    detectionNetwork.input.setBlocking(False)
    
    objectTracker.setDetectionLabelsToTrack(getAllowedItems(allowedLabels))  # track only the allowed vehicle classes
    # possible tracking types: ZERO_TERM_COLOR_HISTOGRAM, ZERO_TERM_IMAGELESS, SHORT_TERM_IMAGELESS, SHORT_TERM_KCF
    objectTracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
    # take the smallest ID when new object is tracked, possible options: SMALLEST_ID, UNIQUE_ID
    objectTracker.setTrackerIdAssignmentPolicy(dai.TrackerIdAssignmentPolicy.SMALLEST_ID)
    
    # Linking
    camRgb.preview.link(detectionNetwork.input)
    objectTracker.passthroughTrackerFrame.link(xlinkOut.input)
    
    
    #camRgb.preview.link(objectTracker.inputTrackerFrame)
    detectionNetwork.passthrough.link(objectTracker.inputTrackerFrame)
    
    detectionNetwork.passthrough.link(objectTracker.inputDetectionFrame)
    detectionNetwork.out.link(objectTracker.inputDetections)
    objectTracker.out.link(trackerOut.input)
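    # Topology recap: camera preview -> YOLO detection -> object tracker;
    # the tracker's passthrough frames feed the "preview" stream and its
    # tracklets feed the "tracklets" stream.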
    
    
    GPIO.output(14, GPIO.HIGH)
    GPIO.output(23, GPIO.HIGH)
    leftRight = True
    def click(x, y, button, pressed):
        global leftRight
        if str(button) == "Button.left" and leftRight is not True:
            leftRight = True
            GPIO.output(23, GPIO.HIGH)
            time1.sleep(0.1)
            GPIO.output(23, GPIO.LOW)
        elif str(button) == "Button.right" and leftRight is not False:
            leftRight = False
            GPIO.output(23, GPIO.HIGH)
            time1.sleep(0.1)
            GPIO.output(23, GPIO.LOW)
            time1.sleep(0.1)
            GPIO.output(23, GPIO.HIGH)
            time1.sleep(0.1)
            GPIO.output(23, GPIO.LOW)
    
    listener = mouse.Listener(on_click=click)
    listener.start()
    
    time1.sleep(45)
    
    GPIO.output(14, GPIO.HIGH)
    GPIO.output(23, GPIO.HIGH)
    time1.sleep(0.2)
    GPIO.output(14, GPIO.LOW)
    GPIO.output(23, GPIO.LOW)
    time1.sleep(0.2)
    GPIO.output(14, GPIO.HIGH)
    GPIO.output(23, GPIO.HIGH)
    time1.sleep(0.2)
    GPIO.output(14, GPIO.LOW)
    GPIO.output(23, GPIO.LOW)
    
    dataCollected = False
    
    # Connect to device and start pipeline
    try:
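        # note: maxUsbSpeed=HIGH limits the link to USB 2.0 speed (480 Mbps)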
        with dai.Device(pipeline, maxUsbSpeed=dai.UsbSpeed.HIGH) as device:
    
            GPIO.output(14, GPIO.LOW)
            GPIO.output(23, GPIO.LOW)
    
            preview = device.getOutputQueue("preview", maxSize=4, blocking=False)
            tracklets = device.getOutputQueue("tracklets", maxSize=4, blocking=False)
    
            startTime = time1.monotonic()
            counter = 0
            fps = 0
            frame = None
    
            sensorBox = params.defaultSensorBox
            sensorBoxY = params.defaultSensorBoxY
    
            leftBox = [0, 208]
            rightBox = [208, 416]
    
            bottomBox = [208, 416]
            upperBox = [0, 208]
    
            vehicles = []
            totalVehicles = 0
    
            vehicleDir = None
    
            while True:
                timer = round(time1.monotonic())
                # get() blocks until data arrives; if the device drops, it
                # raises and the outer try/except handles it (the old tryGet()
                # fallback could return None and crash getCvFrame() below)
                imgFrame = preview.get()
                track = tracklets.get()
    
                counter+=1
                current_time = time1.monotonic()
                if (current_time - startTime) > 1 :
                    fps = counter / (current_time - startTime)
                    counter = 0
                    startTime = current_time
    
                color = (255, 0, 0)
                yellow = (255, 255, 0)
                frame = imgFrame.getCvFrame()
                trackletsData = track.tracklets
                for t in trackletsData:
                    roi = t.roi.denormalize(frame.shape[1], frame.shape[0])
                    x1 = int(roi.topLeft().x)
                    y1 = int(roi.topLeft().y)
                    x2 = int(roi.bottomRight().x)
                    y2 = int(roi.bottomRight().y)
    
                    bbox = [x1, y1, x2, y2]
    
                    # horizontal and vertical midpoints of the bounding box
                    midpoint = int(round((x1 + x2) / 2))
                    midpointY = int(round((bbox[1] + bbox[3]) / 2))
    
                    vehicleID = t.id
                    vehicleStatus = str(t.status.name)
    
                    labelID = t.label
    
                    try:
                        label = labelMap[t.label]
                    except IndexError:
                        label = t.label
    
                    if leftRight:
                        if vehicleStatus == "NEW" and bbox[2] <= leftBox[1]:
                            vehicleDir = 0
                        elif vehicleStatus == "NEW" and bbox[0] >= rightBox[0]:
                            vehicleDir = 1
                        if vehicleID not in vehicles and sensorBox[0] <= midpoint <= sensorBox[1]:
                            if vehicleStatus == "TRACKED":
                                t = time()
                                currentTime = ctime(t)
                                stopwatchTime = round(t - timeInit)
                                vehicles.append(vehicleID)
                                totalVehicles += 1
                                with open(f"./Data/Data Collected @ {timeOfCreation}.txt", "a") as fp:
                                    fp.write("{},{},{},{}\n".format(totalVehicles, time_convert(stopwatchTime), vehicleDir, labelID))
                                dataCollected = True
                        if vehicleStatus == "REMOVED" or sensorBox[0] > midpoint or sensorBox[1] < midpoint:
                            if vehicleID in vehicles:
                                vehicles.remove(vehicleID)
                    elif not leftRight:
                        if vehicleStatus == "NEW" and bbox[1] <= upperBox[1]:
                            vehicleDir = 0
                        elif vehicleStatus == "NEW" and bbox[3] >= bottomBox[0]:
                            vehicleDir = 1
    
                        if vehicleID not in vehicles and sensorBoxY[0] <= midpointY <= sensorBoxY[1]:
                            if vehicleStatus == "TRACKED":
                                t = time()
                                currentTime = ctime(t)
                                stopwatchTime = round(t - timeInit)
                                vehicles.append(vehicleID)
                                totalVehicles += 1
                                with open(f"./Data/Data Collected @ {timeOfCreation}.txt", "a") as fp:
                                    fp.write("{},{},{},{}\n".format(totalVehicles, time_convert(stopwatchTime), vehicleDir, labelID))
                                dataCollected = True
                        if vehicleStatus == "REMOVED" or sensorBoxY[0] > midpointY or sensorBoxY[1] < midpointY:
                            if vehicleID in vehicles:
                                vehicles.remove(vehicleID)
                                print("removed!")
    
                    if len(vehicles) != 0:
                        GPIO.output(23, GPIO.HIGH)
                    else:
                        GPIO.output(23, GPIO.LOW)
                    """cv2.putText(frame, str(label), (x1 + 10, y1 + 20), cv2.FONT_HERSHEY_TRIPLEX, 0.5, 255)
                    cv2.putText(frame, f"ID: {[vehicleID]}", (x1 + 10, y1 + 35), cv2.FONT_HERSHEY_TRIPLEX, 0.5, 255)
                    cv2.putText(frame, vehicleStatus, (x1 + 10, y1 + 50), cv2.FONT_HERSHEY_TRIPLEX, 0.5, 255)
                    cv2.rectangle(frame, (x1, y1), (x2, y2), color, cv2.FONT_HERSHEY_SIMPLEX)
    
                if leftRight:
                    cv2.rectangle(frame, (sensorBox[0], 0), (sensorBox[1], 416), (0, 0, 255), 1)
                    cv2.rectangle(frame, (leftBox[0], 0), (leftBox[1], 416), yellow, 1)
                    cv2.rectangle(frame, (rightBox[0], 0), (rightBox[1], 416), yellow, 1)
                elif not leftRight:
                    cv2.rectangle(frame, (0, sensorBoxY[0]), (416, sensorBoxY[1]), (0, 0, 255), 1)
                    cv2.rectangle(frame, (0, bottomBox[0]), (416, bottomBox[1]), yellow, 1)
                    cv2.rectangle(frame, (0, upperBox[0]), (416, upperBox[1]), yellow, 1)
    
                cv2.putText(frame, "NN fps: {:.2f}".format(fps), (2, frame.shape[0] - 4), cv2.FONT_HERSHEY_TRIPLEX, 0.7, color)
    
                cv2.imshow("tracker", frame)"""       
                if (timer % 2) == 0:
                    GPIO.output(14, GPIO.HIGH)
                else:
                    GPIO.output(14, GPIO.LOW)
    
                cv2.waitKey(1)
    except:
        if dataCollected:
            GPIO.output(14, GPIO.LOW)
            GPIO.output(23, GPIO.LOW)
            time1.sleep(1)
            for i in range(5):
                GPIO.output(14, GPIO.HIGH)
                time1.sleep(0.1)
                GPIO.output(14, GPIO.LOW)
                time1.sleep(0.1)
        else:
            print("error!")
            for i in range(5):
                GPIO.output(23, GPIO.LOW)
                time1.sleep(0.1)
                GPIO.output(23, GPIO.HIGH)
                time1.sleep(0.1)
            GPIO.output(23, GPIO.LOW)
        print("code cancelled")

    zstarr4098 Have you checked power? The Pi 3B takes its power over microUSB, which is terrible for supplying decent amperage. Get a USB power meter pass-through plug. I use this one: https://www.amazon.com/gp/product/B07FMQZVW2 (written out because the forum software mangles raw links) - plug it between the depthai device and the USB-C converter.

    If you see it dip more than 5-10% below 5V while running, then either the resistance in the power cable is too high or the supply is dropping voltage to keep up with the power draw. Look up 'undervoltage in raspberry pi' for more information. My solution is to use a Y cable on the USB side: data goes through the Pi, and power comes from the same power supply the Pi uses, so the microUSB cable is not the limiter. Note that this will not work if the supply itself is the limiting factor or if you use two different supplies.
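
    By the way, the Pi also records undervoltage events in its throttle flags, so you can check from software. A quick sketch (uses the standard vcgencmd get_throttled bits; run it on the Pi):

    import subprocess

    def check_undervoltage():
        # 'vcgencmd get_throttled' prints something like 'throttled=0x50005'
        out = subprocess.check_output(["vcgencmd", "get_throttled"], text=True)
        flags = int(out.strip().split("=")[1], 16)
        if flags & 0x1:       # bit 0: under-voltage detected right now
            print("under-voltage detected")
        if flags & 0x10000:   # bit 16: under-voltage has occurred since boot
            print("under-voltage has occurred since boot")
        return flags

    check_undervoltage()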

    PS Luxonis, please fix your text editor; it is atrocious.

      Hi vital ,

      I think that the power could certainly be the issue - I have my Raspberry Pi connected to a 10,000 mAh power bank with a micro-USB output of 5V/2.4A, and it does give me a low-voltage warning. I also have a 20,000 mAh power bank with a 5V/3A output, so I will use that one to power the DepthAI camera.

      Clarifying questions about the Y cable solution: since my DepthAI camera only has one USB-C port, how would I orient the cable? Also, if the output of this power bank is 5V/3A and the Raspberry Pi only needs 5V/2.5A, would I still need this solution? Thanks.

        zstarr4098 Powerbanks are made for charging devices, not powering them in use. The batteries in the devices being charged are nominally 3.7V per cell, so drooping below 5V is generally not an issue for them; as a result, many powerbanks are terrible at regulating output voltage. If yours has a USB-C output with PD, you are in better straits, because the PD spec requires a steady 5V output. Keep in mind that the official Pi power supply puts out 5.1V instead of 5V to compensate for voltage drop on the microUSB cable, so even at a steady 5V you might have problems.

        The Y cable I use has a blue USB 3.0 port and a white USB 2.0 port on the Y end, and a blue USB 3.0 port on the single end. The blue Y port is for data, the white Y port is for power, and the single end goes to the depthai device. I got it from the site which shall not be named lest the forum software go nuts on me, but if you search for "zdyCGTime USB 3.0 Extender Cable USB 3.0 Female to USB 3.0 & USB 2.0 Male Extra Power Data Y Splitter Charger Extension Cable(33CM/13inch)" you will find it.

          Hi vital ,

          Thanks. In that case, should I find an alternate way of powering the camera? (My project is outdoors and must be powered from a remote source.) Would a battery pack or something similar be better for that?

            zstarr4098 FYI: Don't use two different powerbanks at once. Use the same powerbank with two different ports.

            zstarr4098 Just get a powerbank that has two USB-C PD ports on it and make sure the USB-C to USB-A converters you use have the 5.1k resistors in them to tell the powerbank you want 5V.

              vital Ok. So to summarize: I need to get a power bank (does it matter whether it is 10,000 or 20,000 mAh?) that has two USB-C PD ports, a Y cable, and 2 USB-C to USB-A converters that have the 5.1 kΩ resistors.

              2 questions: Can I not just use a separate power bank to power my Raspberry Pi (it is attached to the case)? Also, how can I check whether my USB-C to USB-A converters have the 5.1 kΩ resistors? Thanks.

                zstarr4098

                Part 1: Yes

                Part 2: You can't run them off different power sources because the USB data lines and the USB power are still tied together through the depthai device. The two power sources have their own voltage regulators, which vary slightly with load and heat and run on different on-off cycles (switch-mode regulation turns the voltage on and off rapidly to regulate it), and they will not be in sync. It may work, it may give you strange errors that you try to debug forever, or it may open a black hole that sends your Pi into another dimension -- the fact is, 'I don't know what will happen if I do this' is a bad idea when you are trying to get something working that has never worked before.

                Part 3: Make sure the converter supports 'fast charging' or 'charging', not just 'high speed data'.

                  vital oh okay that sounds scary… Thanks a lot! I will post an update once I test it out.

                  -Zach

                  Hello vital,

                  I can't find the USB Y cable anywhere unless I order it, which will take more than two weeks to arrive. Is there any other way I could work around the voltage issue?

                  zstarr4098 Get some 26-22 AWG insulated butt connectors and two USB 2.0 cables. Cut the connectors off and strip the wires so you have three pigtails: two 'Y ends' (one for power, one for data) and one 'single end' for the depthai device. Twist the black (ground) wires from the two Y ends together and crimp them in a butt connector with the black wire from the single end. Then connect the red (5V power) wire from one Y end to the single end, and the green and white (data) wires from the other Y end to the single end. Ziptie the bundle. Voila, you have a USB Y cable: red is power, green and white are data, black is ground.