• DepthAI-v2
  • Camera stops providing frames after autofocus

Howdy!
I've been working on a C++ application to stream video from the DAI camera over RTSP. I am currently using the OAK-D USB camera, which I have confirmed does not have these same issues in Python. After getting the streaming framework itself written and tested, I was hoping to integrate a simple, unprocessed ColorCamera feed. However, on launching the server and connecting with a client, the camera provides 20 frames of video (counted with print statements) before the client connection times out. I suspected that the problem was not with the streaming itself, but with the camera providing media to the server. I was able to confirm this by removing the streaming from the loop entirely and just attempting to display the ColorCamera feed with OpenCV. Code snippet follows:

    // Pipeline: ColorCamera with two XLink outputs ("video" and "preview")
    auto camRgb = pipeline.create<node::ColorCamera>();
    auto xoutVideo = pipeline.create<node::XLinkOut>();
    auto xoutPreview = pipeline.create<node::XLinkOut>();

    xoutVideo->setStreamName("video");
    xoutPreview->setStreamName("preview");

    camRgb->setPreviewSize(1920, 1080);
    camRgb->setBoardSocket(CameraBoardSocket::RGB);
    camRgb->setResolution(ColorCameraProperties::SensorResolution::THE_1080_P);
    camRgb->setColorOrder(ColorCameraProperties::ColorOrder::BGR);

    camRgb->video.link(xoutVideo->input);
    camRgb->preview.link(xoutPreview->input);
    // shared_device = make_shared<Device>(pipeline);
    device_ = new Device(pipeline);
    auto video = device_->getOutputQueue("video", 64, false);

    liveCtx *ctx_;
    ctx_ = new liveCtx;
    ctx_->frame_queue = video;

    // Display loop: blocking get() on the "video" queue only
    while (true) {
        auto videoFrame = ctx_->frame_queue->get<ImgFrame>();
        imshow("video", videoFrame->getCvFrame());

        int key = waitKey(1);
        if (key == 'q' || key == 'Q') { return 0; }
    }

where liveCtx is defined as follows:


typedef struct {
    std::shared_ptr<DataOutputQueue> frame_queue;
    GstClockTime timestamp;
} liveCtx;

Frames display on program startup and through the camera's autofocus, before the camera freezes. No errors are thrown and the program continues to run, but the camera does not seem to provide any more frames. Checking dmesg -wH, nothing out of the ordinary seems to be occurring on the hardware side. I have also tried creating the Device with usb2Mode=true, which did not seem to change anything. I have to assume that something in my code is either keeping the camera from providing frames or failing to keep requesting frames from it, but I cannot identify where that would be happening. Any help would be appreciated!
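
For reference, the usb2Mode attempt was roughly the following (a sketch only; depending on the depthai-core version, the same thing is expressed either through the deprecated usb2Mode flag or a maxUsbSpeed argument):

    // Sketch of forcing the link down to USB2 when opening the device.
    device_ = new Device(pipeline, true);               // usb2Mode = true
    // or, on newer depthai-core releases (my assumption of the equivalent):
    // device_ = new Device(pipeline, UsbSpeed::HIGH);  // cap the link at USB2 speed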

In further debugging, I've been able to confirm that the camera's output queue no longer returns true for has() calls. The number of frames delivered before the queue goes empty varies, around 9-15. The queue is not being closed, blocking is set to false, and switching the calls from ctx_->frame_queue to video (the object returned by getOutputQueue()) changes nothing in the behavior. I am mightily confused as to what could be causing this issue.
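
A sketch of that check, assuming the DepthAI v2 DataOutputQueue API (the function name, frame counter, and sleep are only for illustration):

    #include <chrono>
    #include <iostream>
    #include <thread>
    #include <depthai/depthai.hpp>
    #include <opencv2/highgui.hpp>

    // video is the queue returned by getOutputQueue("video", 64, false) above.
    void pollVideoQueue(std::shared_ptr<dai::DataOutputQueue> video) {
        int frameCount = 0;
        while (true) {
            if (video->has()) {
                auto frame = video->get<dai::ImgFrame>();
                cv::imshow("video", frame->getCvFrame());
                std::cout << "frame " << ++frameCount << std::endl;
            } else {
                // Queue is empty; after roughly 9-15 frames it stays this way.
                std::this_thread::sleep_for(std::chrono::milliseconds(5));
            }
            if (cv::waitKey(1) == 'q') { break; }
        }
    }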


    Hello jithub,
    From the code above it looks like you are streaming the preview output as well, but you aren't reading those frames on the host side. By default the host-side queue is blocking, and once it fills up (20 frames, I believe) it will block/freeze the pipeline (see the docs here). Try removing the xoutPreview node; I think that should fix your issue.
    Thanks, Erik
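
A minimal sketch of that suggestion, assuming the same pipeline as in the snippet above (the preview output is dropped so only the consumed "video" stream crosses XLink; if the preview is actually needed, an alternative is to read its queue too, or to request it as a small non-blocking queue so stale frames are dropped):

    auto camRgb = pipeline.create<node::ColorCamera>();
    auto xoutVideo = pipeline.create<node::XLinkOut>();
    xoutVideo->setStreamName("video");

    camRgb->setBoardSocket(CameraBoardSocket::RGB);
    camRgb->setResolution(ColorCameraProperties::SensorResolution::THE_1080_P);
    camRgb->setColorOrder(ColorCameraProperties::ColorOrder::BGR);
    camRgb->video.link(xoutVideo->input);   // no unread preview output left in the pipeline

    device_ = new Device(pipeline);
    auto video = device_->getOutputQueue("video", 64, false);

    // Alternative, if the preview stream is still wanted later:
    // auto preview = device_->getOutputQueue("preview", 4, /*blocking=*/false);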