RTSP Streaming of Both Video And YOLO Detections

Hello,

First time poster here. I recently got an OAK-D W and am trying to stream both video and YOLO detections via RTSP. I have successfully run an experiment that streams video over RTSP (luxonis/depthai-experiments/tree/master/gen2-rtsp-streaming) and a separate experiment that runs YOLO detections on the camera video (https://docs.luxonis.com/projects/sdk/en/latest/samples/NNComponent/sdk_yolo/), so now I'd like to combine these two capabilities.

One thing I noticed is that there was a dependency issue, at least between these two out-of-the-box examples, which I believe were using different versions of DepthAI: I could not run both experiments in the same virtual environment, but got them to work in separate ones.

Any help sincerely appreciated!

    Hi KevinRomero
    RTSP should work on the latest depthai/depthai_sdk version. The problem will likely be that the server expects encoded data from the device. If you use the SDK YOLO example, you will get a standard image frame, so in order to pass it to the server you need to either encode it on the host or send it back to the device to encode.
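
    For the host-side route, here is a minimal sketch, assuming PyAV (pip install av) is available and the SDK queue yields BGR frames; the server object and its send_data method are taken from the RTSP experiment:

    from fractions import Fraction

    import av
    import numpy as np

    # stand-alone H.264 encoder context; match your stream's size and FPS
    codec = av.CodecContext.create("libx264", "w")
    codec.width, codec.height = 1920, 1080
    codec.pix_fmt = "yuv420p"
    codec.time_base = Fraction(1, 30)

    def encode_bgr(bgr: np.ndarray) -> bytes:
        # PyAV reformats the BGR frame to the context's pix_fmt before encoding
        frame = av.VideoFrame.from_ndarray(bgr, format="bgr24")
        return b"".join(bytes(p) for p in codec.encode(frame))

    # in the frame loop, roughly: server.send_data(encode_bgr(frame))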

    Another problem might be that the RTSP server is running in a different thread. The SDK is very picky about anything not running in its thread and might not send any data at all.
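
    One way around that is to decouple the two threads with a standard queue; a sketch, where server stands for the RTSP server object from the experiment:

    import queue
    import threading

    bitstream_q = queue.Queue(maxsize=30)

    def feed_rtsp():
        # dedicated feeder thread; blocks until the SDK side produces data
        while True:
            server.send_data(bitstream_q.get())

    threading.Thread(target=feed_rtsp, daemon=True).start()
    # SDK side, from its own thread/callback: bitstream_q.put(encoded_bytes)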

    What about going the API approach?
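
    With the API, both outputs can come from a single pipeline. A hedged sketch, where the blob path and YOLO parameters are placeholders:

    import depthai as dai

    pipeline = dai.Pipeline()

    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(416, 416)  # must match the YOLO input size
    cam.setInterleaved(False)

    yolo = pipeline.create(dai.node.YoloDetectionNetwork)
    yolo.setBlobPath("yolo.blob")  # placeholder model path
    yolo.setConfidenceThreshold(0.5)
    cam.preview.link(yolo.input)

    enc = pipeline.create(dai.node.VideoEncoder)
    enc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.H265_MAIN)
    cam.video.link(enc.input)  # the encoder consumes NV12 frames

    xout_enc = pipeline.create(dai.node.XLinkOut)
    xout_enc.setStreamName("encoded")
    enc.bitstream.link(xout_enc.input)

    xout_det = pipeline.create(dai.node.XLinkOut)
    xout_det.setStreamName("detections")
    yolo.out.link(xout_det.input)  # detections travel as metadata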

    Thanks,
    Jaka

      Hi KevinRomero
      I'd start with this example, then paste the RTSP server class inside. You will need to encode the frame you receive from the queue as H.265 and send it using server.send().

      Then you should be able to access it on the other side.
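
      On the host side, the loop is then roughly as follows (a sketch; the server object and its send_data method come from the RTSP experiment and may be named differently in your code):

      with dai.Device(pipeline) as device:
          q = device.getOutputQueue("encoded", maxSize=30, blocking=True)
          while True:
              pkt = q.get()                    # dai.ImgFrame carrying the H.265 bitstream
              server.send_data(pkt.getData())  # raw bytes to the RTSP server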

      Thanks,
      Jaka

      Hey! So I went with your approach and am attempting to merge the encoder example and the tiny YOLO example (code below). My current error occurs at detectionNetwork.out.link(videnc2.input). Could really use some help!

      RuntimeError: Cannot link 'DetectionNetwork.out' to 'VideoEncoder.in'

      I basically tried creating a new stream:

      FPS = 30

      # second encoder intended for the detection stream
      videnc2 = pipeline.create(dai.node.VideoEncoder)
      videnc2.setDefaultProfilePreset(FPS, dai.VideoEncoderProperties.Profile.H265_MAIN)

      detectionNetwork.out.link(videnc2.input)  # this is also done above; this is the failing line

      nnout = pipeline.create(dai.node.XLinkOut)
      nnout.setStreamName("encoded2")
      videnc2.bitstream.link(nnout.input)

      Then I create a second queue and attempt to send it as a second parameter using .send_data:

      with dai.Device(pipeline, device_info) as device:
          encoded = device.getOutputQueue("encoded", maxSize=30, blocking=True)
          encoded2 = device.getOutputQueue("encoded2", maxSize=30, blocking=True)  # TEST

          print("Setup finished, RTSP stream available under \"rtsp://localhost:8554/preview\"")

          while True:
              data = encoded.get().getData()
              data2 = encoded2.get().getData()  # TEST
              server.send_data(data, data2)
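
      My guess is that DetectionNetwork.out carries ImgDetections metadata rather than image frames, so the VideoEncoder can't consume it, and the detections would instead need their own plain XLinkOut, something like this (untested sketch):

      # detections leave the device as metadata, no encoder involved
      xout_det = pipeline.create(dai.node.XLinkOut)
      xout_det.setStreamName("detections")
      detectionNetwork.out.link(xout_det.input)

      with the host reading them from a separate queue:

      detections = device.getOutputQueue("detections", maxSize=30, blocking=False)
      det_msg = detections.tryGet()  # non-blocking read of ImgDetections
      if det_msg is not None:
          for d in det_msg.detections:
              print(d.label, d.confidence)

      Is that the right direction?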