I know there are some experimental examples of RTSP streaming using encoded video output from the OAK cameras. My eventual goal is to stream processed video, such as stereo depth, object ID/detection, etc., over RTSP. My understanding, from comments I've read, is that these processed output nodes cannot be linked to the VideoEncoder node in the on-device pipeline. Is that accurate? If so, has anyone had luck streaming processed video over RTSP using OpenCV frames taken directly from pipeline output nodes, rather than the encoded bitstream used in the example?
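For reference, this is roughly the host-side path I'm imagining: pull raw uint16 depth frames over XLink, convert them to 8-bit BGR on the host, and push them into an RTSP server (e.g. a GStreamer appsrc) instead of using the on-device VideoEncoder. The depthai calls below are an untested sketch (I can't verify them without a device, and the `"depth"` stream name is just a placeholder); the frame conversion itself is plain NumPy and runs as-is:

```python
import numpy as np

def depth_to_bgr(depth_u16: np.ndarray, max_mm: int = 10000) -> np.ndarray:
    """Normalize a uint16 depth map (millimeters) to an 8-bit BGR frame."""
    clipped = np.clip(depth_u16, 0, max_mm).astype(np.float32)
    gray = (255.0 * clipped / max_mm).astype(np.uint8)
    # Crude false color: near depths lean blue, far depths lean red.
    return np.stack([255 - gray, np.zeros_like(gray), gray], axis=-1)

# Device side (untested sketch, depthai v2-style API):
# import depthai as dai
# pipeline = dai.Pipeline()
# left = pipeline.create(dai.node.MonoCamera)
# right = pipeline.create(dai.node.MonoCamera)
# left.setBoardSocket(dai.CameraBoardSocket.LEFT)
# right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
# stereo = pipeline.create(dai.node.StereoDepth)
# xout = pipeline.create(dai.node.XLinkOut)
# xout.setStreamName("depth")
# left.out.link(stereo.left)
# right.out.link(stereo.right)
# stereo.depth.link(xout.input)
# with dai.Device(pipeline) as device:
#     q = device.getOutputQueue("depth", maxSize=4, blocking=False)
#     while True:
#         bgr = depth_to_bgr(q.get().getFrame())
#         ...  # hand bgr to the RTSP server (e.g. GStreamer appsrc) here
```

Is a host-side loop like this the usual workaround, or is there a way to keep the encoding on the device?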