We’re using the OAK-D Pro PoE (RVC2 architecture). Our CV application runs on the edge and transmits inference results along with the image stream over TCP on Ethernet. Our front-end application decodes the TCP packets to display the images and the AI inference results (bounding-box coordinates, number of keypoints detected, etc.). Would it also be possible to architect the solution in either of the following ways? That is, is it possible to have two separate processes running in edge mode:
1) Run two different models on the same video feed at the same time (e.g. a pose-estimation model and a classification model):
Camera Image -> Model 1 & Model 2 -> TCP Out (image + inference results)
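The fan-out itself is simple: one frame feeds both models, and both result sets travel in the same TCP payload. A minimal host-side sketch of the data flow (both model functions are hypothetical stand-ins for the on-device networks, not real API calls):

```python
# Hypothetical stand-ins for the two on-device models; in a real
# pipeline both would consume the same camera output in parallel.
def pose_model(frame: bytes) -> dict:
    return {"keypoints": [[10, 20], [30, 40]]}

def classification_model(frame: bytes) -> dict:
    return {"label": "person", "score": 0.91}

def run_both(frame: bytes) -> dict:
    """Camera Image -> Model 1 & Model 2: fan one frame out to both
    models and bundle the combined inference results for TCP out."""
    return {
        "pose": pose_model(frame),
        "classification": classification_model(frame),
    }
```

The key point is that neither model blocks the other conceptually; whether they actually run concurrently on the device depends on the available compute, which is what the FPS trade-off below is about.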
2) Run one heavy model in one process (which might reduce its FPS considerably) while another process carries only the live video stream, so we still get a smooth output video stream with the slower inference running in the background:
Camera Image -> Model 1 -> Model 2 -> TCP Out (inference results only)
             -> TCP Out (image only)
Our goal is to find the solution architecture that strikes a good balance between model size and application latency.
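In either variant the front-end still has to split each TCP message back into image and inference results. Assuming a simple length-prefixed wire format (4-byte big-endian header length, JSON header, then raw image bytes; this is an illustrative assumption, not necessarily your current protocol), the encode/decode pair could look like:

```python
import json
import struct

def encode_packet(results: dict, image: bytes) -> bytes:
    """Frame one message: 4-byte big-endian header length,
    JSON-encoded inference results, then raw image bytes."""
    header = json.dumps(results).encode()
    return struct.pack(">I", len(header)) + header + image

def decode_packet(packet: bytes):
    """Front-end side: split one framed message back into
    (inference_results, image_bytes)."""
    (hlen,) = struct.unpack(">I", packet[:4])
    results = json.loads(packet[4:4 + hlen].decode())
    image = packet[4 + hlen:]
    return results, image
```

Usage: `results, img = decode_packet(encode_packet({"bbox": [1, 2, 3, 4]}, jpeg_bytes))`. Keeping the results in the header (rather than interleaved with pixel data) also makes the inference-only stream of option 2 a degenerate case with an empty image payload.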