We’re using the OAK-D Pro PoE (RVC2 architecture). Our CV application runs on the edge and transmits inference results along with the image stream over TCP on Ethernet. Our front-end application decodes the TCP packets to display the images and the AI inference results (bounding box coordinates, number of keypoints detected, etc.). Would it also be possible to architect the solution in either of the following ways, i.e. is it possible to have two separate processes running in edge mode?

1) To run two different models on the video feed at the same time (e.g. a pose estimation model and a classification model):

Camera Image -> Model 1 & Model 2 -> TCP Out (image+inference results)
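In DepthAI terms we picture option 1 roughly as the sketch below. This is only a sketch: the blob paths, preview size, and stream names are placeholders, and XLinkOut stands in for our TCP output stage (in standalone mode those would be Script nodes instead).

```python
import depthai as dai

pipeline = dai.Pipeline()

# One camera source shared by both models
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(256, 256)            # placeholder; add an ImageManip resize per
cam.setInterleaved(False)               # model if the two input sizes differ

# Model 1, e.g. pose estimation (placeholder blob path)
nn_pose = pipeline.create(dai.node.NeuralNetwork)
nn_pose.setBlobPath("pose_model.blob")
cam.preview.link(nn_pose.input)

# Model 2, e.g. classification (placeholder blob path)
nn_cls = pipeline.create(dai.node.NeuralNetwork)
nn_cls.setBlobPath("cls_model.blob")
cam.preview.link(nn_cls.input)

# One output stream per source: frames plus both sets of inference results
for name, src in [("frames", cam.preview), ("pose", nn_pose.out), ("cls", nn_cls.out)]:
    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName(name)
    src.link(xout.input)
```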

2) To run one heavy model in one process (which might reduce its FPS considerably) while keeping another process that outputs only the live video stream, so we get a smooth output video while the slower inference runs in the background:

Camera Image -> Model 1 -> Model 2 -> TCP Out (inference results only)
              -> TCP Out (image only)
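And a sketch of option 2, where the video branch is kept independent of the inference branch. The key detail is the non-blocking, size-1 input queue on the NeuralNetwork node, so the slow model simply drops frames it cannot keep up with instead of throttling the camera. Again the names are placeholders, and the Model 1 -> Model 2 chain is collapsed into a single heavy model here.

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(256, 256)            # placeholder model input size
cam.setInterleaved(False)
cam.setFps(30)

# Branch A: full-rate video frames straight out, untouched by inference
xout_video = pipeline.create(dai.node.XLinkOut)
xout_video.setStreamName("video")
cam.video.link(xout_video.input)

# Branch B: the heavy model; a non-blocking queue of depth 1 means it always
# grabs the latest frame and never back-pressures the camera or branch A
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("heavy_model.blob")      # placeholder blob path
nn.input.setBlocking(False)
nn.input.setQueueSize(1)
cam.preview.link(nn.input)

xout_nn = pipeline.create(dai.node.XLinkOut)
xout_nn.setStreamName("nn")
nn.out.link(xout_nn.input)
```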

@RakshithSingh

Our goal is to find the solution architecture that strikes a good balance between model size and application latency.

    akhil
    Both should be possible afaik. You will likely need to create separate threads (or Script nodes) for Model 1 and Model 2, since you need two TCP sockets to send the data over. But it is generally feasible.

    But loading two models on an RVC2 device could be a problem if the models are large, since RVC2 is not very performant and doesn't have a lot of storage.

    Thanks,
    Jaka
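For concreteness, the per-model Script node + TCP socket approach suggested above could look roughly like this when the device runs standalone. This is only a sketch, not a tested implementation: the blob path, port number, and the length-prefixed framing are placeholder choices.

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(256, 256)                      # placeholder model input size
cam.setInterleaved(False)

nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("model1.blob")                     # placeholder blob path
cam.preview.link(nn.input)

# One Script node = one TCP socket; Model 2 would get its own Script node
# listening on a different port.
script = pipeline.create(dai.node.Script)
script.setProcessor(dai.ProcessorType.LEON_CSS)   # the CSS core has network access
script.setScript("""
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 5001))   # placeholder port
server.listen(1)
conn, _ = server.accept()

while True:
    msg = node.io["in"].get()    # NNData from the linked model
    payload = bytes(msg.getData())
    # placeholder framing: 4-byte big-endian length prefix, then the raw tensor bytes
    conn.send(len(payload).to_bytes(4, "big") + payload)
""")
nn.out.link(script.inputs["in"])
```

With a host attached instead of running standalone, the XLinkOut streams from the earlier sketches are the simpler route, and the host process can forward the results over its own TCP sockets the way the current application already does.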