Hi @shail,
As @OskarSonc said, yes, you can run two separate models where the same camera frame goes through each of them.
> run two separate models or use a multitask detection+segmentation model
It depends on the task and the architecture of the models. In your case you have two different domains (defect detection and meat segmentation) which probably don't share many features, so a single model with 2 heads probably wouldn't be the best choice (though you can still give it a shot). Based on my understanding of the problem, I would go with 2 separate models.
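If you do go with 2 separate models on the same frame, the dispatch on the host side can be as simple as submitting both inferences concurrently. A minimal sketch, where `detect_defects` and `segment_meat` are hypothetical placeholders for your actual inference calls:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_defects(frame):
    # Placeholder for the defect-detection inference call.
    return {"boxes": [(10, 10, 50, 50)]}

def segment_meat(frame):
    # Placeholder for the meat-segmentation inference call.
    return {"mask_pixels": 1234}

def run_both(frame):
    # Both models consume the same frame, so neither has to wait
    # for the other's result and they can run side by side.
    with ThreadPoolExecutor(max_workers=2) as pool:
        det_future = pool.submit(detect_defects, frame)
        seg_future = pool.submit(segment_meat, frame)
        return det_future.result(), seg_future.result()
```

The same idea applies on-device: two independent neural network nodes fed by the same camera output run in parallel rather than in sequence.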
> detect first and then segment inside the detected can ROI?
If you run the segmentation model only on the ROI, you will most likely simplify the task for the segmentation model, which could yield better performance (but note that you then need to train the seg model on crops as well, or use some RandomCrop augmentations). However, you pay the price in latency of the overall pipeline: in this setup the 2 models run in sequence, whereas if they both run on the same frame they can run in parallel.
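One small practical detail for the sequential setup: detected boxes can partly fall outside the frame, so clamp them before cropping. A minimal sketch (the function name and box format are my own assumptions):

```python
def clamp_roi(box, frame_w, frame_h):
    """Clamp a detected (x1, y1, x2, y2) box to the frame bounds,
    so a partial detection at the image edge doesn't produce an
    invalid crop for the segmentation model."""
    x1, y1, x2, y2 = box
    return (max(0, x1), max(0, y1), min(frame_w, x2), min(frame_h, y2))

# A box that partly falls outside a 640x480 frame:
x1, y1, x2, y2 = clamp_roi((600, 400, 700, 500), frame_w=640, frame_h=480)
# The segmentation model then runs on frame[y1:y2, x1:x2].
```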
> Are there any performance or hardware considerations when doing detection + segmentation together?
The main consideration is to keep the models relatively small, especially the segmentation one, since those can get quite heavy. This also heavily depends on which device you are using (RVC4 devices have much more compute available), and additionally some operations may not be supported on RVC2. If you are defining the model architectures yourself, I would suggest you first convert the models (e.g. using HubAI Conversion) and try them out in the pipeline without investing time into fully training them. This is mainly to quickly check whether the FPS of the whole pipeline is acceptable. We also have a Benchmark module available through which you can benchmark individual models, but note that the FPS of the whole pipeline will be lower than the individual model benchmarks.
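For the quick FPS sanity check, even a rough host-side timing loop around the inference call tells you whether you are in the right ballpark before committing to full training. A minimal sketch (`dummy_infer` is a stand-in, not a real model):

```python
import time

def measure_fps(infer_fn, frame, n_iters=100):
    """Rough FPS estimate: time n_iters sequential calls of one
    inference function. Whole-pipeline FPS will be lower than this."""
    start = time.perf_counter()
    for _ in range(n_iters):
        infer_fn(frame)
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

def dummy_infer(frame):
    time.sleep(0.001)  # stand-in for an untrained converted model
    return frame

fps = measure_fps(dummy_infer, frame=None, n_iters=50)
```

This is only a ballpark figure; the Benchmark module gives you proper per-model numbers on the device itself.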
Hope this answers some of your questions, but if you have anything else, feel free to reach out.
Klemen