To run object detection on multiple color cameras attached to one device (OAK-FFC 3P or 4P), I currently need to create multiple streams in the pipeline, each with its own camera node and neural network node. This approach consumes a lot of resources.
Is it possible to stack the frames from multiple cameras and send the stacked image to a neural network, so I only have to create one neural network node?
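To illustrate what I mean by "stacking", here is a minimal NumPy sketch (not DepthAI code; the 300x300 frame size is just an assumption) of combining three camera frames into one image that a single network could consume, and how a detection's coordinates could map back to the source camera:

```python
import numpy as np

# Three hypothetical 300x300 BGR preview frames, one per camera
frames = [np.zeros((300, 300, 3), dtype=np.uint8) for _ in range(3)]

# Stack them side by side into a single 300x900 image
stacked = np.hstack(frames)
print(stacked.shape)  # (300, 900, 3)

# After inference on the stacked image, a detection centered at x
# would come from camera int(x // 300), at local x-coordinate x % 300
def source_camera(x, frame_width=300):
    return int(x // frame_width), x % frame_width
```

The open question is whether this stacking can be done on-device in the pipeline, so the host never has to touch the frames before inference.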
Thanks!