Thank you very much for the quick response. I have compiled my observations and understanding based on an examination of the documentation and the source code. I would really appreciate it if you could help clarify and answer my questions. Apologies in advance for the long post.
Observation
In the ROS driver, you mentioned that the left and right image topics are published from the StereoDepth processing path. However, according to the diagram in the DepthAI documentation (https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/stereo_depth/#StereoDepth-Inputs%20and%20Outputs), StereoDepth publishes not only the depth/disparity output but also the rectified frames and the syncedLeft and syncedRight image frames. I have the following questions regarding this behavior:
About Sync
The synced image frame outputs (syncedLeft and syncedRight) are published only when the i_sync parameter is enabled, which performs timestamp-based synchronization via a Sync node. Is this understanding correct?
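To make sure I am describing the same thing you are, here is my mental model of that timestamp-based pairing, sketched in plain Python. This is only an illustration of the matching logic, not the DepthAI Sync node's actual implementation; the threshold value is an assumption for the example.

```python
# Hypothetical sketch of timestamp-based stereo frame pairing, similar
# in spirit to a Sync node. NOT the actual DepthAI implementation.

def sync_pairs(left_frames, right_frames, threshold_ms=10.0):
    """Pair (timestamp_ms, frame) tuples whose timestamps differ by at
    most threshold_ms; frames that cannot be matched are dropped."""
    pairs = []
    li = ri = 0
    while li < len(left_frames) and ri < len(right_frames):
        lt, lf = left_frames[li]
        rt, rf = right_frames[ri]
        if abs(lt - rt) <= threshold_ms:
            pairs.append((lf, rf))   # timestamps close enough -> emit pair
            li += 1
            ri += 1
        elif lt < rt:
            li += 1                  # left frame too old, drop it
        else:
            ri += 1                  # right frame too old, drop it
    return pairs

left = [(0.0, "L0"), (33.3, "L1"), (66.6, "L2")]
right = [(1.0, "R0"), (40.0, "R1"), (67.0, "R2")]
print(sync_pairs(left, right))  # all three pairs fall within 10 ms
```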
Output Queues
While investigating the DepthAI ROS driver source code, I noticed that two separate output queues are created for the stereo cameras: one queue for publishing raw image topics and another queue for feeding images into the StereoDepth node. Specifically:
Based on this design, my understanding is that even if the StereoDepth node becomes a bottleneck, the raw camera image topics should continue to be published at the full sensor rate (e.g., 30 FPS). However, this does not seem to happen in practice. From further reading of the documentation (https://docs.luxonis.com/software/depthai-components/nodes/#Nodes-Inputs%20and%20outputs-Node%20output), it appears that this behavior may be caused by the Output message pool limits in the driver. Is this understanding correct?
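For clarity, here is a toy model (plain Python, not DepthAI code; the pool size and blocking behavior are assumptions for the sake of the example) of why I suspect a bounded output message pool would throttle the camera: once every buffer in the pool is still held by a slow consumer, the producer has nothing to write into and stalls, so every branch fed from that same output slows down with it.

```python
from collections import deque

class BoundedOutput:
    """Toy model of a node output with a fixed-size message pool.
    send() fails once all buffers are in flight (i.e., the producer
    would have to stall until the consumer returns a buffer)."""

    def __init__(self, pool_size):
        self.pool_size = pool_size
        self.in_flight = deque()

    def send(self, msg):
        if len(self.in_flight) >= self.pool_size:
            return False              # pool exhausted -> "camera" blocked
        self.in_flight.append(msg)
        return True

    def release_oldest(self):
        if self.in_flight:
            self.in_flight.popleft()  # consumer returned a buffer

out = BoundedOutput(pool_size=3)
# Consumer (e.g., a busy StereoDepth node) never releases buffers:
sent = [out.send(i) for i in range(5)]
print(sent)  # first 3 sends succeed, then the producer is starved
```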
Our Goal
We are using an OAK-FFC-4P board with 4 cameras connected, running 2 StereoDepth pipelines (one per stereo pair). The desired output is:
The raw camera images will also be used for visual odometry, so it is important that they consistently run at 30 FPS, regardless of the performance of the StereoDepth pipeline.
To achieve this, I have the following questions:
How can we ensure that the raw camera image topics from all 4 cameras always arrive at 30 FPS, even if the StereoDepth node becomes a bottleneck?
Is it possible to decouple the algorithm FPS from the camera FPS?
For example, can we run:
If this is possible, what is the recommended way to configure the pipeline or ROS driver to achieve this behavior?
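To make the last question concrete, here is one generic way to decouple the two rates, sketched in plain Python. The actual DepthAI/ROS-driver mechanism may well differ; this only illustrates the idea of publishing every camera frame while handing only every Nth frame to the slower stereo branch.

```python
# Illustrative only: decouple a 30 FPS camera from a slower consumer
# by routing every frame to the raw topic but only 1 in N frames to
# the algorithm. Rates here are assumptions, not measured values.

def route_frames(frames, camera_fps=30, algo_fps=10):
    """Return (published, to_algo): every frame is published, and
    every Nth frame (N = camera_fps // algo_fps) goes to the algorithm."""
    step = camera_fps // algo_fps
    published, to_algo = [], []
    for i, frame in enumerate(frames):
        published.append(frame)       # raw topic always gets the frame
        if i % step == 0:
            to_algo.append(frame)     # stereo branch gets 1 in N
    return published, to_algo

frames = list(range(6))
pub, algo = route_frames(frames)
print(pub, algo)  # all 6 frames published; algorithm sees frames 0 and 3
```

Is something equivalent to this (camera at 30 FPS, StereoDepth consuming a decimated stream) achievable through pipeline or driver configuration?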