I appreciate the response. So what I am understanding is this:
1) Calibrate both cameras. Set Camera A's frame as the world frame, then compute extrinsics for Camera A relative to that world frame (identity, if Camera A is the origin) and for Camera B relative to Camera A's world frame
2) Instantiate two separate pointcloud pipeline streams, one for Camera A and one for Camera B
3) Next, using the calibration matrices, transform each camera's pointcloud (x, y, z per frame) into the common world frame
4) Display the merged pointcloud in Open3D
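A minimal sketch of steps 3 and 4, assuming the extrinsics from step 1 are available as 4x4 homogeneous matrices (the names `T_A`, `T_B_to_A`, and the 0.5 m offset are hypothetical placeholders, and the clouds are random stand-ins for the two pipeline streams). NumPy only; the Open3D display call is left commented since it needs a display:

```python
import numpy as np

def transform_points(points, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of points."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

# Hypothetical extrinsics: Camera A is the world frame (identity);
# Camera B is, say, offset 0.5 m along x relative to Camera A.
T_A = np.eye(4)
T_B_to_A = np.eye(4)
T_B_to_A[0, 3] = 0.5

# Dummy per-frame clouds standing in for the two pipeline streams.
cloud_A = np.random.rand(100, 3)
cloud_B = np.random.rand(100, 3)

# Step 3: bring both clouds into Camera A's world frame, then merge.
cloud_A_world = transform_points(cloud_A, T_A)
cloud_B_world = transform_points(cloud_B, T_B_to_A)
merged = np.vstack([cloud_A_world, cloud_B_world])

# Step 4: display with Open3D (uncomment if open3d is installed):
# import open3d as o3d
# pcd = o3d.geometry.PointCloud()
# pcd.points = o3d.utility.Vector3dVector(merged)
# o3d.visualization.draw_geometries([pcd])
print(merged.shape)  # (200, 3)
```

The same `transform_points` helper works per frame inside each pipeline loop; only the two extrinsic matrices need to come from your actual calibration.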
The only question I have is: how do I time-sync the frames from the two cameras?