Thanks Erik, that is very helpful.
I am planning on doing the point cloud creation on the host computer, so it's good to know that the depth image per camera would only be around 500; that gives me some room to add additional features down the line.
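For reference, the host-side step I have in mind is roughly the sketch below: a depth frame comes over as a uint16 array in millimeters and Open3D back-projects it using the camera intrinsics. The helper name, truncation distance, and intrinsic values are just placeholders; in practice I'd pull the intrinsics from the device calibration.

```python
import numpy as np
import open3d as o3d

def depth_to_pointcloud(depth_mm: np.ndarray, fx, fy, cx, cy):
    """Back-project a uint16 depth frame (millimeters) into an Open3D point cloud."""
    h, w = depth_mm.shape
    intrinsic = o3d.camera.PinholeCameraIntrinsic(w, h, fx, fy, cx, cy)
    return o3d.geometry.PointCloud.create_from_depth_image(
        o3d.geometry.Image(depth_mm),
        intrinsic,
        depth_scale=1000.0,  # mm -> m
        depth_trunc=2.0,     # drop points beyond 2 m; tune to the print geometry
    )

# depth_frame = depth_queue.get().getFrame()   # uint16 depth frame from the camera
# pcd = depth_to_pointcloud(depth_frame, fx, fy, cx, cy)
```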
Regarding the host computer specs: normally I would totally go with the "wait and see" approach, but I'm in the odd position that this test platform is actually going to be set up remotely at a research site on my behalf. So I was just hoping to have a starting point to give them for what we may need for the host computer.
To give a little more information, we are looking at creating high-fidelity point clouds of concrete 3D prints while they are being created, and saving still point clouds at time intervals for later inspection. Eventually we want to implement a trained neural network to flag issues we can classify. I've been making small point clouds using 2x OAK-D-SRs on my Mac just fine, but I wasn't sure how that scales up to 4x POEs plus the eventual NN implementation (although that would be running on the cameras).
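The interval capture I'm picturing is nothing fancier than the sketch below, reusing the depth_to_pointcloud helper from above; the 30-second interval, file naming, and per-camera queue setup are just placeholders (pipeline and device discovery omitted).

```python
import time
from pathlib import Path
import open3d as o3d

SAVE_INTERVAL_S = 30                 # placeholder capture interval
OUT_DIR = Path("snapshots")
OUT_DIR.mkdir(exist_ok=True)

def capture_snapshot(depth_queues, intrinsics):
    """Grab one depth frame per camera and write a timestamped .ply for each."""
    stamp = time.strftime("%Y%m%d_%H%M%S")
    for name, queue in depth_queues.items():
        depth_frame = queue.get().getFrame()                  # uint16, millimeters
        pcd = depth_to_pointcloud(depth_frame, *intrinsics[name])
        o3d.io.write_point_cloud(str(OUT_DIR / f"{stamp}_{name}.ply"), pcd)

# while the print is running:
#     capture_snapshot(depth_queues, intrinsics)
#     time.sleep(SAVE_INTERVAL_S)
```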
While I'm asking questions: since I have been working with the OAK-D SRs to create point clouds, I can definitely tell a difference in the quality of the depth map, and therefore the point cloud, compared to what I was getting from the OAK-D Pro. I know that the SRs are passive-depth only, but I also saw in the docs that the color cameras all have worse depth performance due to the color filter. How much of an effect on quality does the color filter have in an active setup? Do you have any example images you could share comparing active/mono, active/color, and passive/color? I'm wondering whether I should go ahead and preorder the mono OAK-D SR-POEs so I can get a better depth map, or whether the drop in quality with the color filter is negligible enough (as long as the IR is on) that it's worth waiting so we can have the added benefit of the color cameras.
Thanks for all your help!