Dear @erik
Thank you for your support and helpfulness. We ordered 12+2 cameras. Our main goal is to build a 3D scanner for the full human body with 4 columns, and we plan to put 3 cameras on each column (the 2 extra cameras are spares in case one of the 12 used cameras goes wrong).
First we wanted to make a usable and mostly accurate point cloud with one camera; this is where we are now, and where we have problems.
I only wanted to inform you because this way you may have a bigger picture of our goal and can be more helpful. I have three questions.
1. Can't we calibrate the cameras with the ChArUco board and the calibration Python script? Do you need to replace the cameras anyway?
2. I also tried the multi-camera calibration and RGB point cloud code from here: https://github.com/luxonis/depthai-experiments/tree/master/gen2-multiple-devices/rgbd-pointcloud-fusion. I put the calibration chessboard on the floor and tried to scan a simple box with two cameras, about 40° apart, at the same height from the ground. The result was pretty good regarding the alignment, but the same depth accuracy problem occurred, so the boxes were wavy. My question would be: if the depth accuracy problem works out and we get a good result and want to scan a whole body, is there a way to calibrate all 12 cameras to each other? It can be tricky, because all 12 cameras will certainly not see the calibration board at the same time; that is impossible. Do we need a calibration box? Or should we calibrate the cameras to each other in a chain somehow?
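To make the chain idea concrete, here is a minimal numpy sketch of what I imagine (the function names, 40° spacing, and 4x4 transform layout are my own made-up illustration, not the DepthAI API): if each neighbouring pair of cameras sees the board together, the pairwise extrinsics can be composed to express every camera in camera 0's frame.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(deg):
    """Rotation about the vertical axis by `deg` degrees."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Hypothetical pairwise extrinsics: each camera expressed in its neighbour's
# frame, rotated 40 degrees around the rig and shifted half a metre sideways.
T_01 = make_transform(rot_z(40), np.array([0.5, 0.0, 0.0]))  # cam1 in cam0's frame
T_12 = make_transform(rot_z(40), np.array([0.5, 0.0, 0.0]))  # cam2 in cam1's frame

# Chaining: cam2 expressed in cam0's frame, without the two ever
# having seen the board simultaneously.
T_02 = T_01 @ T_12

# The composed rotation is 80 degrees, as expected for two 40-degree steps.
assert np.allclose(T_02[:3, :3], rot_z(80))

# A point 1 m in front of cam2, mapped into cam0's frame.
point_cam2 = np.array([0.0, 0.0, 1.0, 1.0])  # homogeneous coordinates
point_cam0 = T_02 @ point_cam2
print(point_cam0[:3])
```

My worry with a pure chain is that small per-pair errors would accumulate around the loop of 12 cameras, which is maybe where a calibration box or a final joint refinement would come in.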
3. In my head there are two kinds of calibration. One is where we calibrate each camera individually: we get the matrix with the intrinsic parameters (focal length, etc.), and this calibration also helps to correct the lens distortion. The other calibration gives a transformation matrix which describes the relative positions of two or more cameras with respect to each other. Am I right, or do I not have the right picture of this?
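To show how I picture the two matrices working together, here is a tiny pinhole-camera sketch in plain numpy (all numbers are made up for illustration): the extrinsics R, t first move a point into the camera's own frame, and only then does the intrinsic matrix K project it onto the image.

```python
import numpy as np

# Intrinsic matrix K: per-camera, from single-camera calibration.
# fx, fy = focal lengths in pixels; cx, cy = principal point (made-up values).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics R, t: pose of the world (or of another camera) relative to this
# camera, from multi-camera calibration. Here: no rotation, camera 2 m back.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

def project(point_world):
    """Pinhole projection: apply extrinsics first, then intrinsics."""
    p_cam = R @ point_world + t   # world frame -> camera frame (extrinsics)
    uvw = K @ p_cam               # camera frame -> homogeneous pixels (intrinsics)
    return uvw[:2] / uvw[2]       # perspective divide -> pixel (u, v)

# A point on the optical axis projects exactly to the principal point.
print(project(np.array([0.0, 0.0, 0.0])))  # -> [640. 360.]
```

So my mental model is: intrinsics (plus distortion coefficients) are a property of one camera, while extrinsics relate two frames to each other.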
Thank you in advance,
Szabina