erik
Right now I import stills into COLMAP along with a video of me walking around the space. Once it solves, I then have to try to export that into extrinsics for the devices, and it does not work well: it is not what COLMAP was meant to do, and the solve is missing a degree of freedom (metric scale) since it does not use depth.
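For reference, this is roughly the export step I mean, as a minimal sketch assuming COLMAP's text model export (`images.txt`, where each image line carries the world-to-camera quaternion and translation); the file path and the lack of any scale correction are assumptions:

```python
import numpy as np

def quat_to_rotmat(qw, qx, qy, qz):
    """Convert a unit quaternion to a 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def read_colmap_extrinsics(path="images.txt"):
    """Return {image_name: (R_cam_from_world, camera_center_in_world)}."""
    poses = {}
    with open(path) as f:
        lines = [l.strip() for l in f if l.strip() and not l.startswith("#")]
    for line in lines[::2]:            # every second line is the 2D-point list
        fields = line.split()
        qw, qx, qy, qz, tx, ty, tz = map(float, fields[1:8])
        name = fields[9]
        R = quat_to_rotmat(qw, qx, qy, qz)
        t = np.array([tx, ty, tz])
        poses[name] = (R, -R.T @ t)    # camera center C = -R^T t
    return poses
```

Even with this, the poses are only up to scale, which is the missing degree of freedom above.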
If we could print out different checkerboards, lay them around the area, and have a script that automatically finds the relative positions between the sensors, that would be a game changer.
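The per-sensor piece of that script could look something like this sketch, assuming OpenCV, known intrinsics `K` and distortion `dist` per camera, and a 9x6 board with 25 mm squares (all of those are placeholders):

```python
import cv2
import numpy as np

def board_to_camera(gray, K, dist, pattern=(9, 6), square=0.025):
    """Detect the checkerboard and return the 4x4 board-to-camera transform, or None."""
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    # 3D board corners in the board's own frame (z = 0 plane), in meters
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    if not ok:
        return None
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = tvec.ravel()
    return T

# If sensors A and B see the same board at the same time, the pose of B in A's frame is
#   T_A_from_B = T_A_from_board @ np.linalg.inv(T_B_from_board)
```

Because the board squares have a known size, this also pins down the metric scale that the COLMAP route loses.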
What you are suggesting seems even better: with live feeds from all the devices, I could move a single checkerboard around until we have a graph of relative positions between all the sensors. That is currently my #1 pain point when setting up sensors in indoor spaces: finding the relative positions between sensors for accurate fusion of the data.
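Chaining that graph into one frame could be as simple as the sketch below, assuming every simultaneous sighting of the board by a pair of sensors yields an edge `T_a_from_b` from the snippet above (`chain_poses` and the `edges` dict are hypothetical names; a real setup would average repeated edges or run a proper pose-graph optimization rather than a single spanning tree):

```python
import numpy as np
from collections import defaultdict, deque

def chain_poses(edges, reference):
    """edges: {(a, b): T_a_from_b} for each pair that saw the board together.
    Returns {sensor: T_reference_from_sensor} by walking the graph from `reference`."""
    adj = defaultdict(list)
    for (a, b), T in edges.items():
        adj[a].append((b, T))
        adj[b].append((a, np.linalg.inv(T)))
    poses = {reference: np.eye(4)}
    queue = deque([reference])
    while queue:
        a = queue.popleft()
        for b, T_a_from_b in adj[a]:
            if b not in poses:
                # T_ref_from_b = T_ref_from_a @ T_a_from_b
                poses[b] = poses[a] @ T_a_from_b
                queue.append(b)
    return poses
```

As long as moving the single board eventually connects every sensor to at least one other, the whole rig ends up in one common frame.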