So I've been trying to do a simple task: estimate the camera position.
What have I done? I set up a small example environment with a known coordinate system and tried to verify the results the algorithm calculated. Initially I tried the same thing on a different setup and found that the values were way off. Now I have the same problem, but in a better verifiable environment: the algorithm finds a "good" solution in terms of reprojection error when back-projecting the points with the estimated transformation matrix, but the translation values seem to be wrong.
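One thing worth double-checking (a common pitfall with PnP-style solvers such as cv2.solvePnP, which I'm assuming is roughly what you're using): the returned rotation R and translation t map world points into camera coordinates (X_cam = R @ X_world + t), so t is the world origin expressed in the camera frame, not the camera position in world coordinates. A minimal numpy sketch with made-up values:

```python
import numpy as np

# Hypothetical extrinsics of the kind a PnP solver returns:
# they map WORLD points into CAMERA coordinates, X_cam = R @ X_world + t.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

C_true = np.array([1.0, 2.0, 3.0])  # camera position in the world frame
t = -R @ C_true                     # the translation a solver would report

# t itself is NOT the camera position; recover it from (R, t):
C_est = -R.T @ t                    # recovers ≈ [1, 2, 3]
```

If the back-projection error is small but the raw translation looks wrong, interpreting t as the camera position is often the explanation.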
I think my calculation is right; the only thing I cannot really verify is the camera intrinsics matrix, which I got from the calibration reader. I used the cameraRgb socket and set the resolution to 4K. Maybe I'm reading the wrong matrix, or there is some further specification on how to do that, or the values of the camera matrix are provided in different "units".
The intrinsics received from the OAK-D are:
[[3096.306640625, 0.0, 1913.822509765625],
[0.0, 3096.306640625, 1084.4405517578125],
[0.0, 0.0, 1.0]]
which I understood as fx, fy being the focal lengths in pixels and cx, cy the offsets of the principal point:
K = [fx  0  cx
      0  fy  cy
      0   0   1]
I read the camera calibration with:
M_rgb, width, height = calibData.getDefaultIntrinsics(dai.CameraBoardSocket.RGB)
(Since the camera was not recalibrated, I think this will be the same as getCameraIntrinsics.)
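One possible source of the mismatch: the intrinsics returned by the calibration reader are tied to a specific resolution, and fx, fy, cx, cy are in pixels of that resolution. If the frames actually fed to the solver are a different size (e.g. a 1080p preview instead of full 4K), the matrix has to be scaled. A small sketch of that scaling, assuming a plain resize with no cropping (the resolution values here are just the common 4K/1080p sizes, not read from the device):

```python
import numpy as np

# Intrinsics as read from the OAK-D, valid at 4K (assumed 3840x2160)
K_4k = np.array([[3096.306640625, 0.0, 1913.822509765625],
                 [0.0, 3096.306640625, 1084.4405517578125],
                 [0.0, 0.0, 1.0]])

def scale_intrinsics(K, src_wh, dst_wh):
    """Rescale fx, cx by the width ratio and fy, cy by the height ratio."""
    sx = dst_wh[0] / src_wh[0]
    sy = dst_wh[1] / src_wh[1]
    K_scaled = K.copy()
    K_scaled[0, :] *= sx  # scales fx and cx
    K_scaled[1, :] *= sy  # scales fy and cy
    return K_scaled

K_1080p = scale_intrinsics(K_4k, (3840, 2160), (1920, 1080))
```

If I remember correctly, getCameraIntrinsics also accepts a target resolution (e.g. calibData.getCameraIntrinsics(dai.CameraBoardSocket.RGB, 1920, 1080)) and does this rescaling for you, so it may be worth requesting the exact resolution of the frames you process.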
Does anyone have similar issues, or has anyone managed to properly estimate the pose from given points in the world coordinate system?
Kind regards, G.