Hi @jakaskerl ,
Just tried to upload an MRE, but it says "Uploading files of this type is not allowed". Can I send it to you another way?
I did attach an (unsharp) image from our camera. The camera hangs about 60 cm above the ground, looking down on a cardboard ridge that's about 12 cm high (with a few things on top). We use fairly strong LED lights for illumination. This is just a simple test setup, but it's reasonably similar to the real setup in our robots. The main difference is that the robots move, though only very slowly.
Sometimes the camera focuses correctly after about 10 seconds, sometimes only after a minute or so, and sometimes it never reaches correct focus at all. It's rather unpredictable.
And while working on this, I noticed that the camera zooms in and out quite a bit while focusing. So the camera image, the positions of objects in the image, and the positions of detection bounding boxes all shift significantly, while the camera intrinsics and distortion coefficients stay fixed.
That has consequences for our real-world position estimates: our pipeline actually computes different real-world positions when the camera focus changes, which of course should not happen. How can we make sure that the real-world positions remain constant when the camera changes focus?
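For what it's worth, here's a minimal sketch of why I think we see this (plain pinhole back-projection with made-up numbers, not our actual pipeline): lens breathing moves the pixel a fixed physical point projects to, and with fixed calibrated intrinsics the back-projected position moves with it.

```python
# Sketch: pinhole back-projection with fixed calibrated intrinsics.
# All numbers below are made up for illustration.

def back_project(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at a known depth to camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# Calibrated intrinsics, which our pipeline treats as constant.
fx = fy = 800.0
cx, cy = 640.0, 360.0

# The same physical point lands at different pixels as the lens
# breathes (zooms) during focusing, so the computed X shifts too.
p_sharp = back_project(900.0, 360.0, depth=0.6, fx=fx, fy=fy, cx=cx, cy=cy)
p_breathed = back_project(920.0, 360.0, depth=0.6, fx=fx, fy=fy, cx=cx, cy=cy)
print(p_sharp[0])     # 0.195 m
print(p_breathed[0])  # 0.210 m -- same point, different estimate
```

So unless the intrinsics are updated per lens position (or the focus is locked), the estimates drift with focus.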
Thanks in advance!