Hello,
I've recently acquired an OAK-D camera which we will use on a moving car (like a GoPro) to record some spatial videos. Since I'm new to this technology, I've been performing many different tests and experiments (mostly following the existing examples, with certain modifications). However, there are still some behaviors that I can't figure out how to fix. For example:
When I get the spatial location of a fixed rectangle on the screen while the camera is moving closer, some frames show the camera moving away instead. For example:
- frame 1: average distance to rectangle is 400cm (initial)
- frame 2: average distance to rectangle is 450cm (bad)
- frame 3: average distance to rectangle is 600cm (bad)
- frame 4: average distance to rectangle is 370cm (good)
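Independently of the root cause, isolated spikes like frames 2 and 3 above can be suppressed with a small temporal median filter over the per-frame averages. A minimal sketch in plain Python (the window size of 5 is just an example value):

```python
from collections import deque
from statistics import median

class TemporalMedian:
    """Keep the last n distance readings and report their median,
    which suppresses isolated spikes in the per-frame averages."""
    def __init__(self, n=5):
        self.window = deque(maxlen=n)

    def update(self, distance_cm):
        self.window.append(distance_cm)
        return median(self.window)

f = TemporalMedian(n=5)
smoothed = [f.update(d) for d in [400, 450, 600, 370, 380]]
# the 600 cm spike never reaches the output
```

This adds a few frames of latency, but for a car moving smoothly the median tracks the true trend while discarding single-frame outliers.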
I'm guessing this has something to do with the focus of the camera: since it is moving, the auto-focus changes, producing some frames with bad distances. However, I have no clue how to fix this behavior.
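If autofocus hunting is indeed the cause, one thing worth trying is locking the RGB camera at a fixed lens position so it cannot refocus while the car moves. A rough sketch, assuming the DepthAI Python API (the lens position 130 is just a placeholder; valid values are 0-255):

```python
import depthai as dai

pipeline = dai.Pipeline()

cam_rgb = pipeline.create(dai.node.ColorCamera)
# Lock the lens at a fixed position (0..255) so autofocus
# cannot hunt while the vehicle is moving; 130 is a placeholder.
cam_rgb.initialControl.setManualFocus(130)
```

The focus can also be changed at runtime by sending a `dai.CameraControl` message with `setManualFocus(...)` through an input control queue.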
On certain frames, the computed depth to some objects is higher than the actual distance. I've noticed that these objects are usually white or very bright, like a white wall reflecting sunlight. In other cases, it is caused by the shadow of the object with respect to the light source. This is a really important factor for our desired application, since the moving car will constantly face lighting changes, but we have no idea how to solve it.
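In case this turns out to be an exposure problem (overexposed white surfaces lose the texture that stereo matching needs), here is a sketch of forcing manual exposure on the stereo pair, again assuming the DepthAI Python API; the exposure time and ISO values are placeholders, not tuned values:

```python
import depthai as dai

pipeline = dai.Pipeline()
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)

# Fix exposure time (in microseconds) and ISO on both stereo cameras
# so bright surfaces are less likely to saturate; values are placeholders.
for mono in (mono_left, mono_right):
    mono.initialControl.setManualExposure(2000, 100)
```

Manual exposure trades away adaptation to lighting changes, so it may need to be adjusted at runtime (via `dai.CameraControl` messages) rather than fixed once.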
This image shows the disparity map without using left-right check, extended disparity, or subpixel mode, nor any other post-processing (only a 5x5 median filter), and yet the closest electricity pole shows mostly 0 disparity (the real distance is around 3 meters).
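For reference, this is roughly how the filtering options mentioned above could be enabled on the stereo node, assuming the DepthAI Python API (the confidence threshold of 200 is just an example value):

```python
import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)

stereo.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_5x5)
stereo.setLeftRightCheck(True)   # discard occluded / mismatched pixels
stereo.setSubpixel(True)         # finer disparity steps at longer range
# Confidence is 0-255; lowering the threshold keeps only
# higher-confidence disparity pixels.
stereo.initialConfig.setConfidenceThreshold(200)
```

Left-right check in particular tends to remove the invalid pixels that otherwise show up as 0 disparity on thin or textureless objects.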
I'm really lost at this point, so any help will be appreciated.
P.S. English is not my first language, sorry for any typos/grammar errors.