Hi
I am trying to get accurate depth measurements from my OAK-D Lite with the C++ API. I set up the RGB camera with:
```
auto calibData = device.readCalibration2();
auto lensPosition = calibData.getLensPosition(dai::CameraBoardSocket::RGB);
if(lensPosition) {
    camera->initialControl.setManualFocus(lensPosition);
}
```
and the mono cameras:
```
camera->setResolution(dai::MonoCameraProperties::SensorResolution::THE_480_P);
```
and stereo:
```
stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_DENSITY);
// LR-check is required for depth alignment
stereo->setLeftRightCheck(true);
stereo->setExtendedDisparity(true);
stereo->setOutputSize(640, 480);
stereo->setDepthAlign(dai::CameraBoardSocket::RGB);
```
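For context, the full setup looks roughly like this (a condensed sketch, not my exact code; the node variable names like `monoLeft`/`monoRight` are illustrative, and output queues are omitted):

```cpp
#include "depthai/depthai.hpp"

// Condensed pipeline setup; device must already be connected so the
// calibrated RGB lens position can be read before starting the pipeline.
dai::Pipeline makePipeline(dai::Device& device) {
    dai::Pipeline pipeline;

    auto camRgb    = pipeline.create<dai::node::ColorCamera>();
    auto monoLeft  = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto stereo    = pipeline.create<dai::node::StereoDepth>();

    // Lock RGB focus to the calibrated lens position so RGB-depth alignment stays valid
    auto calibData = device.readCalibration2();
    auto lensPosition = calibData.getLensPosition(dai::CameraBoardSocket::RGB);
    if(lensPosition) {
        camRgb->initialControl.setManualFocus(lensPosition);
    }

    monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_480_P);
    monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
    monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_480_P);
    monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);

    stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_DENSITY);
    stereo->setLeftRightCheck(true);  // LR-check is required for depth alignment
    stereo->setExtendedDisparity(true);
    stereo->setOutputSize(640, 480);
    stereo->setDepthAlign(dai::CameraBoardSocket::RGB);

    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);

    return pipeline;
}
```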
Everything seems to work nicely: the depth and RGB images look aligned, and the depth values are sensible.
However, there is an offset and a scale factor in the depth values. I measured the distance from the glass of the camera to an object and fitted a line against the reported depth. I get:
measured depth = 1.16 * actual depth - 45 mm
I was expecting some kind of virtual focal-point offset perhaps, but the scale factor makes the depths 16% larger than expected. Am I missing some calibration data? Or could the mono-camera spacing (baseline) be from a different model?
The results are very consistent, so I can actually compensate for the shift, but I would like to know the right way to do this.