Hello.
There are two OpenCV functions that take an alpha scaling parameter: stereoRectify and getOptimalNewCameraMatrix. For stereo applications, stereoRectify is the right one to use, because it looks at both camera matrices and scales them jointly in a way that lets you crop away the black borders. getOptimalNewCameraMatrix operates on a single matrix. I'm not sure it even makes sense to use it for stereo matching: if it is applied separately to the left and right cameras, I don't know whether the epipolar alignment is guaranteed to remain valid.
From the docs and from using the SDK, I gather that the SDK uses getOptimalNewCameraMatrix rather than stereoRectify in the setAlphaScaling function of the StereoDepth node. That makes setAlphaScaling hard to use for properly removing the black borders, because it doesn't actually do that.
I think the following solution would be better:
Currently, the camera calibration stores only the intrinsic camera matrix on the device, and that matrix also doubles as the projection matrix. Instead, there should be a way to store both the intrinsic matrix and a separate projection matrix on the device, and the StereoDepth node should use both to rectify the images before stereo matching. The projection matrices should be computed at calibration time by stereoRectify, which would take the alpha scaling parameter then.
Could we add this extra matrix to the device calibration and have the stereo depth node use it?
Thank you,
Vladimir Korukov