Unfortunately I can't share the pictures. However, let me try to give you a better overall description.
Training
The model is trained on wide-lens pictures, captured with the Alpha parameter set to 0 and RGB-depth alignment enabled. The images are 320x240.
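For reference, here is a minimal sketch of what I mean by Alpha 0, assuming it maps to the OpenCV-style undistortion alpha (where alpha=0 crops the undistorted image to valid pixels only); the intrinsics below are made up:

```python
import cv2
import numpy as np

# Hypothetical calibration values; the real ones come from the camera.
K = np.array([[300.0, 0.0, 160.0],
              [0.0, 300.0, 120.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)  # placeholder distortion coefficients

def undistort_alpha0(frame_bgr):
    """Undistort a frame with alpha=0, i.e. crop to valid pixels only."""
    h, w = frame_bgr.shape[:2]
    new_K, _roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)
    return cv2.undistort(frame_bgr, K, dist, None, new_K)
```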
First Fix for the Non-Wide Lens
1. Read a frame from the camera and resize it to 320x240 => img
2. Create a 4:3, 1280x960 black image => mask
3. Overlay img on mask => mask_overlap
4. Resize mask_overlap to 320x240
5. Feed the result to the model (a code sketch of these steps is below)
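Here is a minimal sketch of that preprocessing (OpenCV/NumPy); I'm assuming the overlay step means pasting the resized frame at the centre of the black canvas:

```python
import cv2
import numpy as np

MODEL_SIZE = (320, 240)    # (width, height) expected by the model
CANVAS_SIZE = (1280, 960)  # empirically chosen 4:3 black canvas

def pad_to_wide_fov(frame_bgr):
    """Emulate the wide-lens field of view by centring the narrow-lens
    frame on a larger black canvas and downscaling back to model size."""
    # 1. Resize the camera frame to the model's input resolution.
    img = cv2.resize(frame_bgr, MODEL_SIZE)

    # 2. Create the 4:3 black canvas ("mask" in the steps above).
    canvas = np.zeros((CANVAS_SIZE[1], CANVAS_SIZE[0], 3), dtype=img.dtype)

    # 3. Paste the image at the centre of the canvas (assumed placement).
    x0 = (CANVAS_SIZE[0] - MODEL_SIZE[0]) // 2
    y0 = (CANVAS_SIZE[1] - MODEL_SIZE[1]) // 2
    canvas[y0:y0 + MODEL_SIZE[1], x0:x0 + MODEL_SIZE[0]] = img

    # 4. Resize the whole canvas back to 320x240 before feeding the model.
    return cv2.resize(canvas, MODEL_SIZE)
```

With these numbers the real image content ends up covering roughly 80x60 pixels of the final 320x240 input, with the rest being black border.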
This way the model seems to work better, but I'm wondering if there are better ways to fix it.
The resolution of the black canvas (step 2) was found empirically by comparing the size of the objects in the wide and non-wide images.
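If the intrinsics of both lenses were available, the empirically found 1280x960 could in principle be cross-checked against the ratio of the two focal lengths, since object size in pixels scales with focal length. A rough sketch with made-up numbers:

```python
# Made-up focal lengths (in pixels, at 320x240); real values would come
# from each camera's calibration.
f_wide = 160.0    # assumed wide-lens focal length
f_narrow = 640.0  # assumed narrow-lens focal length

# Through the narrow lens, objects look f_narrow / f_wide times bigger,
# so the black canvas must be that many times larger than the model input.
scale = f_narrow / f_wide          # 4.0 with these assumed numbers
canvas = (int(320 * scale), int(240 * scale))
print(canvas)                      # (1280, 960)
```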