How to get the cleanest possible disparity map for RGB+D images

Hi, I have an OAK-D Lite and have built the Printables project RPi + OAK-D Lite 3D Camera (https://www.printables.com/model/196422-raspberry-pi-4b-oak-d-lite-portable-3d-camera) to output RGB-D side-by-side images for importing into my Looking Glass Portrait holographic displays.

The author of the project used the depthai experiment gen2-mega-depth (luxonis/depthai-experiments/tree/master/gen2-mega-depth). I tried it briefly but was not satisfied, and I had assumed the OAK-D Lite was chosen for its ability to compute disparity or depth from its stereo cameras anyway.

Then I combined three of the depthai example scripts: depth alignment to the color camera, software time synchronization between the depth and color streams, and post-processing filters for a smoother, higher-quality depth map with fewer gaps.
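In case it's useful context, the combination looks roughly like this (a condensed sketch, not my full script; the preset, socket aliases, and filter values are placeholders that can differ between depthai versions, and the host-side timestamp pairing from the sync example is omitted):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera plus the two mono cameras feeding StereoDepth
camRgb = pipeline.create(dai.node.ColorCamera)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)

monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
# Align disparity/depth to the color camera so RGB and D line up pixel-for-pixel
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)
stereo.setLeftRightCheck(True)   # required for depth alignment
stereo.setSubpixel(True)         # see the post-processing issue mentioned below

# Enable the device-side post-processing filters
cfg = stereo.initialConfig.get()
cfg.postProcessing.spatialFilter.enable = True
cfg.postProcessing.spatialFilter.holeFillingRadius = 2
cfg.postProcessing.speckleFilter.enable = True
cfg.postProcessing.temporalFilter.enable = True
stereo.initialConfig.set(cfg)

# Stream both outputs; host code then pairs frames by timestamp (software sync)
xoutRgb = pipeline.create(dai.node.XLinkOut)
xoutRgb.setStreamName("rgb")
camRgb.isp.link(xoutRgb.input)

xoutDisp = pipeline.create(dai.node.XLinkOut)
xoutDisp.setStreamName("disparity")
stereo.disparity.link(xoutDisp.input)
```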

I hit an apparent bug when trying to get disparity frames with post-processing enabled but subpixel mode disabled (https://discuss.luxonis.com/d/5894-problem-getting-stereo-postprocessing-to-work-with-depthalign). But it turns out I probably want subpixel mode on anyway.

Then I was not satisfied with the results of importing the depth map into Looking Glass. Maps of objects within one meter had too little depth resolution even at 16 bits: the foreground seemed to break into discrete layers, while the background seemed to have more resolution. I also had to use the inverse setting in Looking Glass.
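For what it's worth, I think the layering falls out of the geometry. Disparity is proportional to 1/depth (disparity = focal_px × baseline / depth), so a normalized depth map spends most of its gray levels on the far field, while a disparity map spends them on the foreground. A rough sanity check with made-up numbers (the focal length, baseline, and scene range here are all assumptions on my part):

```python
# Back-of-the-envelope: how many gray levels does the 0.3-1.0 m foreground get?
# Hypothetical OAK-D Lite-ish numbers: ~450 px focal length, ~7.5 cm baseline.
f_px, baseline_m = 450.0, 0.075
f_B = f_px * baseline_m                  # disparity(z) = f_B / z

near, far, scene_max = 0.3, 1.0, 10.0    # metres; assume the scene reaches ~10 m

# Depth map: linear in metres, normalized over the whole scene
depth_levels = (far - near) / scene_max * 65535

# Disparity map: linear in 1/z, normalized to the largest (nearest) disparity
disp = lambda z: f_B / z
disp_levels = (disp(near) - disp(far)) / disp(near) * 65535

print(f"depth:     {depth_levels:.0f} of 65535 levels")   # ~4600  (~7%)
print(f"disparity: {disp_levels:.0f} of 65535 levels")    # ~45900 (~70%)
```

If the display quantizes the map further internally, the foreground slice of a depth map runs out of levels first, which would explain the layers; the 1/z flip would also explain why I needed the inverse setting.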

Earlier I had noticed that Looking Glass also accepted disparity maps in the RGB+D images. After a few hours of frustration futzing with settings, I gave up on depth maps and switched to disparity maps. I use depth maps in my robotics project so I know how far away something is, but this application is a 3D picture, and it's more about relative distance between foreground and background anyway. I shift the RGB up to 16 bits to match the disparity map and output 16-bit PNG files (roughly as sketched below), and the output on the Looking Glass was better, with more apparent depth levels. Here's an example:
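The packing step itself looks roughly like this; it's an OpenCV sketch of what I described above, where make_rgbd_sbs is just an illustrative name and the full-range stretch of the disparity is my own choice:

```python
import cv2
import numpy as np

def make_rgbd_sbs(bgr_8bit, disparity_16bit, path):
    """Pack color + disparity side by side into one 16-bit PNG."""
    # Shift 8-bit color up to the 16-bit range to match the disparity map
    color_16 = bgr_8bit.astype(np.uint16) << 8
    # Stretch disparity over the full 16-bit range and replicate to 3 channels
    disp = disparity_16bit.astype(np.float32)
    disp_16 = (disp / max(float(disp.max()), 1.0) * 65535.0).astype(np.uint16)
    disp_3ch = cv2.merge([disp_16, disp_16, disp_16])
    # OpenCV writes a 16-bit PNG automatically for uint16 arrays
    cv2.imwrite(path, np.hstack([color_16, disp_3ch]))
```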

The last challenge is still ongoing and is why I am asking for help. Using the depthai post-processing improves the quality of the map considerably, but there are still defects in the form of black holes and poor edge definition around foreground objects. I'm using the spatialFilter with hole filling, but it's hard to find an ideal setting.
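Concretely, these are the knobs I mean, with placeholder values rather than a recommendation (the field names follow depthai's RawStereoDepthConfig post-processing struct):

```python
# Tuning the device-side filters on an existing StereoDepth node `stereo`
cfg = stereo.initialConfig.get()

sf = cfg.postProcessing.spatialFilter
sf.enable = True
sf.holeFillingRadius = 2   # pixels; larger fills bigger holes but can bleed edges
sf.alpha = 0.5             # smoothing strength (0..1); higher = smoother, softer edges
sf.delta = 20              # step threshold in disparity units; lower preserves edges
sf.numIterations = 1

# The speckle filter helps with small isolated black dots
cfg.postProcessing.speckleFilter.enable = True
cfg.postProcessing.speckleFilter.speckleRange = 50

stereo.initialConfig.set(cfg)
```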

Here's my latest code:

jimdinunzio/depthai-python/blob/main/examples/StereoDepth/rgbd_camera.py

I'd appreciate any advice or pointers on how to improve the picture quality. I submitted one image to OpenAI's ChatGPT and asked it to smooth the disparity, etc., and it did make some improvements, but I'd prefer a more practical solution.

Thanks.