jimdinunzio

  • a day ago
  • Hi, I have an OAK-D Lite and have built a Printables project, RPI + OAK-D Lite 3D Camera (https://www.printables.com/model/196422-raspberry-pi-4b-oak-d-lite-portable-3d-camera), to output RGB-D side-by-side images for import into my Looking Glass Portrait holographic display.

    The author of the project used the DepthAI experiment gen2-mega-depth (luxonis/depthai-experiments/tree/master/gen2-mega-depth). I tried it briefly but was not satisfied, and I had assumed the OAK-D Lite was chosen for its ability to compute disparity or depth from its stereo cameras.

    Then I combined three of the depthai example scripts: depth alignment to the color camera, software time synchronization between depth and color, and post-processing to filter the output into a smoother, higher-quality depth map with fewer gaps.

    I hit an apparent bug trying to get disparity frames while using post-processing without subpixel mode enabled (https://discuss.luxonis.com/d/5894-problem-getting-stereo-postprocessing-to-work-with-depthalign), but it turns out I probably want subpixel mode on anyway.
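    For reference, here is a minimal sketch of the StereoDepth setup this converges on (standard depthai gen2 calls; the socket choices match the OAK-D Lite, and the preset is just my pick, not a requirement):

    ```python
    import depthai as dai

    pipeline = dai.Pipeline()

    # Mono cameras feed the stereo node (left/right sockets on the OAK-D Lite)
    monoLeft = pipeline.create(dai.node.MonoCamera)
    monoRight = pipeline.create(dai.node.MonoCamera)
    monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
    monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)

    stereo = pipeline.create(dai.node.StereoDepth)
    stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
    # Align the depth/disparity output to the RGB camera (CAM_A)
    stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)
    # Subpixel mode outputs RAW16 disparity with fractional bits; it also
    # sidesteps the post-processing + depth-align issue mentioned above
    stereo.setSubpixel(True)

    monoLeft.out.link(stereo.left)
    monoRight.out.link(stereo.right)
    ```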

    Then I was not satisfied with the results of importing the depth map into Looking Glass. Maps of objects within one meter had too little depth resolution even at 16 bits: there appeared to be discrete layers, and the background seemed to get more resolution than the foreground. I also had to use the inverse setting in Looking Glass.

    Earlier I had noticed that Looking Glass also accepts disparity maps in the RGB+D images. After a few hours of frustration futzing with settings, I gave up on depth maps and switched to disparity maps. I use depth maps in my robotics project so I know how far away something is, but this application is a 3D picture, and it's more about the relative distance between foreground and background anyway. I shift the RGB up to 16 bits to match the disparity map and output 16-bit PNG files, and the output on the Looking Glass is better, with more apparent depth levels. Here's an example:

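    I think the reason disparity works better here is that depth is proportional to baseline × focal / disparity: disparity is largest for near objects, so the foreground of a close-up scene spans most of the encoding range, while a millimeter depth map of a sub-1 m scene uses only about 1,000 of its 65,536 levels before the viewer normalizes it. The side-by-side output itself is simple; here is a sketch, assuming OpenCV and numpy, with the frame names as placeholders:

    ```python
    import cv2
    import numpy as np

    def write_rgbd_sbs(rgb_bgr: np.ndarray, disparity_u16: np.ndarray, path: str) -> None:
        """Write an RGB + disparity side-by-side 16-bit PNG."""
        # Shift 8-bit color up to 16 bits so both halves share one bit depth
        rgb16 = rgb_bgr.astype(np.uint16) << 8
        # Replicate disparity into 3 channels so the halves concatenate cleanly
        disp16 = cv2.merge([disparity_u16] * 3)
        # OpenCV writes a true 16-bit PNG for uint16 arrays
        cv2.imwrite(path, np.hstack([rgb16, disp16]))
    ```
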
    The last challenge is still ongoing and is why I am asking for help. Using depthai post-processing improves the quality of the map considerably, but there are still defects in the form of black holes and poor edge detection for foreground objects. I'm using spatialFilter with hole filling, but it is hard to find an ideal setting.
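    For context, this is roughly how I enable the filters (the fields are depthai's standard post-processing config; the specific values are my current tuning, not recommendations):

    ```python
    # Assumes the dai.node.StereoDepth instance `stereo` from the sketch above
    config = stereo.initialConfig.get()

    config.postProcessing.speckleFilter.enable = True
    config.postProcessing.speckleFilter.speckleRange = 50

    config.postProcessing.temporalFilter.enable = True

    config.postProcessing.spatialFilter.enable = True
    config.postProcessing.spatialFilter.holeFillingRadius = 2  # fills small black holes
    config.postProcessing.spatialFilter.numIterations = 1

    # Discard implausible ranges so far-background noise doesn't pollute the map
    config.postProcessing.thresholdFilter.minRange = 200    # mm
    config.postProcessing.thresholdFilter.maxRange = 5000   # mm

    stereo.initialConfig.set(config)
    ```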

    Here's my latest code:

    jimdinunzio/depthai-python/blob/main/examples/StereoDepth/rgbd_camera.py

    I'd appreciate any advice or pointers on how to improve the picture quality. I submitted one image to ChatGPT and asked it to smooth the disparity, and it did make some improvements, but I'd prefer a more practical solution.

    Thanks.

    • jakaskerl Thanks, linking depth works. I needed actual depth anyway for the RGB-D.
      However, I still sort of want disparity for visualization; I tried to visualize actual depth and haven't found a good way (see the sketch after the docs quote below). I also printed the type of the disp ImgFrame, and it is always Type.RAW16, even though per the docs below the actual encoding differs with the disparity mode. I turned on subpixel mode, which uses RAW16 internally, and the error is gone.
      This seems like a bug: for disparity modes encoded as RAW8, the ImgFrame type should be RAW8, not RAW16, and you should not be forced to enable subpixel mode just to get it working. Am I missing something?

      From the StereoDepth node docs:
      depth

      Outputs ImgFrame message that carries RAW16 encoded (0..65535) depth data in depth units (millimeter by default). Non-determined / invalid depth values are set to 0

      disparity

      Outputs ImgFrame message that carries RAW8 / RAW16 encoded disparity data:
      - RAW8 encoded (0..95) for standard mode
      - RAW8 encoded (0..190) for extended disparity mode
      - RAW16 encoded for subpixel disparity mode:
        - 0..760 for 3 fractional bits (by default)
        - 0..1520 for 4 fractional bits
        - 0..3040 for 5 fractional bits
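      In case it helps, the least-bad visualization I've found so far is below; a sketch assuming OpenCV/numpy, with disp_frame/depth_frame as placeholders for arrays pulled from the output queues:

      ```python
      import cv2
      import numpy as np

      # Disparity: normalize by the node's max disparity, then colormap.
      # getMaxDisparity() accounts for extended/subpixel modes.
      max_disp = stereo.initialConfig.getMaxDisparity()
      disp_vis = (disp_frame * (255.0 / max_disp)).astype(np.uint8)
      disp_vis = cv2.applyColorMap(disp_vis, cv2.COLORMAP_JET)

      # Depth: clip to the range of interest (mm), invert so near = bright,
      # then scale to 8 bits for display
      near, far = 300, 3000
      d = np.clip(depth_frame, near, far).astype(np.float32)
      depth_vis = ((far - d) * (255.0 / (far - near))).astype(np.uint8)
      depth_vis = cv2.applyColorMap(depth_vis, cv2.COLORMAP_JET)
      ```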

      • Hi,

        I have an OAK-D Lite and want to output RGB-D side-by-side images for import into Looking Glass holographic displays. I have combined multiple DepthAI examples, including depth alignment to the color camera, software time synchronization between depth and color, and post-processing to filter the output into a smoother, higher-quality depth map with fewer gaps.

        However, I cannot get any of the post-processing filters to work with depth alignment. As soon as I enable one filter I get this error on the first frame:

        frame = msg.getFrame()

        RuntimeError: ImgFrame doesn't have enough data to encode specified frame, required 4147200, actual 2073600. Maybe metadataOnly transfer was made?

        I tried a lot of things, but the only thing that works is disabling either post-processing or depth alignment. The only other thing I tried that does not hit this error is changing setDepthAlign to DepthAlign.CENTER instead of CameraBoardSocket.CAM_A; however, that leaves the resolutions different and the frames unaligned. (The numbers in the error are suggestive: 2073600 is 1920 × 1080 × 1 byte, and 4147200 is exactly twice that, as if the node expects a 16-bit frame but receives an 8-bit one.) I will try to manually crop and scale the depth map to match the color frame and see if it is aligned.
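        For the manual crop-and-scale attempt, I mean something like the following (a rough sketch assuming OpenCV; whether it actually lines up depends on the FOV difference between the mono and color cameras, so this is not a calibrated alignment):

        ```python
        import cv2

        # Resize the CENTER-aligned depth map to the color frame's resolution.
        # INTER_NEAREST avoids blending depth values across object edges.
        depth_resized = cv2.resize(
            depth_frame,
            (color_frame.shape[1], color_frame.shape[0]),
            interpolation=cv2.INTER_NEAREST,
        )
        ```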

        Code is here: jimdinunzio/depthai-python/blob/main/examples/StereoDepth/rgbd_camera.py

        Any help would be appreciated.

        Thanks.