Being a crowdfunding junkie, I now have an OAK-D Wifi, an OAK-D Lite AND a Looking Glass Portrait (holographic display). The Looking Glass Portrait can display RGB-D videos and images (i.e. RGB videos and images that include depth information).

Creating an RGB-D image is trivial: all one needs to do is create a new image with double the width, placing the original image and its depth map side by side.
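
A minimal sketch of that layout, using NumPy with synthetic data (real inputs would come from the camera; the sizes and the 8-bit depth scaling here are just illustrative assumptions):

```python
import numpy as np

# Fabricated stand-ins for a color image and its depth map.
h, w = 480, 640
rgb = np.zeros((h, w, 3), dtype=np.uint8)
depth = np.random.randint(0, 65536, (h, w), dtype=np.uint16)

# Scale depth down to 8-bit and replicate it across 3 channels so it
# can live next to the color image in an ordinary RGB file.
depth8 = (depth / 256).astype(np.uint8)
depth_rgb = np.dstack([depth8] * 3)

# The RGB-D image is just the two placed side by side: double the width.
rgbd = np.hstack([rgb, depth_rgb])
assert rgbd.shape == (h, 2 * w, 3)
```

The same composition works per frame for video, which is what the question below is really about.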

For RGB-D videos I THINK the process is the same (but now per frame), so I was wondering if it would be possible to configure the OAK-D to encode a video with frames set up this way.

Failing that, a way to get both the video frames and the depth information and do the merge locally would also work (ideally without re-encoding, which is probably not possible).

Anyone around that ever tried to do anything like that?

Thanks in advance.


    Hello BGA, I believe we have a demo on how to do this with Looking Glass Portrait, will ask the team to share it with you.
    Thanks, Erik


      erik That would be great, thank you!

      11 days later

      Hey @erik . Did you manage to ask around about it? Any pointers to some code?

      Hello BGA, sorry about that, you should get an answer from a team member soon (today).
      Thanks, Erik

      Hello,
      Sorry for the delay.
      https://github.com/luxonis/depthai-experiments/tree/gen2_pcl_save/gen2-mega-depth-lite#oak-d-lite-usage-with-looking-glass-portrait
      Here is an example that does NN-based depth estimation; note that the FPS is low with this approach.

      https://github.com/luxonis/depthai-experiments/blob/gen2_pcl_save/gen2-deeplabv3_depth/main_lite.py
      This example uses segmentation with stereo depth on the device to save images, which you can then use on the Looking Glass. But we don't have a live version that can stream to the Looking Glass Portrait directly yet.

      2 months later

      The demo holograms which can be downloaded here:
      https://lookingglassfactory.com/holograms/demo
      split the image into various tiles (e.g. 8x6), each one showing a slightly different view of the scene. The Looking Glass Portrait then splits each frame up and shows only the relevant part. Another way is to store the depth information on the right side. This article contains a video where you can see it:
      https://docs.lookingglassfactory.com/3d-viewers/holoplay-studio/rgbd-photo-video
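
A rough sketch of how such a tiled ("quilt") image decomposes into individual views. The 8x6 grid comes from the example above; the tile sizes are made up for illustration:

```python
import numpy as np

# Assumed layout: a cols x rows grid of tiles (8 x 6 = 48 views),
# each tile a slightly different viewpoint of the same scene.
cols, rows = 8, 6
tile_h, tile_w = 100, 75
quilt = np.zeros((rows * tile_h, cols * tile_w, 3), dtype=np.uint8)

# Slice the big image back into its 48 per-view tiles.
tiles = [
    quilt[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w]
    for r in range(rows)
    for c in range(cols)
]
assert len(tiles) == 48 and tiles[0].shape == (tile_h, tile_w, 3)
```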
The OAK-D, on the other hand, stores the raw (h.264) streams from both cameras (plus the color video), and you need to convert that into the Looking Glass format. With a video editor like Premiere or DaVinci it should be possible to place the depth video to the right of the color video. Will try it in the following days and let you know if I get anything from that.
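
A hedged sketch of that conversion done per frame in NumPy, assuming you have already decoded the recorded h.264 streams into frame sequences by some other means (e.g. ffmpeg or PyAV; the decode/encode steps are not shown). The synthetic data and sizes are placeholders:

```python
import numpy as np

def to_rgbd_frames(color_frames, depth_frames):
    """Yield side-by-side RGB-D frames from paired color/depth frames."""
    for color, depth in zip(color_frames, depth_frames):
        # Depth is assumed single-channel; scale to 8-bit (guard against
        # an all-zero frame) and stack to 3 channels.
        scale = max(int(depth.max()), 1)
        d8 = np.clip(depth.astype(np.float32) / scale * 255, 0, 255).astype(np.uint8)
        yield np.hstack([color, np.dstack([d8] * 3)])

# Synthetic stand-in data for illustration:
color_frames = [np.zeros((480, 640, 3), np.uint8) for _ in range(3)]
depth_frames = [np.full((480, 640), 1000, np.uint16) for _ in range(3)]
frames = list(to_rgbd_frames(color_frames, depth_frames))
assert frames[0].shape == (480, 1280, 3)
```

If you prefer to avoid a frame loop entirely, ffmpeg's `hstack` filter can also join two equal-height videos side by side, though that does re-encode.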
      Here is information about direct 3D-video encoding from OAK-D:
      https://docs.luxonis.com/en/latest/pages/faq/#how-do-i-record-or-encode-video-with-depthai
      Will also try it and see if it works...