I'm using a slightly modified Python script from the Luxonis examples for a Raspberry Pi connected to an Oak-D Lite.
The example uses monocular (single-image) AI depth estimation to create an RGBD depth map.
If I'm not mistaken, this uses only the RGB camera plus a neural network to infer a depth map.
Why would Luxonis use this as its specified example for creating an RGBD image?
Why would you even need a three-camera depth system like the Oak-D Lite for this?
What I'd like to do instead is use the mono left and right cameras to create a greyscale depth map (black = far/background), align it with a suitably cropped RGB camera image, and output a standard RGBD image.
I've looked through the examples, and I cannot find anything like this in the Luxonis Raspberry Pi download.
The usual technique for this in OpenCV appears to be the StereoBM (block matching) module.
Are there any examples of this in the Oak-D files? I haven't found one yet.
Any insights would be appreciated.