DepthAI's purpose is to enable real-time perception of, and interaction with, the physical world: it provides per-frame results of what objects are and where they are in physical space.
Unlike almost all stereo cameras, DepthAI is not intended for algorithmically generating 3D maps - i.e. simultaneous localization and mapping (SLAM) - or for generating 3D models of objects. That is the more traditional use of stereo cameras, and was largely all they could be used for before it became possible to run neural inference directly on the camera. If you are looking to do only stereo vision, SLAM, and/or 3D reconstruction of objects, here is a quick summary of cameras for those tasks:
- Occipital, here
- Stereolabs, here
- Xvisio, here
- Intel T265, D435, SR305, and D455, here
- Kudan, here
- MyntAI, here
- Azure Kinect DK, here
- Duo3D, here
- QooCam (more consumer-facing), here
- ArduCam stereo pair for Jetson Nano, here
- PiEyeCam ToF, here
- WithRobot, here
- ncam, here
These stereo cameras can be combined with a host computer and an AI processor to accomplish the same thing that DepthAI does in a single device.
The figure below summarizes this:
For applications that need depth sensing only (and not AI or high-resolution RGB), the solutions above are probably of more interest.
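For context, the core computation these depth-only stereo cameras perform is triangulation from disparity: depth = focal length (px) x baseline (m) / disparity (px). A minimal sketch of that relationship; the focal length, baseline, and disparity values below are illustrative assumptions, not the specs of any device listed above:

```python
def depth_from_disparity(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Classical stereo depth: depth (m) = focal (px) * baseline (m) / disparity (px)."""
    if disparity_px <= 0:
        # Zero disparity means the point is at infinity (or the matcher found no
        # correspondence - the "disparity gaps" mentioned below).
        return float("inf")
    return focal_length_px * baseline_m / disparity_px

# Illustrative example: 880 px focal length, 7.5 cm baseline, 22 px disparity
print(depth_from_disparity(22.0, 880.0, 0.075))  # -> 3.0 (meters)
```

Note how depth resolution degrades with distance: a 1 px disparity error matters far more for small disparities (far objects) than for large ones, which is one reason baseline and resolution differ so much across the cameras above.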
Where DepthAI is applicable is where Spatial AI is needed in a small, integrated unit: enabling devices to interact with the world in real time, the way a human would, within a small, modular, open-source ecosystem.
And there is also potential to use DepthAI's onboard AI processing to produce better depth (using AI to fill disparity gaps and improve dense depth) and better SLAM results (using AI to track features more robustly, etc.), but this remains a topic of future exploration.