This demo runs on an Intel RealSense D435 + Raspberry Pi 3B + NCS1.
It's doing MobileNet-SSD object detection and projecting the depth data to give an XYZ position for every pixel. We print the XYZ of the center pixel of each bounding box in its label (hence with the chair, the value changes when I walk behind it: in the initial orientation the center pixel actually lands on the wall behind the chair). Every other pixel's XYZ is available per frame, so you can use whichever are most pertinent, average over an area, etc. In the case of the Commute Guardian, the XYZ location of the edge of the vehicle is used for impact prediction.
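For a sense of how a depth reading becomes an XYZ point, here's a minimal sketch of standard pinhole deprojection (librealsense does this internally via its calibrated intrinsics; the `fx, fy, cx, cy` values below are made-up placeholders, not the D435's actual calibration):

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Map a pixel (u, v) with a depth reading in meters to camera-space XYZ."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Placeholder intrinsics (focal lengths and principal point), for illustration only.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0

# Hypothetical bounding box (x1, y1, x2, y2) from the detector; take its center pixel.
bbox = (100, 80, 300, 240)
u = (bbox[0] + bbox[2]) // 2
v = (bbox[1] + bbox[3]) // 2

xyz = deproject(u, v, 1.5, fx, fy, cx, cy)  # 1.5 m depth at the center pixel
print(xyz)
```

This is also why the chair label jumps: the center pixel's depth reading is whatever surface the ray actually hits, which may be the wall behind the object.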
We're working on a board that leverages the Myriad X to do the depth calculation (and de-warping, etc.) directly while also running the neural network side (the object detection). This should take the whole system from 3 FPS to 30 FPS while reducing cost.
We're releasing our work before the final bike product is out because we realized that the board itself (particularly with the Raspberry Pi as the brain) would be super useful to a bunch of engineers across a variety of project types.