It's been a long time since I updated this effort. We're getting WAY closer to having the embedded platform needed to actually build a productized version of Commute Guardian.
See some tantalizing pictures of the DepthAI Onboard Camera Edition below:


And one of our users (thanks, Martin!) even made a mount for the Raspberry Pi Compute Module Edition with a battery holder, so the entire solution can be mounted to a bike post, as shown below, with lights/horns mounted separately.

We're getting very close to being able to productize DepthAI into a smart bike light like the one described above.
One remaining task is hard-syncing depth and AI results, so that quite-fast-moving objects can still be tracked accurately in 3D space, particularly under extreme lateral motion (i.e. a side impact instead of one from behind). But we're quite close.
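To see why this sync matters: without hardware synchronization, depth frames and neural-inference results arrive with independent timestamps and have to be paired after the fact, and any residual time gap between a detection and its depth frame translates directly into 3D position error for a fast-moving object. Below is a minimal host-side sketch of timestamp-based pairing (function and variable names are hypothetical; the actual DepthAI hard-sync happens on-device):

```python
def pair_by_timestamp(depth_frames, detections, max_dt_ms=33):
    """Pair each detection with the depth frame closest in time.

    depth_frames, detections: lists of (timestamp_ms, payload) tuples.
    A pair is dropped if the time gap exceeds max_dt_ms (roughly one
    frame period at 30 fps) -- a stale depth frame would place a
    fast-moving object at the wrong 3D position.
    """
    pairs = []
    for t_det, det in detections:
        # Find the depth frame with the smallest timestamp difference.
        best = min(depth_frames, key=lambda f: abs(f[0] - t_det), default=None)
        if best is not None and abs(best[0] - t_det) <= max_dt_ms:
            pairs.append((det, best[1]))
    return pairs
```

For example, a detection stamped 30 ms pairs with a depth frame stamped 33 ms, while a detection 134 ms away from the nearest depth frame is discarded rather than mis-localized.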
See below for how quickly this now tracks (faces in this case, instead of cars, but it's similar):
You can buy the DepthAI platform from our store, on CrowdSupply, and on Mouser.
And if you want to build something off of it, do it! We've open sourced all hardware and software:
Hardware: https://github.com/luxonis/depthai-hardware
Software:
And we have documentation on how to use all the software here:
And even our documentation is open-source, so if you find an error you can do a PR w/ the fix!
We even have open-source (and free) training tutorials:
The Tutorials
The below tutorials are based on MobileNetv2-SSD, which is a decent-performance, decent-framerate object detector which natively runs on DepthAI. A bunch of other object detectors could be trained/supported on Colab and run on DepthAI, so if you have a request for a different object detector/network backend, please feel free to make a GitHub Issue!
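"Natively runs" here means DepthAI executes the network on-device and hands the host already-decoded results; when you work with the raw output instead, OpenVINO-style SSD networks emit flat rows of seven values — [image_id, label, confidence, xmin, ymin, xmax, ymax], with coordinates normalized to [0, 1]. A sketch of decoding that layout on the host (names and the 300×300 input size are illustrative assumptions):

```python
def decode_ssd(raw, conf_threshold=0.5, img_w=300, img_h=300):
    """Decode flat SSD DetectionOutput rows into pixel-space boxes.

    raw: iterable of 7-element rows
         [image_id, label, confidence, xmin, ymin, xmax, ymax],
         coordinates normalized to [0, 1].
    Returns a list of (label, confidence, (x1, y1, x2, y2)) in pixels.
    """
    detections = []
    for image_id, label, conf, xmin, ymin, xmax, ymax in raw:
        if image_id < 0 or conf < conf_threshold:
            continue  # image_id == -1 marks end-of-detections padding
        box = (int(xmin * img_w), int(ymin * img_h),
               int(xmax * img_w), int(ymax * img_h))
        detections.append((int(label), conf, box))
    return detections
```

Swapping the detector backend mostly means swapping this decode step, which is why other networks can be supported on request.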
Easy Object Detector Training 
The Easy_Object_Detection_With_Custom_Data_Demo_Training.ipynb tutorial notebook shows how to quickly train an object detector based on the MobileNetv2-SSD network.
After training is complete, it also converts the model to a .blob file that runs on our DepthAI platform and modules. First, the model is converted to a format usable by OpenVINO called the Intermediate Representation, or IR. The IR model is then compiled to a .blob file using a server we set up for that purpose. (The IR model can also be converted to a blob locally.)
And that's it: in less than a couple of hours, a fairly advanced proof-of-concept object detector can run on DepthAI to detect objects of your choice and their associated spatial information (i.e. XYZ location). For example, this notebook was used to train DepthAI to locate strawberries in 3D space, see below:

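The spatial information mentioned above comes from combining the detector's 2D bounding box with stereo depth. Under a simple pinhole-camera model this reduces to a couple of lines (a sketch only, not DepthAI's actual on-device implementation; the intrinsics fx, fy, cx, cy are placeholders for the camera's calibration values):

```python
def bbox_depth_to_xyz(bbox, depth_m, fx, fy, cx, cy):
    """Project a detection's bounding-box center plus its depth into
    camera-space XYZ via a pinhole model.

    bbox: (x1, y1, x2, y2) in pixels; depth_m: depth at the box center
    in meters; fx, fy, cx, cy: camera intrinsics in pixels.
    Returns (X, Y, Z) in meters.
    """
    u = (bbox[0] + bbox[2]) / 2.0  # box-center pixel coordinates
    v = (bbox[1] + bbox[3]) / 2.0
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

So a strawberry centered in the image at 2 m depth comes out at roughly (0, 0, 2) in camera space, while one offset to the right gets a proportionally larger X.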
COVID-19 Mask/No-Mask Training 
The Medical Mask Detection Demo Training.ipynb training notebook shows another example of a more complex object detector. The training data set consists of people wearing or not wearing masks for viral protection: almost 700 pictures with approximately 3,600 bounding-box annotations. The images are complex, varying quite a lot in scale and composition. Nonetheless, the object detector does quite a good job with this relatively small dataset for such a task. Again, training takes around two hours: depending on which GPU the Colab lottery assigns to the notebook instance, training 10k steps can take between 1.5 and 2.5 hours. Either way, that's a short period for such a good-quality proof of concept on such a difficult task.
We then performed the blob-conversion steps above and ran the result on our DepthAI module.
Below is a quick test of the model produced with this notebook on Luxonis megaAI:

Cheers,
Brandon and the Luxonis Team!