Yes, we have one of those. It's not really competition, though. Our system's value-add to the world is real-time object localization in 3D space, which requires around 2 tera-operations per second (2 TOPS) of compute. That product is, roughly speaking, capable of (round numbers) 50 million operations per second (50 MOPS).
So DepthAI is, round numbers, about 40,000 times faster than that solution. But really they shouldn't be compared... that solution is for embedded voice recognition. (SparkFun is working on supporting a camera, but it's still TBD, and at least for OpenMV, running something like MobileNet requires external DRAM - so it may prove impossible to run camera ML models on this device.)
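As a sanity check, the speed ratio is just the two throughput figures above divided out (a back-of-the-envelope sketch; the 2 TOPS and 50 MOPS numbers are the round figures quoted here, not measured benchmarks):

```python
# Rough throughput comparison using the round numbers quoted above.
depthai_ops_per_sec = 2e12   # 2 TOPS (tera-operations per second)
other_ops_per_sec = 50e6     # 50 MOPS (mega-operations per second)

ratio = depthai_ops_per_sec / other_ops_per_sec
print(f"DepthAI is roughly {ratio:,.0f}x faster")  # → roughly 40,000x
```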
It's just a different category of embedded, as well. That device is designed to run on a coin-cell battery for years; our platform is intended to run on a 7Ah battery for 10 hours. :-)
For devices that can run image/video neural models, see here: https://discuss.luxonis.com/d/4-neural-processors-and-hosts-we-ve-found-so-far
And in particular, check out Greenwaves if you're looking for super-low-power neural inference on video/image sensors. If you want pure-embedded and very low power, it's a good solution.
Thoughts?
Thanks,
Brandon