Coral is nice. We wanted to run DeepLabv3+ on it and it wasn't yet supported, although it was benchmarked (see here).
Just re-googling it, it looks like DeepLabv3+ is now fully supported, which is great!
https://coral.withgoogle.com/models/
The main reasons for choosing between Coral, Myriad, and Jetson Nano are probably best summarized as:
- Fastest: Edge TPU
- Depth Perception + AI: Myriad (DepthAI)
- Most flexible (don't need to convert models, because it's just a GPU): Jetson Nano
You pay for the Jetson Nano's flexibility though... it's the most expensive ($150 for the module), the slowest, and has the highest power use.
And conversely, DepthAI (Myriad) is the least expensive, lowest power, and second fastest - while allowing a bunch of dedicated computer vision functions right from image sensors.
And the Edge TPU is the fastest, but has a disadvantage relative to both the Jetson Nano and DepthAI (Myriad): the data path (the video) has to flow through the main CPU, get massaged into the format the Edge TPU chip needs, and then travel off-chip to reach neural processing (the Edge TPU chip itself) - so there's a lot more power/heat than necessary, and the CPU is burdened with this work.
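To make that CPU-side "massaging" concrete, here's a minimal sketch of the per-frame work the host has to do before a frame can even be handed to the Edge TPU: resize to the model's input resolution and quantize to uint8 (Edge TPU models are integer-quantized). The function name, input size, and NumPy-only nearest-neighbor resize are illustrative assumptions, not any particular library's API - the point is just that this copy/convert happens on the CPU for every frame.

```python
import numpy as np

def preprocess_for_edgetpu(frame, input_size=(224, 224)):
    """Hypothetical CPU-side prep for an Edge TPU model: resize the camera
    frame to the model's input shape and quantize to uint8. Uses a plain
    NumPy nearest-neighbor resize to keep the sketch dependency-free."""
    h, w = frame.shape[:2]
    # Nearest-neighbor row/column index maps for the resize.
    ys = np.arange(input_size[0]) * h // input_size[0]
    xs = np.arange(input_size[1]) * w // input_size[1]
    resized = frame[ys][:, xs]
    # Clamp and cast to uint8, then add a batch dimension - this per-frame
    # copy/convert (and the extra memory traffic) is the CPU burden
    # described above.
    return np.clip(resized, 0, 255).astype(np.uint8)[np.newaxis, ...]

# Mock 640x480 RGB camera frame standing in for real sensor data.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.int32)
tensor = preprocess_for_edgetpu(frame)
print(tensor.shape, tensor.dtype)  # (1, 224, 224, 3) uint8
```

On DepthAI or the Jetson Nano, the equivalent of this step happens in the imaging pipeline or shared memory rather than as an extra CPU pass over each frame.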
Both DepthAI and the Jetson Nano have an advantage here. On DepthAI, MIPI cameras are directly connected to an imaging pipeline with direct memory access to neural processing - and also to computer vision hardware blocks like disparity depth, Harris filtering, motion estimation, etc. - so it's quite efficient.

And similarly, the Jetson Nano has an efficient shared-memory setup between the CPU and GPU, so taking in image data directly from image sensors and doing neural inference is resource-efficient. And although there aren't dedicated hardware blocks, GPUs are also pretty good at this work.

That's the only real disadvantage of the Edge TPU, but it still has plenty of advantages, such as being very tightly integrated with all of Google's machine learning tools, models, etc. - which are all industry-leading.