The Myriad X is a vision processor capable of real-time object detection and stereo depth at over 30FPS.
Let's unleash this power!
How?
We're making a Myriad X System on Module (SoM) which allows embedding the power of the Myriad X into your own products, with firmware support for 3D object detection/location.
And we're making a carrier board that includes all the cameras, the Myriad X, and the Raspberry Pi Compute Module all together to allow you to get up and running in seconds.
This allows:
- The video data path to skip the Pi, eliminating the added latency and bottlenecking
- Use of the Myriad X's stereo depth capability for 3D object localization
- A significant reduction in CPU load on the Raspberry Pi
So you, the Python programmer, now have the real-time 3D position of all the objects around you - on an embedded platform - backed by the power of the Raspberry Pi community!
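To give a flavor of what that could look like from your code, here's a purely hypothetical sketch - the `depthai` module name and every call below are illustrative placeholders, not a published API:

```python
# Hypothetical sketch only - none of these names are a shipping API.
import depthai  # assumed module name

device = depthai.Device()  # hypothetical: connect to the DepthAI board
for detections in device.object_detections():  # hypothetical results stream
    for d in detections:
        # each detection would carry a label plus its 3D position in meters
        print(d.label, d.x, d.y, d.z)
```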
For the full back story, let's start with the why:
- There's an epidemic in the US of injuries and deaths among people who ride bikes
- The majority of cases involve distracted driving caused by smartphones (social media, texting, e-mailing, etc.)
- We set out to try to make people safer on bicycles in the US
- We’re technologists
- Focused on AI/ML/Embedded
- So we’re seeing if we can make a technology solution
Making People Who Ride Bikes Safer
(If you'd like to read more about CommuteGuardian, see here)
DepthAI Platform
- In prototyping CommuteGuardian, we realized how powerful the combination of Depth and AI is.
- And we realized that no such embedded platform existed
- So our milestone on the path to CommuteGuardian is to build this platform – and sell it as a standard product.
- We’re building it for the Raspberry Pi (Compute Module)
- Human-level perception on the world’s most popular platform
- Adrian’s PyImageSearch Raspberry Pi Computer Vision Kickstarter sold out in 10 seconds – validating demand for Computer Vision on the Pi (that, and validating that Adrian is AWESOME!)
So below is a rendering of our first prototype of DepthAI for Raspberry Pi.
The key difference between this and, say, using the Raspberry Pi with an NCS2 is the data path. With the NCS2 approach, all the video/image data has to flow through (and be resized, etc., by) the host, whereas in this system the video data goes directly to the Myriad X, as below, unburdening the host from these tasks - which increases frame rate and drastically reduces latency:
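To make the host-side burden concrete, here's roughly what every frame costs the Pi with the NCS2 approach - a minimal sketch using OpenCV's DNN module with the Myriad target (the model filenames are placeholders):

```python
import cv2

# With an NCS2, every frame makes this trip: camera -> Pi CPU -> resize ->
# USB transfer to the NCS2 -> inference -> results back over USB.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD.prototxt", "MobileNetSSD.caffemodel")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # inference runs on the NCS2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()  # the Pi CPU copies the frame in from the camera
    if not ok:
        break
    # The Pi CPU resizes and normalizes the image before shipping it over USB
    blob = cv2.dnn.blobFromImage(frame, scalefactor=0.007843,
                                 size=(300, 300), mean=127.5)
    net.setInput(blob)
    detections = net.forward()  # round trip to the NCS2 and back
```

With DepthAI, the capture, resize, and inference all happen on the Myriad X itself; the Pi just receives the results.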
Development Steps
The first thing we made was a dev board for ourselves. The Myriad X is a complicated chip, with a ton of useful functionality... so we wanted a board where we could explore this easily, try out different image sensors, etc. Here's what that looks like:
BW0235
We made the board with modular camera boards so we could easily test out new image sensors w/out the complexity of spinning a new board. So we'll continue to use this as we try out new image sensors and camera modules.
While waiting on our development boards to be fabricated, populated, etc., we brainstormed how to keep costs down (working w/ fine-pitch BGAs that necessitate laser vias means prototypes are EXPENSIVE), while still allowing easy experimentation w/ various form-factors, on/off-board cameras, etc. We landed on making ourselves a Myriad X System on Module, which is the board w/ all the crazy laser vias, stacked vias, and other High Density Interconnect (HDI) features that make these boards expensive. This way, we figure, we can use this as the core of any Myriad X designs we do, without having to constantly prototype w/ expensive boards.
BW1099
We exposed all that we needed for our end goal of 3D object detection (i.e. MobileNet-SSD object detection + 3D reprojection off of stereo depth data). So that meant bringing out a single 4-lane MIPI interface for handling high-res (e.g. 12MP) color camera sensors and two 2-lane MIPI interfaces for cameras such as 1MP global-shutter image sensors for depth.
And we threw a couple of other interfaces, boot methods, etc. on there for good measure; they're depopulated by default to save cost when not needed, and can be populated as required.
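For reference, the 3D reprojection mentioned above is just the pinhole camera model: given a detection's pixel coordinates (u, v), the depth Z at that pixel, and the camera intrinsics, the 3D point falls out directly. A minimal sketch (the intrinsics below are illustrative, not measured from our cameras):

```python
def reproject_to_3d(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth z (meters) into a 3D camera-space point."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# e.g. the center of a detected bounding box at 2.5 m, illustrative intrinsics
print(reproject_to_3d(u=640, v=360, z=2.5, fx=870.0, fy=870.0, cx=640.0, cy=360.0))
# -> (0.0, 0.0, 2.5): a pixel at the principal point lies on the optical axis
```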
So of course in making a module, you also need to make a board on which to test the module. So in parallel to making the SoM, we started attacking a basic breakout carrier board:
It's basic, but pulls out all the important interfaces, and works with the same modular camera-board system as our development board. So it's to some degree our 'development board lite'.
And once we got both of these ordered, we turned our attention to what we set out to build for you: the DepthAI for Raspberry Pi system. And here it is, in all its Altium-rendered glory:
So what does this thing do? The key bit is that it's completely self-contained. If you need to give something autonomy, you don't need anything more than this. It has the vision accelerator (Myriad X), all the cameras, all the connectivity and interfaces you need - and the Raspberry Pi Compute Module on-board.
So it's a self-contained system, allowing you to write some simple Python to solve problems that 3 years ago were not yet solvable by humanity! And now you can do it all on this one system.
To visualize what this gives you, see below, noting that DepthAI will give over 30FPS instead of the 3FPS in this example:
And while we are still working to integrate object detection and depth data together directly on the Myriad X, as well as tweaking our depth filtering, here's an example of depth alone running at 30FPS on DepthAI for Raspberry Pi (see here for more details):
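For the curious, stereo depth like the above comes from disparity: with baseline B between the two global-shutter cameras and focal length f in pixels, depth is Z = f × B / disparity. A quick sketch with illustrative numbers (not our actual calibration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in meters from stereo disparity (pinhole stereo model)."""
    return focal_px * baseline_m / disparity_px

# Illustrative values: 870 px focal length, 7.5 cm baseline, 20 px disparity
print(depth_from_disparity(20.0, 870.0, 0.075))  # ~3.26 m
```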
Cheers,
The Luxonis Team