Luxonis DepthAI and megaAI | Overview and Status

The Myriad X is a vision processor capable of real-time object detection and stereo depth at over 30FPS.

Let's unleash this power!

How?
We're making a Myriad X System on Module (SoM) which allows embedding the power of the Myriad X into your own products, with firmware support for 3D object detection/location.

And we're making a carrier board that includes all the cameras, the Myriad X, and the Raspberry Pi Compute Module all together to allow you to get up and running in seconds.

This allows:

  1. The video data path to skip the Pi, eliminating that additional latency and bottlenecking
  2. Stereo depth capability of the Myriad X for 3D object localization
  3. Significant reduction in the CPU load on the Raspberry Pi

So you, the Python programmer, now have the real-time 3D positions of all the objects around you - on an embedded platform - backed by the power of the Raspberry Pi community!

For the full back story, let's start with the why:

  • There’s an epidemic in the US of injuries and deaths of people who ride bikes
  • The majority of cases are due to distracted driving caused by smartphones (social media, texting, e-mailing, etc.)
  • We set out to try to make people safer on bicycles in the US
    • We’re technologists
    • Focused on AI/ML/Embedded
    • So we’re seeing if we can make a technology solution

Making People Who Ride Bikes Safer


(If you'd like to read more about CommuteGuardian, see here)

DepthAI Platform

  • In prototyping the Commute Guardian, we realized how powerful the combination of Depth and AI is.
  • And we realized that no such embedded platform existed
  • So our milestone on the path to CommuteGuardian is to build this platform – and sell it as a standard product.
  • We’re building it for the Raspberry Pi (Compute Module)
    • Human-level perception on the world’s most popular platform
    • Adrian’s PyImageSearch Raspberry Pi Computer Vision Kickstarter sold out in 10 seconds – validating demand for Computer Vision on the Pi (that, and validating that Adrian is AWESOME!)

So below is a rendering of our first prototype of DepthAI for Raspberry Pi.

The key difference between this and, say, using the Raspberry Pi with an NCS2 is the data path. With the NCS2 approach, all the video/image data has to flow through the host (and be resized, etc., by it), whereas in this system the video data goes directly to the Myriad X, as below, unburdening the host from these tasks - which increases frame rate and drastically reduces latency:
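To make that contrast concrete, here's a minimal sketch of the host-side loop the NCS2 approach requires. It uses the standard OpenCV DNN API (the MobileNet-SSD model files are placeholders); with DepthAI, this whole loop disappears from the host, since the pixels flow over MIPI straight into the Myriad X:

    # NCS2-style data path: the host captures every frame, resizes/normalizes it
    # on its own CPU, and ships the pixels over USB to the stick for inference.
    import cv2

    net = cv2.dnn.readNetFromCaffe("mobilenet-ssd.prototxt", "mobilenet-ssd.caffemodel")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # inference runs on the NCS2

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()  # host pulls the frame from the camera
        if not ok:
            break
        # host CPU does the resize/normalize, then pushes pixels over USB
        blob = cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()  # results come back over USB

With DepthAI, the host only receives results (and video, if requested), so none of the capture/resize/transfer work above lands on the Pi's CPU.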

Development Steps

The first thing we made was a dev board for ourselves. The Myriad X is a complicated chip, with a ton of useful functionality... so we wanted a board where we could explore this easily, try out different image sensors, etc. Here's what that looks like:

BW0235

We made the board with modular camera boards so we could easily test out new image sensors w/out the complexity of spinning a new board. So we'll continue to use this as we try out new image sensors and camera modules.

While waiting on our development boards to be fabricated, populated, etc., we brainstormed how to keep costs down (working w/ fine-pitch BGAs that necessitate laser vias means prototypes are EXPENSIVE), while still allowing easy experimentation w/ various form-factors, on/off-board cameras, etc. We landed on making ourselves a Myriad X System on Module, which is the board w/ all the crazy laser vias, stacked vias, and overall High-Density Interconnect (HDI) board stuff that makes prototypes expensive. This way, we figure, we can use this module as the core of any Myriad X designs we do, without having to constantly prototype w/ expensive boards.

BW1099

We exposed all that we needed for our end-goal of 3D object detection (i.e. MobileNet-SSD object detection + 3D reprojection off of stereo depth data). So that meant exposing a single 4-lane MIPI interface for handling high-res (e.g. 12MP) color camera sensors, and 2x 2-lane MIPI interfaces for cameras such as 1MP global-shutter image sensors for depth.

And we threw a couple of other interfaces, boot methods, etc. on there for good measure; these are depopulated by default to save cost, and can be populated when needed.

So of course in making a module, you also need to make a board on which to test the module. So in parallel to making the SoM, we started attacking a basic breakout carrier board:

It's basic, but pulls out all the important interfaces, and works with the same modular camera-board system as our development board. So it's to some degree our 'development board lite'.

And once we got both of these ordered, we turned our attention to what we set out to build for you: the DepthAI for Raspberry Pi system. And here it is, in all its Altium-rendered glory:

So what does this thing do? The key bit is that it's completely self-contained. If you need to give something autonomy, you don't need anything more than this. It has the vision accelerator (Myriad X), all the cameras, and all the connectivity and interfaces you need - with the Raspberry Pi Compute Module on-board.

So it's a self-contained system, allowing you to write some simple Python to solve problems that 3 years ago were not yet solvable by humanity! And now you can do it all on this one system.

To visualize what this gives you, see below, noting that DepthAI will give over 30FPS instead of the 3FPS in this example:

And while we are still working to integrate object detection and depth data together directly on the Myriad X, as well as tweaking our depth filtering, here's an example of depth alone running on our platform at 30FPS on DepthAI for Raspberry Pi (see here for more details):

Cheers,
The Luxonis Team

18 days later

Hi everyone,

So the first prototypes of the DepthAI for Raspberry Pi just finished population late last night, and shipped to us this morning, likely to arrive this week!

Some images below:

It's going to be a busy week of testing, as we're getting these, and also the Myriad X modules one day apart.

On this board, our test plan is to exercise all the standard Raspberry Pi features and IO (as it's supposed to act as a standard Raspberry Pi, minus WiFi for this revision), take notes on any errors, etc. for fixing in Altium, and then, if all is OK enough, connect the Myriad X module to it and see if the whole thing runs together (with the code we have working so far).

Any comments or questions? Feel free to drop them here or shoot an email to brandon at luxonis dot com.

Best,
The Luxonis Team

Hi again!

So our first Myriad X modules finally shipped! So we're expecting to have these in-hand tomorrow, Tuesday August 20th.

So the back-story on these is that we ordered them on June 26th w/ a 3-week turn from MacroFab (who we really like, and have used a lot), and the order was unlucky enough to fall in w/ 2 other orders that were subject to a bug in MacroFab's automation (which is super-impressive, by the way).

So what was the bug? (You may ask!)

Well, their front-end and part of the backend were successfully initializing all the correct actions (e.g. component orders, bare PCB order, scheduling of assembly) at the correct times. However, the second half of the backend was apparently piping these commands straight to /dev/null, meaning that despite the system showing and thinking that all the right things were being done, nothing was actually happening.

So on July 15th, when the order was supposed to ship, and despite our every-2-day prodding up until then, it was finally discovered that the automation had done nothing, at all. So then this was debugged, the actual status was discovered, and the boards were actually started around July 22nd.

Fast-forward to now, and this 3-week order is now an 8-week order - which should arrive tomorrow!

Unfortunately, the only photo we got of the units was one from a confirmation that the JTAG connector was populated in the right orientation (and it was), so here's a reminder of what the module looks like, as rendered in Altium:

And for good measure, here's the only photo we have of the boards so far, which is of the JTAG connector:

So hopefully tomorrow we'll have working modules! And either way we'll have photos to share.

Cheers!

The Luxonis Team

15 days later

Woefully behind on updates. We got the BW1099 modules in, and they work, and we got the BW1097 boards in, and they work! Pictures only, because we're out of time these days with all the exciting hardware to play with and write code for!


2 months later

Good job! Please, which accelerator on the Myriad X do you use for the stereo depth implementation? Do you use the new "Stereo depth block" mentioned on this webpage: https://www.movidius.com/myriadx, or do you use the SHAVE cores only?
Thank you!
Jan

    Hi JanT

    Thanks!

    Yes, we're using the stereo depth block and not the SHAVEs for the stereo implementation. This leaves the SHAVEs free for neural processing, and also for filtering on the depth, etc.:

    A bit more information is here:
    https://github.com/Luxonis-Brandon/DepthAI
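    For a rough feel of how this is driven from the host, here's a minimal depth-only sketch using the current public depthai Python API (which postdates this exchange, so names may differ from what's described above):

        import depthai as dai

        pipeline = dai.Pipeline()

        # The two global-shutter mono cameras feed the dedicated stereo depth block
        left = pipeline.create(dai.node.MonoCamera)
        right = pipeline.create(dai.node.MonoCamera)
        left.setBoardSocket(dai.CameraBoardSocket.LEFT)
        right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

        # Hardware stereo block: computes depth without consuming SHAVEs
        stereo = pipeline.create(dai.node.StereoDepth)
        left.out.link(stereo.left)
        right.out.link(stereo.right)

        # Stream the depth map back to the host over XLink
        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("depth")
        stereo.depth.link(xout.input)

        with dai.Device(pipeline) as device:
            q = device.getOutputQueue("depth", maxSize=4, blocking=False)
            depth_frame = q.get().getFrame()  # uint16 depth map, in millimeters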

    Thanks again,
    Brandon

    Hi Brandon! This project looks amazing. What is the frame rate to be expected for depth camera + AI usage? In the sample video it looked like the framerate was around 2 or 3 fps. My project would need a higher rate.

      Thanks, and good question, GOB. So we're expecting 25 frames per second of depth and AI operating at the same time. We were expecting to have a demo of this for the launch, but we don't have both stacks working correctly yet - so we used our slower prototype video for now, which is indeed slower, at 2-3 frames per second.


        Brandon Thanks for the update! If it's okay, can I email you with some more questions I have?

        Yes for sure. Brandon at luxonis dot com

        16 days later

        Thank you for your answer, Brandon!
        I have an additional question. I'm sorry if it's already explained somewhere else.
        Which libraries do you use for programming the stereo block and the SHAVE cores? You mention you are working on some stack to put together the depth processing and AI detection. I would expect there is some stack or framework delivered by Intel... I know there is OpenVINO, but it's for the AI block only...
        So, to summarize my question: do you use enablement (drivers, libs, stacks) from Intel, or do you need to write everything from scratch yourself?

        Thank you!
        Jan

        Hi JanT,

        We’re writing our own system, based on an architecture we put together and have been implementing for a while now. It is composed of a custom binary (which we make) that runs on the Myriad X, a library (again, which we make) that runs on the host OS and works cleanly with OpenVINO, and additional open-source Python code which exposes the functionality that doesn't yet exist in OpenVINO.

        We are also planning on working with OpenCV and OpenVINO to integrate this functionality directly into both.

        Expect some announcements soon (and I think some are already out, from the President of OpenCV for example).

        Thoughts?

        Thanks,
        Brandon

        Hi Brandon,
        I just saw the announcement from the President of OpenCV yesterday. That's great news! You are doing great work! I'm happy to see the Myriad X is much more capable than what the NCS2 can deliver. Thank you for putting effort into it.

        Best regards,
        Jan

        Hey JanT,

        Thanks a ton! Really looking forward to working with OpenCV on this to make it easy to use and wicked useful.

        Thanks again,
        Brandon

3 months later

Hi DepthAI fans,

So we've done SO MUCH since we last updated here. The only thing we haven't done is keep this post active.

So what have we done:

• We delivered our Crowd Supply campaign on time! Backers are happily using DepthAI now, and are discussing ideas on our luxonis-community.slack.com public Slack group.
• We got our first set of documentation out: https://docs.luxonis.com/
• We made a couple of new models, which are available now (at https://shop.luxonis.com/), and we will have these on Crowd Supply soon.
• We are in the process of making a power-over-Ethernet version of DepthAI.
• Our MVP Python API is running (and super fun to play with).

New Models

First, on the new hardware models since the Crowd Supply campaign started: these include a USB3 Edition with onboard cameras, and a tiny, single-camera USB3 Edition (which we're calling μAI):

USB3C with Onboard Cameras (BW1098OBC):

μAI (BW1093):

Upcoming Model

This is our first engineering-development version of our PoE version of DepthAI. Some interesting new features include:

1. A new module (the BW1099) with:
   • Built-in 128GB eMMC
   • SD-Card interface for base-board SD-Card support
   • PCIe support for Ethernet
2. A reference-design carrier board with:
   • PoE (100/1000)
   • SD-Card
   • 4-lane MIPI for a 12MP camera (BG0249)

MVP Functionality

So the core functionality gives 3D object localization as the output from DepthAI - with all processing done on the Myriad X, and no other hardware required. The Raspberry Pi here is used purely to show the results.

So what you can see is the person - a little person at that - 1.903 meters away from the camera, 0.427 meters below the camera, and 0.248 meters to the right of the camera.

And you can also see the chair, which is 0.607 meters to the left of the camera, 0.45 meters below the camera, and 2.135 meters away from the camera.

And for good measure, here is our test subject walking over to the chair:

The results are returned in real time, and the video is optional. We even have a special version that outputs results over SPI, for using DepthAI with microcontrollers like the MSP430. (Contact us at support@luxonis.com if this is of interest.)
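To give a feel for how those X/Y/Z numbers are consumed in code, here's a hedged sketch using the later public depthai Python API (the MVP API shown above may differ). It extends the depth-only pipeline sketched earlier with a spatial detection node, so the MobileNet-SSD boxes and the depth map are fused on the Myriad X and the host receives one 3D coordinate per object:

    import depthai as dai

    pipeline = dai.Pipeline()

    # Color camera supplies the 300x300 planar preview MobileNet-SSD expects
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)
    cam.setInterleaved(False)

    # Mono pair -> hardware stereo depth block
    left = pipeline.create(dai.node.MonoCamera)
    right = pipeline.create(dai.node.MonoCamera)
    left.setBoardSocket(dai.CameraBoardSocket.LEFT)
    right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
    stereo = pipeline.create(dai.node.StereoDepth)
    left.out.link(stereo.left)
    right.out.link(stereo.right)

    # Fuses SSD detections with depth on-device; the host does no vision work
    nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
    nn.setBlobPath("mobilenet-ssd.blob")  # placeholder: a compiled OpenVINO blob
    nn.setConfidenceThreshold(0.5)
    cam.preview.link(nn.input)
    stereo.depth.link(nn.inputDepth)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("detections")
    nn.out.link(xout.input)

    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("detections", maxSize=4, blocking=False)
        while True:
            for det in q.get().detections:
                # spatialCoordinates are millimeters relative to the camera:
                # +X right, +Y up, +Z forward ("below the camera" is negative Y)
                print(f"label {det.label}: "
                      f"X={det.spatialCoordinates.x / 1000:.3f} m, "
                      f"Y={det.spatialCoordinates.y / 1000:.3f} m, "
                      f"Z={det.spatialCoordinates.z / 1000:.3f} m")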

Cheers,
Brandon & the Luxonis Team