• DepthAI-v2
  • Myriad X + i.MX 8M SoM | DepthAI Vision Module

Hey everyone,

So the next step on the path to CommuteGuardian is a SoM for the Myriad X and the i.MX 8M. This will let us drop a full system, running a modern Linux and backed by the VPU power of the Myriad X, directly into our CommuteGuardian prototype (and hopefully, final product).

And it will allow you to do the same for your application! Think of this as the 'Pro' version of DepthAI for Raspberry Pi: it drops directly into your prototype or product, with the wicked power of the i.MX 8M backing up the visual-processing prowess of the Myriad X.

So we just started about a week ago, and here are some layout views to show progress so far:


Thoughts? Questions/ideas/comments welcome!

Cheers,
The Luxonis Team!

12 days later

So we've been continuing to 'churn the butter' on the Myriad X + i.MX 8M module we're making. Think of this as the 'Pro' version of DepthAI for Raspberry Pi: it has all the power of the Myriad X plus the convenient (Ubuntu 18.04) Linux support of the low-power i.MX 8M. (Speaking of low power, check out all it can do, here, at incredibly low DC power draw; all 4 cores maxed with Dhrystone is 1.5 W - so low!)

Anyways, here's some more Altium fodder on the layout. It's starting to look clean, although we still have a long way to go:

You can see a lot of the differential routing nearing completion, which is the slower, more tedious part of the effort:

And the interface to the main board is through 3x 100-pin connectors (DF40C-100DP-0.4V), which are really nice and popular for this sort of thing:

On those connectors... we've discovered, after having selected them, that they're used on everything! They're even on the Google Coral SoM (here), the Gyrfalcon SoM (here), and a bunch of NXP i.MX-series SoMs. One of the NXP dev boards even uses them.

And it's also the same connector we're using on the Myriad X SoM we've made (and which arrives tomorrow!).

Thoughts?

Thanks,
The Luxonis Team

    a month later

    Hi Brandon, first of all, I want to tell you that I am pleasantly surprised by the evolution of your project since I first saw it in January on Hackaday, and I think you have done a great job so far. I had not rechecked its progress until a couple of days ago, when I was looking for a CHEAPER and MORE energy-efficient option for our own Hackaday project.

    This configuration with the NXP i.MX 8 and the Myriad X VPU reminds me of a product already on the market, launched last year by the Antmicro folks from the Apertus / AXIOM open-source 4K camera project.

    Perhaps you share some of the same goals, but with another perspective in mind and obviously different resources at hand. They even have a device very similar to CommuteGuardian, though over-engineered compared with your solution: an FPGA instead of a VPU, with high-speed stereo vision applied in areas such as production quality assurance and the mining industry.

    Product development in the hardware world is HARD. Even so, with Antmicro's perspective of being mainly a software company, with some help from an ODM handling the hardware design phase (I think they work with Toradex on that side), plus their open-source spirit and contributions to the ecosystem, they are the least obvious contenders, though not quite in the same league.

    To finish, I want to clarify that I have not worked for Antmicro or anything like that; we had only been thinking of them for months now to develop our algorithms with their hardware kit solutions for first prototypes. But I have just recognized that perhaps you could be a better option for the open-source hardware version that we want to publish on Hackaday, after a long pause of our development in 'stealth mode'.

    If you could better explain the advantages and disadvantages, we would be very interested in three of the kits that are to come out before you publish the campaign on Crowd Supply.

    Best regards

    Maximino Reyes

      Hi BetaMax, thanks for all the great info here (and also the kind words!). I don't know how I hadn't heard of Antmicro before, but somehow I managed! They have some really neat stuff. I kind of want to meet these guys, actually! Similar to them, we'll be releasing our carrier boards to GitHub as well (it's on our docket, and we actually may do it this weekend... the only reason we haven't yet is because, well, they weren't fully tested/verified until now, but we probably should have just posted with the caveat anyways).

      Anyways, they will be up soon. I just took a look at their GitHub and they use Altium as well! So that makes collaboration easy. And I even saw that they have an Edge TPU carrier board design in Altium, which is nice. We're planning on making a PoE-powered carrier board for the Edge TPU SoM, for our own purposes, and we also figure it will be useful for others.

      It's actually kind of funny how similar our uses of tech are. They are clearly using the Jetson series (they have nifty open-source carrier boards for it), and so are we, say for the drone stuff here; we're also working with the Edge TPU (making the PoE carrier for the SoM soon); and of course there's the Myriad X stuff (although it looks like they are using the Myriad 2 in their design). Cool to see! Also, the mining application makes sense... hadn't thought of that.

      Anyways, your project looks really neat! Need to do more reading on it. Exciting!

      So, for the trade-offs between what Antmicro has made and what we're doing: both have advantages and disadvantages.

      Antmicro's advantages:

      • Works with existing SoMs and hardware designs for the Myriad and also the i.MX 8M.
      • This means a lot of software will work in 'plug and play' mode, with increased modularity and a 'just works' factor.

      DepthAI advantages:

      • The data path is optimized. That's the main reason for this approach.
      • Allows the stereo and other vision-processing hardware in the Myriad to be used, and also unloads the host a lot. On a Raspberry Pi host it takes CPU use from 220% down to 35%... so your 'application code' has room to breathe. :-)

      The disadvantage of our approach is that it's a TON more work (for us, not you). We have to do a bunch of custom hardware, firmware, software, etc. What this work nets (and what we needed for CommuteGuardian) is a system that's lower power and lower latency, and that enables more use of the Myriad X. So it delivers a pretty efficient depth + AI platform which can then be embedded in products.

      The disadvantage of Antmicro's approach is just cost, latency, and power - they're all going to be higher than with what we're doing. BUT it gets you a lot of the capability of what we're doing with less effort on custom firmware, hardware, etc., and it's more flexible for immediate prototyping, as all the standard OpenVINO stuff is supported by default.

      We're optimized to do one thing really efficiently: depth + AI, and specifically real-time object localization - i.e., what objects are in the field of view, where they are in pixel space, where they are in physical space, and neural inference on those objects.

      Here's a good example: strawberry picking (a similar example probably exists in mining).
      The output from DepthAI in that application would be metadata consisting of:

      • All strawberries in the field of view,
      • Each one's position in physical space (x, y, and z),
      • A map of each one's shape (for easier grasping), and
      • An estimate of each one's ripeness (from the output of the neural model).

      Then the application on top of this can have simple logic that, say, commands a robotic arm to pick only strawberries above X% ripeness, and then sorts them by ripeness into different bins (to account for ripening during shipment and varying shipping distances) - see the sketch below.
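
      For concreteness, here's a minimal sketch of what that host-side logic could look like. The Strawberry record and field names are purely illustrative (they're not the DepthAI output format) - just the kind of metadata described above:

          from dataclasses import dataclass, field

          @dataclass
          class Strawberry:
              """Illustrative metadata record, not the actual DepthAI format."""
              x: float         # position in physical space, meters
              y: float
              z: float
              ripeness: float  # 0.0..1.0, from the neural model's output
              outline: list = field(default_factory=list)  # shape map, for grasping

          def plan_picks(berries, min_ripeness=0.8, num_bins=3):
              """Pick berries above the ripeness threshold; assign each a bin
              by ripeness, so riper fruit can be routed to shorter shipments."""
              picks = []
              for berry in berries:
                  if berry.ripeness < min_ripeness:
                      continue  # leave it on the plant to ripen further
                  frac = (berry.ripeness - min_ripeness) / (1.0 - min_ripeness)
                  bin_index = min(int(frac * num_bins), num_bins - 1)
                  picks.append((berry, bin_index))
              return picks

          # Example: two pickable berries, one left to ripen
          in_view = [
              Strawberry(0.10, 0.02, 0.45, ripeness=0.92),
              Strawberry(0.30, 0.05, 0.50, ripeness=0.71),
              Strawberry(0.55, 0.01, 0.40, ripeness=0.84),
          ]
          for berry, bin_index in plan_picks(in_view):
              print(f"pick at ({berry.x}, {berry.y}, {berry.z}) -> bin {bin_index}")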

      And the host that is making that decision is free from all the computer-vision tasks... it just gets metadata back: strawberry locations, the ripeness of each strawberry, and size information. So you could have, say, 1, 3, or 30 DepthAI units hooked to that one host, and that single embedded host would handle it fine.

      Whereas when the video has to -flow- through the host, there's no way even an extremely capable system could keep up with 30 sets of 3 cameras (90 cameras total). With DepthAI, it's just metadata of locations... a couple of KB/s coming out, so 30 sets is doable even on a Raspberry Pi host! (And for reference, a single camera with an NCS2 running on a Raspberry Pi takes 220% CPU.)
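
      To put rough numbers on that claim (back-of-envelope assumptions, not measurements - uncompressed 1080p30 video vs. a couple of KB/s of metadata per unit):

          # Back-of-envelope bandwidth comparison: raw video through the host
          # vs. metadata-only output. All figures are assumptions.

          UNITS = 30
          CAMERAS_PER_UNIT = 3

          # Raw video: 1080p at 30 fps, NV12 (1.5 bytes per pixel), uncompressed
          bytes_per_camera = 1920 * 1080 * 1.5 * 30   # ~93 MB/s per camera
          raw_total = bytes_per_camera * CAMERAS_PER_UNIT * UNITS

          # Metadata: assume ~2 KB/s per DepthAI unit (locations, sizes, labels)
          meta_total = 2_000 * UNITS

          print(f"raw video into host: {raw_total / 1e9:.1f} GB/s")   # ~8.4 GB/s
          print(f"metadata into host:  {meta_total / 1e3:.0f} KB/s")  # 60 KB/s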

      So that's a situation where having all the image processing done before the host is important... it doesn't matter in every case, but we're building DepthAI to cover the cases where it does.

      Similarly, the DepthAI approach also allows use without a host at all, where you just want some action, actuator, IO, etc. to be driven directly by the information it's getting. That allows a much smaller, lower-power, reduced-latency solution - it's just a Myriad X running the whole show. One example of that is CommuteGuardian (honk at the guy who's on course to run you over), and another is, say, a Myriad X used as a disaster-recovery system, like here, where it runs the whole show and directly commands the craft in the case where other systems have failed.
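
      To make the hostless idea concrete, here's a toy sketch of the kind of decision logic such a unit could run - written in Python for readability, with made-up numbers; the real thing would be firmware on the Myriad X:

          def time_to_collision(z_now, z_prev, dt):
              """Estimate seconds to impact from two consecutive range samples (m)."""
              closing_speed = (z_prev - z_now) / dt   # m/s; positive when approaching
              if closing_speed <= 0:
                  return float("inf")                 # not approaching
              return z_now / closing_speed

          def should_honk(z_now, z_prev, dt=1 / 30, ttc_threshold=2.0):
              """Trigger the horn if a tracked vehicle would impact within ~2 s."""
              return time_to_collision(z_now, z_prev, dt) < ttc_threshold

          # A car 10 m out, closing at ~7.8 m/s between frames -> TTC ~1.25 s -> honk
          print(should_honk(z_now=9.74, z_prev=10.0))  # True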

      Also, we'd be more than happy to get you some units to play with - feel free to reach out to me at brandon at luxonis dot com. If/when you get a unit to play with... bear with us, though, as we're pretty early on firmware (as above, doing this approach makes it faster, smaller, and lower power - but requires a bunch of work!). We actually just today got the lightweight, completely-Myriad-contained neural inference working directly from the MIPI camera output, as below:
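
      For a flavor of what that looks like from the API side, here's a rough sketch using the depthai Python library - with the caveat that this reflects the later public API rather than the early firmware we're running here, and the .blob path is a placeholder for your own compiled model:

          import depthai as dai

          # Build a pipeline that runs entirely on the Myriad X: the color camera
          # feeds a MobileNet-SSD detection network, and only detection metadata
          # is sent back to the host over XLink.
          pipeline = dai.Pipeline()

          cam = pipeline.create(dai.node.ColorCamera)
          cam.setPreviewSize(300, 300)          # match the network's input size
          cam.setInterleaved(False)

          nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
          nn.setBlobPath("mobilenet-ssd.blob")  # placeholder model blob
          nn.setConfidenceThreshold(0.5)
          cam.preview.link(nn.input)

          xout = pipeline.create(dai.node.XLinkOut)
          xout.setStreamName("detections")
          nn.out.link(xout.input)

          with dai.Device(pipeline) as device:
              queue = device.getOutputQueue("detections")
              while True:
                  for det in queue.get().detections:
                      # Just metadata: label, confidence, and a bounding box
                      print(det.label, det.confidence,
                            det.xmin, det.ymin, det.xmax, det.ymax)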

      So we'll soon be integrating this with our already-working depth (see here).

      Thoughts?

      Thanks again!
      -Brandon & The Luxonis Team

      10 months later

      So to close this one out... the strong interest we got was in other things, not this, so we actually ended up cutting it in favor of other efforts back in September 2019.

      Are you still interested in this configuration for the DepthAI!!! Or should I call it OAK-D now? It could actually be easier for everyone if the current Kickstarter campaign reached the 'Power over Ethernet 2' stretch goal for the OAK-D, so you would get enough resources to make this i.MX 8M variant. It is the most cost-efficient and performant option to implement with sub-100 ms latency, with IEEE 1588v2 support and/or Time-Sensitive Networking (OPC/TSN).

        Thanks BetaMax for the interest here!

        So would you mind sharing what you would use this for (in general terms; don't feel like you have to spill your technical beans)? I'm wondering if any of our other ongoing hardware efforts may cover your needs here.

        Thoughts?

        Thanks,
        Brandon