Luxonis DepthAI and megaAI | Overview and Status

Hi Brandon,
I just saw the announcement from the President of OpenCV yesterday. That's great news! You are doing great work! I'm happy to see the Myriad X is much more capable than what the NCS2 can deliver. Thank you for putting effort into it.

Best regards,
Jan

Hey JanT,

Thanks a ton! Really looking forward to working with OpenCV on this to make it easy to use and wicked useful.

Thanks again,
Brandon

3 months later

Hi DepthAI fans,

So we've done SO MUCH since we last updated here. The only thing we haven't done is keep this post active.

So what have we done:

  • We delivered our Crowd Supply on time! Backers are happily using DepthAI now, and are discussing ideas on our luxonis-community.slack.com public slack group.
  • We got our first set of documentation out. https://docs.luxonis.com/
  • We made a couple of new models, which are available now (at https://shop.luxonis.com/), and we will have these on Crowd Supply soon.
  • We are in the process of making a power-over-ethernet version of DepthAI.
  • Our MVP Python API is running (and super fun to play with)

New Models
Here are the new hardware models we've released since the Crowd Supply campaign started. These include a USB3 Edition with onboard cameras and a tiny, single-camera USB3 Edition (which we're calling μAI):

USB3C with Onboard Cameras (BW1098OBC):

μAI (BW1093):

Upcoming Model


This is the first engineering-development build of the PoE version of DepthAI. Some interesting new features include:

  1. A new module (the BW1099) with:
     • Built-in 128GB eMMC
     • SD-Card interface for base-board SD-Card support
     • PCIe support for Ethernet
  2. A reference-design carrier board with:
     • PoE (100/1000)
     • SD-Card
     • 4-lane MIPI for a 12MP camera (BG0249)

MVP Functionality

So the core functionality gives 3D object localization as the output from the DepthAI - with all processing done on the Myriad X - and no other hardware required. The Raspberry Pi here is used purely to show the results.

So what you can see is the person - a little person at that - at 1.903 meters away from the camera, 0.427 meters below the camera, and 0.248 meters to the right of that camera.

And you can also see the chair, which is 0.607 meters to the left of the camera, 0.45 meters below the camera, and 2.135 meters away from the camera.
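
To give a feel for how those numbers compose, here is a tiny sketch in plain Python (not tied to the DepthAI API) that turns the reported X/Y/Z offsets into a straight-line distance from the camera. The values are simply the ones quoted above for the person and the chair, with a sign convention (right and down positive) chosen just for illustration:

```python
import math

def straight_line_distance(x: float, y: float, z: float) -> float:
    """Euclidean distance (meters) from the camera to a detected object,
    given its offsets: x (right), y (down), z (forward), all in meters."""
    return math.sqrt(x * x + y * y + z * z)

# The values quoted above, relative to the camera (illustrative sign convention).
detections = {
    "person": (0.248, 0.427, 1.903),   # right, below, away
    "chair":  (-0.607, 0.450, 2.135),  # left, below, away
}

for label, (x, y, z) in detections.items():
    print(f"{label}: {straight_line_distance(x, y, z):.2f} m from the camera")
```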

And for good measure, here is our test subject walking over to the chair:

The results are returned in real time, and the video is optional. We even have a special version that outputs results over SPI for using DepthAI with microcontrollers like the MSP430. (Contact us at support@luxonis.com if this is of interest.)

Cheers,
Brandon & the Luxonis Team

And here's a video view of the MVP:

a month later

Our intern went ahead and got DepthAI working natively on Mac OS X:

We’ll be writing up instructions soon. Almost all of the work is actually just setting a Mac up for Python development using Homebrew... so if your Mac is already set up for that it pretty much ‘just works’.

Meant to share this a while ago. So we have our initial online custom training for DepthAI now live on Colab.

https://colab.research.google.com/drive/1Eg-Pv7Amgc3THB6ZbnSaDJm0JAr0QPPU

So there are two notable limitations currently:

  1. DepthAI currently supports OpenVINO 2019 R3, which itself requires older versions of TensorFlow and so on. So this flow pins all those old versions, which adds a lot of extra steps in Colab... a lot of uninstalling current versions of things and installing old ones. We are currently in the process of upgrading our DepthAI codebase to support OpenVINO 2020.1 (see here). We'll release an updated training flow when that's done.
  2. The final conversion for DepthAI (to .blob) for some reason will not run on Google Colab, so it requires a local machine (a rough sketch of that local step is below). We're planning on making our own server for this purpose that Google Colab can talk to for the conversion.
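
For anyone doing that local step, here is a rough sketch of the .blob compile using OpenVINO's myriad_compile tool, wrapped in Python. The install path, platform/SHAVE/CMX flag values, and file names below are assumptions based on a typical OpenVINO 2019 R3 install (not an official recipe), so check them against your own setup and our docs:

```python
import subprocess
from pathlib import Path

# Assumed OpenVINO 2019 R3 install location -- adjust for your machine.
OPENVINO_DIR = Path("/opt/intel/openvino")
MYRIAD_COMPILE = OPENVINO_DIR / "deployment_tools/inference_engine/lib/intel64/myriad_compile"

def compile_blob(model_xml: str, blob_out: str) -> None:
    """Compile an OpenVINO IR (.xml/.bin pair) into a Myriad X .blob.

    The SHAVE/CMX counts are example values -- they need to match what
    your DepthAI pipeline expects for the model.
    """
    subprocess.run(
        [
            str(MYRIAD_COMPILE),
            "-m", model_xml,
            "-o", blob_out,
            "-VPU_MYRIAD_PLATFORM", "VPU_MYRIAD_2480",  # Myriad X
            "-VPU_NUMBER_OF_SHAVES", "4",
            "-VPU_NUMBER_OF_CMX_SLICES", "4",
        ],
        check=True,
    )

if __name__ == "__main__":
    # Placeholder file names -- use your own exported IR files.
    compile_blob("frozen_inference_graph.xml", "frozen_inference_graph.blob")
```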

To test the custom training we took some images of apples and oranges and did a terrible job labeling them and then trained and converted the network and ran it on DepthAI. It's easy to get WAY better accuracies and detection rates by using something like basic.ai to generate a larger dataset.

Cheers,
Brandon

8 days later

We now have a more complete training flow:
https://docs.luxonis.com/tutorials/object_det_mnssv2_training/

And we used it to train a mask/no-mask model for DepthAI with a quick effort over the weekend:

More images of validation/testing on Google Colab here:
https://photos.app.goo.gl/FhhUCLTsm6tqBgqL8

And here's the Google Colab used to train DepthAI on mask/no-mask face detection:
https://colab.research.google.com/drive/1uY5vekGK7S6uD88d28G861SIRh9yYbjJ

Hi DepthAI Fans,

As promised, we have open sourced the DepthAI hardware!

All the carrier boards for the DepthAI System on Module (SoM), including the Altium design files and all supporting information, are below:

https://github.com/luxonis/depthai-hardware


So now you can integrate the power of DepthAI into your custom prototypes and products at the board level using the DepthAI System on Module (SoM).

We can't wait to see what you build with it (and we've already seen some really cool things!).

Cheers, Brandon & the Luxonis Team

The Power over Ethernet (PoE) variant of DepthAI is starting to trickle in (after COVID-19 delays)...

We now have the baseboard (which actually implements the PoE portion):

So now you can deploy DepthAI all over the place, with up to 100 meters (328 feet) of cable between you and the device! The power of DepthAI, with the convenience of Power over Ethernet deployment.

6 days later

DepthAI on the Jetson TX2. We followed the same build instructions used on Mac OS X (here) and it built without a single complaint and worked on the first try:

14 days later

The PoE boards work great! We tested gigabit full-duplex (1,000FDX) over PoE (from a UniFi switch) and they work exactly as intended.

Here's DepthAI running on our Power over Ethernet prototypes:

8 days later

We launched megaAI on Crowd Supply today!


4K Video at 30FPS on a Pi, while running object detection in parallel!

Get yours now before the early bird and roadrunner specials sell out! Only 14 left!

https://www.crowdsupply.com/luxonis/megaai

17 days later

Hi DepthAI (and megaAI) fans!

So we have a couple of customers who are interested in IR-only variants of the global-shutter cameras used for depth, so we made a quick variant of DepthAI with these.

We actually just made adapter boards which plug directly into the BW1097 (here) after unplugging the existing onboard cameras. We tested with this IR flashlight here.


It's a bit hard to see, but you can tell the room is relatively dark in visible light, and the IR cameras pick up the IR light quite well.

Cheers,

The Luxonis Team

More great news coming at you! We've accomplished so much so fast recently that it's hard to keep up with the updates.

Over the weekend we wrote a driver for the IMX477 used in the Raspberry Pi HQ Camera.

So now you can use the awesome new Raspberry Pi HQ camera with DepthAI FFC (here). Below are some videos of it working right after we wrote the driver this weekend.


Notice that it even worked w/ an extra-long FFC cable!

More details on how to use it are here. And remember, DepthAI is open source, so you can even make your own adapter (or other DepthAI boards) from our GitHub here.

And you can buy the adapter here: https://shop.luxonis.com/products/rpi-hq-camera-imx477-adapter-kit

Cheers,

Brandon & the Luxonis team

8 days later

We have a super-interesting feature-set coming to DepthAI:

  • 3D feature localization (e.g. finding facial features) in physical space
  • Parallel-inference-based 3D object localization
  • Two-stage neural inference support

And all of these are initially working (in this PR, here).

So, on to the details of how this works:

We are actually implementing a feature that allows you to run neural inference on either or both of the grayscale cameras.

This sort of flow is ideal for finding the 3D location of small objects, shiny objects, or objects for which disparity depth might struggle to resolve the distance (the z-dimension needed to get the full 3D position, XYZ). So this now means DepthAI can be used in two modalities:

  1. As it's used now: the disparity-depth results within the object detector's bounding box are used to re-project the XYZ location of the center of the object.
  2. Run the neural network in parallel on both the left and right grayscale cameras, and use the results to triangulate the location of features.

An example where 2 is extremely useful is finding the xyz positions of facial landmarks, such as eyes, nose, and corners of the mouth.

Why is this useful for facial features? For small features like these, the risk of disparity depth having a hole at that location goes up, and even worse, for faces with glasses, the reflection off the glasses may throw the disparity-depth calculation off (and in fact it might 'properly' give the depth result for the reflected object).

When running the neural network in parallel, none of these issues exist: the network finds the eyes, nose, and mouth corners in each image; the disparity (in pixels) between where these land in the left and right streams gives the z-dimension (depth is proportional to 1/disparity); and that depth is then reprojected through the optics of the camera to get the full XYZ position of each feature.
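
To make the geometry concrete, here is a minimal triangulation sketch in plain Python (independent of the DepthAI API): after rectification, the same landmark found in the left and right images differs only in its horizontal pixel position; that difference (the disparity) gives Z = focal_length · baseline / disparity, and Z is then reprojected through the camera intrinsics to recover X and Y. The focal length, baseline, and principal point below are placeholder values, not actual DepthAI calibration data:

```python
from typing import Tuple

# Placeholder intrinsics/extrinsics -- real values come from calibration.
FOCAL_PX = 860.0       # focal length in pixels (rectified grayscale cameras)
BASELINE_M = 0.075     # distance between the left and right cameras, meters
CX, CY = 640.0, 400.0  # principal point of the rectified image, pixels

def triangulate(left_px: Tuple[float, float],
                right_px: Tuple[float, float]) -> Tuple[float, float, float]:
    """Locate a landmark in 3D from its pixel position in the rectified
    left and right images (same row in both after rectification)."""
    (ul, vl), (ur, _) = left_px, right_px
    disparity = ul - ur                      # pixels; larger disparity = closer
    z = FOCAL_PX * BASELINE_M / disparity    # depth is proportional to 1/disparity
    x = (ul - CX) * z / FOCAL_PX             # reproject through the optics
    y = (vl - CY) * z / FOCAL_PX
    return x, y, z

# Example: a landmark (say, an eye) found by the network in both frames.
print(triangulate((700.0, 380.0), (655.0, 380.0)))
```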

And as you can see below, it works fine even w/ my quite-reflective anti-glare glasses:

Thoughts?

Cheers,
Brandon and the Luxonis Team
