Hi everyone!
So we ordered our first prototype of DepthAI for Raspberry Pi last week (catching up on posts!). It's bigger than the final version, and we'll probably clean up some routing when we later shrink the layout, but we wanted to order it sooner rather than later so we can discover what we've messed up as early as possible. :-) Also, having hardware in hand always makes you realize what you should have done instead. ;-)
First, the Altium 3D views:
And for the full specs:
- HDMI Connector
- CM3 Connector
- LAN9513 (3-port USB 2.0 hub + 10/100 Ethernet)
- 2x USB
- 1x USB slave input to accommodate USB boot config from CMIO (might remove later)
- 1x hard-wired USB 2.0 lane to Myriad X module
- 1x Ethernet
- Standard Raspberry Pi header
- Display connector (same as RPi 3 B+)
- RPi camera connector (same as RPi 3 B+)
- This is so you can use the board without our image sensors if you want, i.e. using it like an 'embedded NCS2'
- SD card slot
- Standard 3.5mm audio jack
- Three onboard cameras:
- 1x color, 12 MP: IMX378
- 2x grayscale for depth, 1 MP each: OV9282
- BW1099 connector (the BW1099 is our Myriad X module)
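As a quick illustration of why the two grayscale cameras enable depth: a calibrated stereo pair recovers distance by triangulation, with depth = focal_length × baseline / disparity. Here's a minimal sketch of that relationship; the baseline, focal length, and disparity values below are made-up illustrative numbers, not this board's actual calibration:

```python
# Illustrative stereo triangulation: depth = (focal_length_px * baseline_m) / disparity_px
# NOTE: the numbers below are hypothetical placeholders, not this board's calibration.

def disparity_to_depth(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Convert a pixel disparity between two rectified cameras into depth in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (an object at infinity has disparity 0)")
    return (focal_length_px * baseline_m) / disparity_px

# Example: assumed 7.5 cm baseline, 800 px focal length, 40 px measured disparity
depth_m = disparity_to_depth(disparity_px=40.0, focal_length_px=800.0, baseline_m=0.075)
print(f"{depth_m:.2f} m")  # 800 * 0.075 / 40 = 1.50 m
```

The same formula also shows the design trade-off: a wider camera baseline gives better depth resolution at range, at the cost of a larger minimum sensing distance.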
Many here may be unfamiliar with the Raspberry Pi Compute Module. It lets you 'integrate the Raspberry Pi into your design': it's a Raspberry Pi in an SO-DIMM form factor:
So this is what pops into the board above, letting it be used just like a Raspberry Pi while harnessing the crazy power of the Myriad X (4 trillion operations per second!) to process video and run neural networks.
Any/all comments/questions/suggestions/etc. are welcome! And feel free to chat in our Hackaday.io chat room (it's a bit lonely right now!).
Thanks,
The Luxonis Team