@Brandon,
Sounds really cool! 🙂
- Yes. Also, I don't think the device bobbing up and down while walking would be a show stopper. Though of course, it would be fantastic to at least significantly minimize it ...
• That's because when my blind friends and I tested my current implementation, we found that we mostly need options to be on-demand;
• So I used a single Bluetooth button and coded a way for the user to select an option on demand, e.g. detect objects / persons, detect central distance (using the sonar sensor), read text, step down each line of recognized text, request manual visual assistance through Skype / Facebook, play multimedia like an audiobook, etc.; and
• So this time, I'm thinking of connecting a tiny programmable USB or GPIO button to the device itself, so the user can select / deactivate an on-demand option; and
• But a GoPro chest strap, or something at the back of the DepthAI Compute Module Edition that can securely fasten it to the neckline of a user's shirt, would be awesome ...
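The single-button, on-demand selection described above could be sketched as a tiny state machine: short presses cycle through the options (announcing each one), and a long press activates or deactivates the current one. This is only a sketch of the idea, not my actual code; the option names are illustrative, and on the device the press events would come from the Bluetooth or GPIO button handler rather than plain method calls.

```python
# Illustrative sketch: single-button cycling through on-demand options.
# Option names below are examples, not the exact strings in my implementation.

OPTIONS = [
    "detect objects / persons",
    "detect central distance",
    "read text",
    "next line of recognized text",
    "request manual visual assistance",
    "play multimedia",
]

class OptionSelector:
    """Short press cycles options; long press toggles the current one."""

    def __init__(self, options):
        self.options = options
        self.index = 0
        self.active = None  # currently activated option, if any

    def short_press(self):
        # Advance to the next option, wrapping around, and return its
        # name so it can be announced via TTS.
        self.index = (self.index + 1) % len(self.options)
        return self.options[self.index]

    def long_press(self):
        # Activate the current option, or deactivate it if it is
        # already the active one.
        current = self.options[self.index]
        self.active = None if self.active == current else current
        return self.active
```

On the device, the button callback would simply call `short_press()` or `long_press()` depending on how long the button was held, and speak the returned option name.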
Yes, connecting a tiny USB Bluetooth dongle is a good solution.
Yes. Audible output is quite straightforward through the Pico TTS library in Python (there are other libraries, though this is the one that my blind peers and I really like) ...
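For anyone curious, a common way to drive Pico TTS from Python on a Raspberry Pi is to shell out to the `pico2wave` command (from the Debian/Raspbian `libttspico-utils` package) and play the result with `aplay` (from `alsa-utils`). This is just one hedged way to wire it up, assuming those packages are installed; it is not necessarily the exact approach in my code:

```python
import subprocess
import tempfile

def pico_cmd(text, wav_path, lang="en-US"):
    # Build the pico2wave command line: -l selects the language,
    # -w names the output WAV file, and the text follows.
    return ["pico2wave", "-l", lang, "-w", wav_path, text]

def speak(text, lang="en-US"):
    """Synthesize `text` with SVOX Pico and play it through ALSA.

    Assumes `pico2wave` (libttspico-utils) and `aplay` (alsa-utils)
    are installed, as on a typical Raspbian setup.
    """
    with tempfile.NamedTemporaryFile(suffix=".wav") as wav:
        subprocess.run(pico_cmd(text, wav.name, lang), check=True)
        subprocess.run(["aplay", "-q", wav.name], check=True)
```

Calling `speak("Read text mode selected")` after each button press would give the on-demand audible feedback described above.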
Overall, a case to hold the shutter camera pair (left and right), the center color camera, the DepthAI board, the RPi Compute Module 3+, a rechargeable LiPoly battery, the USB Bluetooth dongle, a tiny programmable USB / GPIO button / switch, and clips at the back to attach it to the neckline of a user's shirt would be really cool! 🙂