Hi @Brandon,
Cool! 🙂
• I'll take a closer look later this week. I'm planning to test some sequential approaches for running object detection, tracking, text detection, and text recognition, using OpenVINO, Pytesseract, and the Microsoft OCR Read Cloud API ...
Also, I managed to run the BW1097 with my PiJuice HAT and a 40-pin GPIO extender ribbon cable (male to female).
• However, after a couple of days the PiJuice suddenly couldn't supply enough power anymore, even though the PiJuice HAT itself still seems to be working, based on its LEDs.
• And my Zero2Go Omini isn't working either. I remember we used a capacitor to make it work with an RPi Zero WH ...
• So I'm back to the BW1097's official power supply while I wait for the USB-A male to barrel plug that I ordered, as we discussed in Slack ...
And in the next day or so I'll share a video in Slack covering some of the things I've been working on.
• This mainly involves calculating object placement as clockface locations, e.g. if x_distance and y_distance are <= -0.85 m, then the clockface location is "7 o'clock" (Python orders negative values by sign rather than magnitude, so this comparison becomes >= in my code) ...
• Converting z_distance into a number of standard steps, e.g. if z_distance <= 1 m, then "2 steps away or less" ...
• Stringing this together with the MobileNet object class, e.g. "9 o'clock. Person. 3 steps away" (see the first sketch after this list) ...
• Generating quick beeps with stereo panning as a prefix to the TTS of the combined result, e.g. a quick stereo pan from right to left if the object is on the left (10, 9 and 7 o'clock), no panning for the center clockface locations (12 noon, dead center, 6 o'clock), etc. (see the second sketch after this list) ...
• Also using fade-ins/outs and 3 different pitches for these beeps, e.g. a lower pitch for the 5 / 6 / 7 o'clock positions, and so on, so the user can recognize the object's location much faster ...
• Plus, using 3 different TTS voices for the same reason, i.e. one voice for object location, another for object class, and another for distance ...
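Here's a minimal sketch of the clockface / steps / message logic described above. The function names, the left/right split, and the 0.5 m step length are just illustrative assumptions; only the -0.85 m and 1 m thresholds come from my examples:

```python
def clockface_location(x_distance, y_distance, threshold=0.85):
    """Map left/right (x) and up/down (y) offsets in metres onto a rough clock position."""
    left = x_distance <= -threshold
    right = x_distance >= threshold
    up = y_distance >= threshold
    down = y_distance <= -threshold
    if left:
        return "7 o'clock" if down else "10 o'clock" if up else "9 o'clock"
    if right:
        return "5 o'clock" if down else "2 o'clock" if up else "3 o'clock"
    if up:
        return "12 noon"
    if down:
        return "6 o'clock"
    return "dead center"


def steps_away(z_distance, step_length=0.5):
    """Convert forward distance (metres) into a coarse number of standard steps."""
    if z_distance <= 1.0:
        return "2 steps away or less"
    return f"about {round(z_distance / step_length)} steps away"


def describe(object_class, x_distance, y_distance, z_distance):
    """Combine clock position, detected class, and step distance into one TTS-ready phrase."""
    return (f"{clockface_location(x_distance, y_distance)}. "
            f"{object_class.capitalize()}. "
            f"{steps_away(z_distance)}.")


print(describe("person", -1.2, -0.9, 1.6))  # -> "7 o'clock. Person. about 3 steps away."
```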
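And a rough sketch of the beep prefix idea, assuming numpy and scipy are available; the specific frequencies, fade length, and pan sweep are illustrative placeholders rather than my exact settings:

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100


def beep(frequency, pan, duration=0.15, fade=0.02):
    """Sine beep with linear fade-in/out, panned between hard left (-1) and hard right (+1)."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    tone = np.sin(2 * np.pi * frequency * t)
    # Fade in and out to avoid clicks.
    n_fade = int(SAMPLE_RATE * fade)
    envelope = np.ones_like(tone)
    envelope[:n_fade] = np.linspace(0.0, 1.0, n_fade)
    envelope[-n_fade:] = np.linspace(1.0, 0.0, n_fade)
    tone *= envelope
    # Constant-power pan: split the mono tone into left/right channels.
    angle = (pan + 1) * np.pi / 4
    return np.stack([tone * np.cos(angle), tone * np.sin(angle)], axis=1)


# Example pitch mapping: lower pitch for the 5/6/7 o'clock positions, and so on.
PITCH = {"low": 330, "mid": 440, "high": 660}

# Quick pan sweep from right to left for an object on the left side (10, 9, 7 o'clock).
sweep = np.concatenate([beep(PITCH["mid"], pan) for pan in (1.0, 0.0, -1.0)])
wavfile.write("beep_prefix.wav", SAMPLE_RATE, (sweep * 32767).astype(np.int16))
```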
Cheers! 🙂