I have the ASL recognition project from Cortic Technology Corp. running in Windows WSL.
https://github.com/cortictechnology/hand_asl_recognition
Wow! Massive leap! Over 20 hours of pain just to get it running in WSL so I can work with it...but it will be worth it!
The American Sign Language Alphabet recognizer is running on the Oak-D-Lite...
It works great with several letters...less well with others...but I have an idea to make it much better.
Even with limited functionality, I can still use it immediately to communicate with Spud the Bot...AFTER I figure out how to translate the data to a form Spud can use.
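I haven't settled on how Spud will consume the output yet, but as a rough sketch, one option is to debounce the raw per-frame letter predictions into stable characters before sending them anywhere. Everything below is my own placeholder code, not from the project:

```python
def debounce_letters(frames, hold=5):
    """Collapse a noisy per-frame stream of predicted letters into stable
    characters: a letter is emitted once it has been seen in `hold`
    consecutive frames, and not again until the prediction changes."""
    out = []
    last_emitted = None
    run_char, run_len = None, 0
    for ch in frames:
        if ch == run_char:
            run_len += 1
        else:
            run_char, run_len = ch, 1
        # Emit only after a long enough run, and suppress repeats
        if run_len >= hold and run_char != last_emitted:
            out.append(run_char)
            last_emitted = run_char
    return "".join(out)

# Noisy per-frame predictions: a glitchy "N" and a too-short "H" run
# are filtered out, leaving the word "HI".
frames = ["H"] * 6 + ["N"] * 2 + ["H"] * 3 + ["I"] * 7
print(debounce_letters(frames, hold=5))  # -> HI
```

The stable character stream could then go to Spud over whatever transport he already listens on (serial, socket, etc.).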
The pipeline chains together three pieces from different projects:
- the hand recognizer locates the hand
- the hand pose recognizer locates the fingers
- the ASL recognizer recognizes the letter from the hand
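At a high level, the chain looks something like the sketch below. The stage functions are stubs standing in for the real models (which run as neural nets on the OAK-D-Lite); none of these names come from the repo:

```python
# Hypothetical sketch of the three-stage chain, with stubs in place of
# the actual neural-network stages.

def detect_hand(frame):
    """Stage 1: locate the hand -> bounding box (x, y, w, h)."""
    return {"bbox": (40, 40, 160, 160)}     # stub: fixed box

def estimate_pose(frame, bbox):
    """Stage 2: landmark model -> 21 (x, y) joint/fingertip points."""
    x, y, w, h = bbox
    return [(x + w // 2, y + h // 2)] * 21  # stub: all points at center

def classify_letter(frame, bbox):
    """Stage 3: ASL classifier on the cropped palm image -> a letter."""
    return "A"                              # stub: constant prediction

def recognize(frame):
    hand = detect_hand(frame)
    landmarks = estimate_pose(frame, hand["bbox"])
    letter = classify_letter(frame, hand["bbox"])
    return letter, landmarks

letter, landmarks = recognize(frame=None)
print(letter, len(landmarks))  # -> A 21
```

The key point is that stage 3 ignores the stage-2 landmarks entirely; it works from the cropped image.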
Interestingly, the pose recognizer seems VERY accurate...more accurate than the final ASL letter recognition.
I want to try teaching it to recognize the letter from the pose results instead of running image recognition on the palm itself. I know the current model uses image recognition on the palm image, because the training data is a bunch of pictures of palms.
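As a rough sketch of the idea: each hand could be reduced to its landmark coordinates, normalized relative to the wrist so position and hand size don't matter, and a simple classifier trained on those vectors instead of on palm images. Here is a toy nearest-centroid version on made-up data (3 points per hand instead of the real 21, for brevity; everything here is illustrative, not the project's code):

```python
import math

def normalize(landmarks):
    """Translate so the wrist (landmark 0) is the origin and scale by the
    farthest point, making features position- and size-invariant."""
    wx, wy = landmarks[0]
    pts = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(math.hypot(x, y) for x, y in pts) or 1.0
    return [c / scale for p in pts for c in p]

def train_centroids(samples):
    """samples: {letter: [landmark_list, ...]} -> {letter: mean feature vector}."""
    centroids = {}
    for letter, hands in samples.items():
        feats = [normalize(h) for h in hands]
        centroids[letter] = [sum(col) / len(feats) for col in zip(*feats)]
    return centroids

def predict(centroids, landmarks):
    """Return the letter whose centroid is closest (squared distance)."""
    f = normalize(landmarks)
    return min(centroids,
               key=lambda L: sum((a - b) ** 2 for a, b in zip(f, centroids[L])))

samples = {
    "A": [[(0, 0), (1, 0), (1, 1)], [(0, 0), (1.1, 0), (1, 0.9)]],
    "B": [[(0, 0), (0, 1), (1, 1)], [(0, 0), (0.1, 1), (0.9, 1)]],
}
centroids = train_centroids(samples)
# Same shape as "A", but shifted and twice the size -> still classified "A".
print(predict(centroids, [(2, 2), (4, 2), (4, 4)]))  # -> A
```

A real version would swap the toy data for landmark vectors logged from the pose stage, and the nearest-centroid step for a proper classifier, but the feature engineering is the interesting part.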
![](https://i.imgur.com/TiwCEsv.png)
![](https://i.imgur.com/pFXblx3.png)