• DepthAI-v2
  • American Sign Language Alphabet Recognition running WSL

I have the ASL recognition project from Cortic Technology Corp. running in Windows WSL.

https://github.com/cortictechnology/hand_asl_recognition

Wow! Massive leap! Over 20 hours of pain just to get it running in WSL so I can work with it...but it will be worth it!

The American Sign Language Alphabet recognizer is running on the Oak-D-Lite...

It works fantastically with several letters...less well with others...but I have an idea to make it much better.

Even with limited functionality, I can still use it immediately to communicate with Spud the Bot...AFTER I figure out how to translate the data to a form Spud can use.

The pipeline chains three models from different projects:

• the hand recognizer locates the hand
• the hand pose recognizer locates the fingers
• the ASL recognizer classifies the letter from the hand
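The chaining of those three stages can be sketched as below. The stage callables (`detect_hand`, `estimate_landmarks`, `classify_letter`) are hypothetical placeholders standing in for the project's actual models, not functions from the hand_asl_recognition repo:

```python
def make_recognizer(detect_hand, estimate_landmarks, classify_letter):
    """Chain the three stages; each argument is a callable wrapping one model.

    detect_hand(frame)                 -> hand bounding box, or None
    estimate_landmarks(frame, box)     -> 21 (x, y, z) finger landmarks
    classify_letter(frame, box, lms)   -> (letter, confidence)
    """
    def recognize(frame):
        box = detect_hand(frame)
        if box is None:
            return None  # no hand in frame, skip the later stages
        landmarks = estimate_landmarks(frame, box)
        return classify_letter(frame, box, landmarks)
    return recognize
```

The point of the wrapper is just that each stage only runs when the previous one succeeds, which is also where the per-stage accuracy differences show up.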

Interestingly, the pose recognizer seems VERY accurate...more accurate than the final result of recognizing the ASL letter.

I want to try teaching it to use the pose results to recognize the letter, rather than running image recognition on the palm itself. I know it is currently using image recognition on the palm image because the training data is a bunch of pictures of palms.
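A minimal sketch of that idea, classifying from the 21 pose landmarks instead of the palm image. The nearest-neighbour matcher here is a stand-in for a trained model, and the assumption that landmark 0 is the wrist follows the MediaPipe hand-landmark convention; both are assumptions, not details from the repo:

```python
import numpy as np

def normalize_landmarks(pts):
    """Make 21 (x, y, z) hand landmarks translation- and scale-invariant.

    Landmark 0 is assumed to be the wrist (MediaPipe convention), so the
    same handshape gives the same vector regardless of where the hand
    sits in the frame or how far it is from the camera.
    """
    pts = np.asarray(pts, dtype=float)
    pts = pts - pts[0]                       # wrist becomes the origin
    scale = np.linalg.norm(pts, axis=1).max()
    return pts / scale if scale > 0 else pts

def classify(pts, templates):
    """Nearest-neighbour over per-letter template poses.

    templates: dict mapping letter -> normalized (21, 3) landmark array.
    A trained classifier (MLP, k-NN over many samples) would replace this.
    """
    x = normalize_landmarks(pts).ravel()
    return min(templates,
               key=lambda k: np.linalg.norm(x - np.asarray(templates[k]).ravel()))
```

Because the normalization strips out position and scale, the classifier only ever sees the handshape, which is exactly the part the pose recognizer already gets right.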

7 months later

This is cool. I see that it cannot do gestures, which is reasonable.
Zack Freedman has a YouTube channel where he makes a "data glove" and has trained gesture data using TensorFlow.
In one of the related videos, he mentions a gesture cycle. Perhaps a gesture cycle could allow for J and Z, as well as increasing confidence in a particular static letter.
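One way the gesture-cycle idea could look: a sliding window over per-frame results, where a majority vote stabilizes the static letters, and fingertip movement while holding the I or D handshape suggests the motion letters J and Z. The thresholds and handshape-to-letter rules are illustrative guesses, not trained values:

```python
from collections import Counter, deque

class GestureWindow:
    """Sliding window over per-frame (letter, fingertip) results.

    Majority vote over the window raises confidence in a static letter;
    a crude trajectory check flags the motion letters J (I-handshape
    traced through a hook) and Z (D-handshape traced through a zigzag).
    """

    def __init__(self, size=8, motion_threshold=0.1):
        self.frames = deque(maxlen=size)   # (letter, (tip_x, tip_y)) pairs
        self.motion_threshold = motion_threshold

    def update(self, letter, fingertip):
        self.frames.append((letter, fingertip))
        letters = [l for l, _ in self.frames]
        label, count = Counter(letters).most_common(1)[0]
        if count <= len(letters) // 2:
            return None                    # no stable reading yet
        xs = [p[0] for _, p in self.frames]
        ys = [p[1] for _, p in self.frames]
        moving = (max(xs) - min(xs) > self.motion_threshold or
                  max(ys) - min(ys) > self.motion_threshold)
        if label == "I" and moving:
            return "J"
        if label == "D" and moving:
            return "Z"
        return label
```

A real version would match the shape of the trajectory, not just its extent, but the window structure is the part the gesture-cycle idea adds.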

I understand the data is different: he has an IMU and a couple of extra inputs for the gesture, as opposed to video capture giving perhaps part of a gesture sequence.
With a large enough collection of gesture sequences (perhaps from students at an ASL school), could a model be created for a large portion of the ASL dictionary?

Looking forward, I am thinking of a necklace pendant: an Oak-1 module connected to a Teensy running TensorFlow Lite for Microcontrollers, or a small single-board computer, plus battery and power management. The output would be text-to-speech, or, if that is too much processing, the text could be sent out via Bluetooth to a smartwatch or phone.
