Bridging the gap

New technology at Texas A&M could enable smart devices to improve lines of communication by recognizing and interpreting sign language.

Close to 500,000 Americans are deaf and use American Sign Language, yet far fewer hearing people are literate in the language. New wearable smart technology could soon bridge the communication gap between people who are deaf and those who don't know sign language.

The brainchild of Roozbeh Jafari, associate professor in the university's Department of Biomedical Engineering, the prototype smart device features two sensors strapped to the wearer's right wrist. The sensors measure hand and arm movement as well as muscle activity and send that data via wireless Bluetooth® technology to an external laptop, where complex algorithms interpret the sign and display the correct English word for the gesture.
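The data path described above — sensor samples leaving the wrist over Bluetooth and arriving at a laptop for interpretation — might be sketched as a simple serialization step. The packet layout below (three accelerometer axes, three gyroscope axes, one sEMG channel) is an illustrative assumption; the prototype's actual wire format is not published.

```python
import struct

# Hypothetical packet layout for one sensor sample: three accelerometer
# axes, three gyroscope axes, and one sEMG channel, all little-endian
# floats. This stands in for whatever format the prototype actually uses.
SAMPLE_FMT = "<7f"

def pack_sample(ax, ay, az, gx, gy, gz, semg):
    """Serialize one inertial + sEMG reading for Bluetooth transfer."""
    return struct.pack(SAMPLE_FMT, ax, ay, az, gx, gy, gz, semg)

def unpack_sample(payload):
    """Deserialize a received packet on the laptop side, before the
    recognition algorithms interpret the gesture."""
    return struct.unpack(SAMPLE_FMT, payload)
```

In a sketch like this, the wearable would call `pack_sample` for each reading and stream the bytes over a Bluetooth serial link; the laptop would call `unpack_sample` on each fixed-size packet before feature extraction.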

"Although the device is still in its prototype stage, it can already recognize 40 American Sign Language words with nearly 96 percent accuracy," notes Jafari.

He presented his research at the Institute of Electrical and Electronics Engineers 12th Annual Body Sensor Networks Conference this past June. The technology was among the top award winners in the Texas Instruments Innovation Challenge this past summer.

Exploring the tenets of wearability

The technology, developed in collaboration with Texas Instruments, represents a growing interest in the creation of high-tech sign language recognition systems (SLRs). But unlike other recent initiatives, Jafari's system forgoes the use of a camera to capture gestures. 

"Video-based recognition can suffer performance issues in poor lighting conditions, and the videos or images captured may be considered invasive to the user's privacy," he explains. 

What's more, because these systems require a user to gesture in front of a camera, they have limited wearability – and wearability, for Jafari, is key.

"Wearables provide a very interesting opportunity in the sense of their tight coupling with the human body," Jafari says. "Because they are attached to our body, wearables can be used to gather data about us and provide valuable feedback."

To enhance the wearability of the device, Jafari is working on refining its components to a point where it would be indistinguishable from a watch. The technology currently uses an accelerometer and gyroscope combined as an inertial sensor. This sensor responds to the wearer's hand orientations and hand and arm movements during a gesture to discriminate between different signs. It works in tandem with a surface electromyography sensor (sEMG) that measures muscle activity.

"Certain signs in American Sign Language are similar in terms of the gestures required to convey the word. With these gestures, the overall movement of the hand may be the same for two different signs, but the movement of individual fingers may be different," he explains. 

For example, the gesture pairs "please"/"sorry" and "name"/"work" are each similar in hand motion. The sEMG sensor distinguishes the underlying hand and finger movements for a more accurate interpretation of the sign.
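The idea that muscle activity breaks ties between motion-similar signs can be illustrated with a toy classifier. The feature pairs and template values below are invented for illustration — the prototype's real feature set and recognition algorithms are more sophisticated — but the structure shows how an sEMG reading separates two signs whose motion features match.

```python
import math

# Invented templates for four signs: each is (mean acceleration magnitude,
# mean sEMG amplitude). Note "please"/"sorry" share the motion feature and
# differ only in muscle activity, as do "name"/"work".
TEMPLATES = {
    "please": (1.2, 0.30),
    "sorry":  (1.2, 0.55),
    "name":   (0.8, 0.20),
    "work":   (0.8, 0.45),
}

def extract_features(accel_samples, semg_samples):
    """Reduce raw sensor streams to a toy two-number feature vector."""
    accel_mag = sum(math.sqrt(x * x + y * y + z * z)
                    for x, y, z in accel_samples) / len(accel_samples)
    semg_amp = sum(abs(v) for v in semg_samples) / len(semg_samples)
    return (accel_mag, semg_amp)

def classify(features):
    """Nearest-template match: return the English word for the gesture."""
    return min(TEMPLATES, key=lambda word: math.dist(features, TEMPLATES[word]))
```

With motion features alone, "please" and "sorry" would be indistinguishable here; adding the sEMG amplitude lets the nearest-template match pick the right word.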

Jafari envisions the device collecting the data produced from a gesture, interpreting it and then sending the corresponding English word to another person's smart device so that he or she can understand what is being signed. In addition, he is working to increase the number of signs recognized by the system and to expand the system to both hands.

"The combination of muscle activation detection with motion sensors, coupled with other applications, is a new and exciting way to understand human intent," says Jafari. "In addition, it has application for enhancing other SLR systems, such as home device activations using context-aware wearables." 

Dr. Roozbeh Jafari
Associate Professor and Affiliated Faculty, Center for Remote Health Technologies and Systems
Biomedical Engineering, Computer Science and Engineering; Electrical and Computer Engineering