A digitally rendered image of an arm wearing a smartwatch with manufacturing equipment in the background.
Image: Texas A&M Engineering

One concern about AI is that it can imitate voices, a practice called voice spoofing. Deepfakes are a prominent type of voice spoofing enabled by deep learning, in which AI systems loosely modeled on the neural networks of the human brain learn patterns from data. Criminals can use voice spoofing to impersonate someone's family member and ask for money. Voice spoofing is also a concern in manufacturing settings, where verbal commands are widely used to control machinery. Criminals can engineer voices to manipulate or disrupt the machinery, or to steal data.

Machines and manufacturing personnel need a way to verify that verbal commands are legitimate. Some protections already exist, but they cannot stop sophisticated attacks such as deepfakes, so additional safeguards are needed.

Dr. Nitesh Saxena, a professor in the Department of Computer Science and Engineering at Texas A&M University, and Yingying (Jennifer) Chen, professor and department chair of Electrical and Computer Engineering at Rutgers University, are working to protect voice control technology against manipulation. Smartwatches may be part of their solution.


Wearable devices like smartwatches are already equipped with microphones and accelerometers, which measure vibrations. Saxena and Chen plan to develop software that can be installed on smartwatches or other wearable devices to help authenticate commands in manufacturing settings. 

Voice control technologies are susceptible to attack because existing defenses only assess a voice sample’s acoustic properties. Saxena and Chen hope to enhance security by requiring verification of both the acoustic properties and the vibration signals of voice samples. They believe this will improve security because it will be harder for criminals to fake both measures simultaneously. 
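The idea of checking two independent factors can be illustrated with a small sketch. This is not the researchers' actual system; the function names, thresholds, and the use of a simple envelope correlation are illustrative assumptions. The premise is that a genuine spoken command produces wrist vibrations (captured by the smartwatch accelerometer) that track the audio picked up by the microphone, while a replayed or synthesized voice does not.

```python
import math
import random

def envelope_correlation(a, b):
    """Normalized (Pearson) correlation between two equal-length signal envelopes."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    dev_a = [x - mean_a for x in a]
    dev_b = [y - mean_b for y in b]
    denom = math.sqrt(sum(x * x for x in dev_a) * sum(y * y for y in dev_b)) + 1e-12
    return sum(x * y for x, y in zip(dev_a, dev_b)) / denom

def authenticate(acoustic_score, mic_envelope, accel_envelope,
                 acoustic_threshold=0.8, vibration_threshold=0.6):
    """Accept a command only if BOTH factors pass: the acoustic
    speaker-verification score, and agreement between the audio
    envelope and the vibrations measured on the wearer's wrist.
    Thresholds here are arbitrary placeholders."""
    vibration_score = envelope_correlation(mic_envelope, accel_envelope)
    return acoustic_score >= acoustic_threshold and vibration_score >= vibration_threshold

# Toy demonstration with synthetic envelopes:
random.seed(0)
speech = [abs(random.gauss(0, 1)) for _ in range(200)]
# A genuine wearer's accelerometer picks up vibrations tracking the speech...
genuine_accel = [s + 0.1 * random.gauss(0, 1) for s in speech]
# ...while a loudspeaker replay produces no matching wrist vibration.
spoofed_accel = [abs(random.gauss(0, 1)) for _ in range(200)]

print(authenticate(0.95, speech, genuine_accel))   # True
print(authenticate(0.95, speech, spoofed_accel))   # False
```

In this toy setup, a deepfake can achieve a high acoustic score, but it fails the vibration check because nothing on the wearer's wrist moves in sync with the audio, which is what makes faking both measures simultaneously difficult.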

“Improving the security of voice authentication is critical in many applications, especially in manufacturing domains. Thanks to the support from our sponsor, this project will allow us to explore this important line of research,” Saxena said. His lab has pioneered work in voice-based security. 

This research is funded by MxD (Manufacturing x Digital), an organization that partners with the Department of Defense to strengthen U.S. manufacturing.