Dr. Chaspari and her team of students. Dr. Chaspari has received a 2019 PESCA grant to enhance emotional well-being technology. | Image: Justin Baetge/Texas A&M Engineering Communications

From Fitbits to mobile meditation apps, personal health has taken technology by storm.

While many programs use inputs such as heart rate or calorie intake to track the condition of a user, a person’s voice can speak volumes about their well-being.

Utilizing the insightful nature of speech and vocal patterns, Dr. Theodora Chaspari is working to develop a digital program that monitors and tracks a user’s emotional state while keeping their identity completely anonymous. Her research has the potential to transform everyday devices into valuable assets for psychological healthcare and future research.

Chaspari is an assistant professor in the Department of Computer Science and Engineering at Texas A&M University. Her project is funded by a 2019 PESCA grant provided by the university’s Division of Research.

Similar to the way a Fitbit records activity to track a person’s physical fitness, many voice-enabled devices, such as cellphones and Amazon’s Echo, are able to record speech to track and monitor emotional wellness. This is done through acoustic markers, measurable features of vocal patterns that allow digital devices to draw conclusions about the mood of a speaker.
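
For readers curious what such acoustic markers look like in practice, the sketch below computes a few common ones (pitch, vocal intensity and spectral shape) using the open-source librosa library. This is only an illustration of the general approach, not the team’s actual pipeline, and the feature choices here are assumptions for the sake of example.

```python
# Minimal sketch: extracting mood-related acoustic markers from a speech recording.
# Illustrative only; not Chaspari's actual system.
import librosa
import numpy as np

def extract_acoustic_markers(path):
    """Compute a few simple acoustic features from an audio file."""
    y, sr = librosa.load(path, sr=16000)

    # Fundamental frequency (pitch) contour; its level and variability
    # are commonly used cues for emotional arousal.
    f0, voiced_flag, voiced_probs = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]

    # Short-term energy, a rough proxy for vocal intensity.
    rms = librosa.feature.rms(y=y)[0]

    # Mel-frequency cepstral coefficients summarize spectral shape (timbre).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    return {
        "pitch_mean": float(np.mean(f0)) if f0.size else 0.0,
        "pitch_std": float(np.std(f0)) if f0.size else 0.0,
        "energy_mean": float(np.mean(rms)),
        "mfcc_means": mfcc.mean(axis=1),
    }
```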

While voice-compatible and voice-activated technology is both prominent and convenient, it also runs the risk of privacy invasion.

As Chaspari explained, everyone’s voice contains unique attributes, including timbre and pitch, that reveal such things as age and gender. This information can then be used to identify a speaker, which is especially concerning when personal health information is involved.

“What we want to do is make ordinary devices be able to understand emotion, while erasing any information related to the identity of the speaker,” said Chaspari. “So we are transforming the speech signal and deriving measures that are emotion-dependent, behavior-dependent, but not identity-dependent.”
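
Chaspari’s exact method is not described here, but one simple and widely used way to reduce identity-dependent information while keeping emotion-dependent variation is per-speaker feature normalization: each speaker’s features are re-centered and re-scaled by that speaker’s own statistics, so what remains reflects changes within a speaker rather than who the speaker is. The sketch below illustrates the idea with NumPy on hypothetical feature arrays.

```python
# Hypothetical illustration: per-speaker z-normalization of acoustic features.
# This removes speaker-level offsets (which help identify the speaker) while
# keeping within-speaker variation (which carries emotional information).
import numpy as np

def speaker_normalize(features_by_speaker):
    """features_by_speaker: dict mapping speaker id -> array (n_frames, n_features)."""
    normalized = {}
    for speaker, feats in features_by_speaker.items():
        mean = feats.mean(axis=0)
        std = feats.std(axis=0) + 1e-8  # avoid division by zero
        normalized[speaker] = (feats - mean) / std
    return normalized

# Synthetic example: two speakers with different baseline pitch and energy.
rng = np.random.default_rng(0)
data = {
    "speaker_a": rng.normal(loc=[120.0, 0.02], scale=[15.0, 0.005], size=(100, 2)),
    "speaker_b": rng.normal(loc=[210.0, 0.03], scale=[20.0, 0.006], size=(100, 2)),
}
anonymized = speaker_normalize(data)
# After normalization, both speakers' features have roughly zero mean, so the
# baseline differences that reveal identity are largely removed.
```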

By removing the link between voice and speaker to provide anonymity to users, Chaspari’s emotion monitoring program will be a vital stepping stone in the development of emotional healthcare and research in a digital world.

Whether as a tool to help psychiatrists track grief or as a way for parents to keep an eye on their child’s emotional development, the voice has a lot to say about the future of personal well-being.

“There have been a lot of studies about how tracking the speech and vocal patterns of people with depression can help understand its progression and recognize problematic episodes,” said Chaspari. “Now that we have technology that can comprehend and record data from people's lives, the possibilities are endless.”