Photo of Dr. Theodora Chaspari. When combined with artificial intelligence, human speech can serve as a valuable biomarker to diagnose and track the outcomes of mental health conditions such as depression, anxiety disorder or post-traumatic stress disorder. | Image: Texas A&M Engineering

Dr. Theodora Chaspari, assistant professor in the Department of Computer Science and Engineering at Texas A&M University, is working to leverage speech-based technologies and make them more reliable, trustworthy and accountable.

Chaspari recently received the National Science Foundation's (NSF) Faculty Early Career Development (CAREER) Award for her research project titled "Enabling Trustworthy Speech Technologies for Mental Health Care: From Speech Anonymization to Fair Human-centered Machine Intelligence." The NSF CAREER award is the most prestigious recognition given by the NSF to support early-career faculty.

She will use the award to design reliable artificial intelligence (AI) algorithms for the speech-based diagnosis and monitoring of mental health conditions, addressing the three pillars of trust: explainability, privacy and fair decision-making.

When combined with AI, human speech can serve as a valuable biomarker, a measurable indicator of some biological state or condition. Biomarkers can be used to both diagnose and track the outcomes of mental health conditions such as depression, anxiety disorder or post-traumatic stress disorder. By monitoring a person's speech, doctors and researchers can better understand their mental health and precisely predict degradations in the tone of their voice. For example, the speech of people who have been diagnosed with depression is often characterized as flat and monotone.
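The "flat and monotone" quality described above can be approximated computationally by measuring how little the pitch of a voice varies across an utterance. The sketch below is a minimal illustration of that idea, not the project's actual method; it assumes the open-source librosa audio library is available, and the audio file name is hypothetical.

    # Minimal sketch (not the project's pipeline): estimate how "flat" a
    # recording sounds by measuring pitch variability across voiced frames.
    # Assumes librosa is installed; the file name below is hypothetical.
    import librosa
    import numpy as np

    def pitch_variability(path: str) -> float:
        """Return the standard deviation of voiced pitch (in Hz) for a recording."""
        y, sr = librosa.load(path, sr=None)        # load audio at its native sample rate
        f0, voiced_flag, _ = librosa.pyin(         # frame-level fundamental frequency
            y,
            fmin=librosa.note_to_hz("C2"),         # ~65 Hz, low end of typical speech
            fmax=librosa.note_to_hz("C7"),         # generous upper bound
            sr=sr,
        )
        voiced_f0 = f0[voiced_flag]                # keep only frames where voicing was detected
        return float(np.nanstd(voiced_f0))         # smaller spread suggests flatter intonation

    if __name__ == "__main__":
        spread = pitch_variability("sample_utterance.wav")   # hypothetical file
        print(f"Pitch standard deviation: {spread:.1f} Hz")

A feature like this would be only one ingredient among many; clinical-grade monitoring combines many acoustic and linguistic cues and validates them against expert assessments.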

However, the continuous tracking of a person's speech in real time comes with several societal and ethical challenges.

"If we can specifically predict the degradations in their tone, then we should be able to intervene and prevent a relapse of the symptoms from occurring," said Chaspari. "We are developing AI that is not only reliable in terms of precise monitoring but also more human-centered and friendly."

The project has three objectives, and the first challenge the team is addressing is privacy. To ensure that someone's speech data doesn't end up in the wrong hands, Chaspari and her team propose new algorithms that de-identify and transform speech signals so that the data carries no information about a person's identity.
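The article does not detail the de-identification algorithms the team is proposing, but a toy voice transformation gives a flavor of the general idea: alter speaker-identifying characteristics of the signal while keeping the spoken content usable for analysis. The sketch below, assuming librosa and soundfile are installed, simply shifts the pitch of a recording; real speaker anonymization is considerably more involved and must hold up against speaker-verification attacks. File names are hypothetical.

    # Illustrative sketch only: this is not the team's proposed algorithm.
    # A naive pitch shift hints at altering speaker-identifying traits while
    # keeping the speech intelligible. File names are hypothetical.
    import librosa
    import soundfile as sf

    def naive_anonymize(in_path: str, out_path: str, semitones: float = 4.0) -> None:
        """Shift the pitch of a recording to (weakly) obscure the speaker's identity."""
        y, sr = librosa.load(in_path, sr=None)
        shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)
        sf.write(out_path, shifted, sr)

    if __name__ == "__main__":
        naive_anonymize("patient_utterance.wav", "patient_utterance_anon.wav")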

Another big part of the project aims to understand how clinicians interpret and interact with the algorithms as part of their decision-making process when diagnosing or treating a patient. Chaspari is also interested in how much clinicians trust or mistrust the AI, how that trust evolves over time and how their interest in the technology may depend on personality characteristics.

"If you're more open to new experiences, then you might show more of an interest in the AI, and you would like to try it. But this might not be the case for people who have a less open personality," she said.

Funds from this research program will go toward training underserved high school and college students and providing knowledge about ethically applying computing research to sensitive populations. The applications developed through this research will also serve as a vehicle to encourage students to pursue careers in STEM fields.