Dr. Abdullah Muzahid
Dr. Abdullah Muzahid’s research focuses on improving machine learning and decision-making through new hardware developments. Muzahid’s grant-funded research and peer-reviewed publications are advancing how machines learn. | Image: Texas A&M Engineering

Dr. Abdullah Muzahid, assistant professor in the Department of Computer Science and Engineering at Texas A&M University, is improving machine learning through research into the speed, efficiency and trustworthiness of the process. He recently received a grant from the National Science Foundation to continue his work on improving the robustness and reliability of deep neural networks. Muzahid also presented a paper on optimizing the training process of machine-learning applications at the Institute of Electrical and Electronics Engineers’ 29th International Symposium on High-Performance Computer Architecture (HPCA-29).

Muzahid’s research focuses on how computer hardware affects machine learning, a subfield of artificial intelligence in which computer systems learn and adapt without explicit instructions from the user. Machine learning involves two types of operations: inference and training.

“Inference is when the machine tries to predict an outcome,” Muzahid said. “Training is when the computer learns by feeding a large amount of data through an algorithm. We are working to improve both the speed and reliability of computer training.”
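
To make that distinction concrete, here is a minimal sketch in Python of the two operations, using a toy one-layer perceptron rather than anything from Muzahid’s work; all names and values are illustrative.

```python
import numpy as np

# Toy one-layer "model": predict 1 if w . x > 0. Illustrative only; real
# deep neural networks stack many layers and train with gradient descent.
rng = np.random.default_rng(0)
w = rng.normal(size=3)  # the model's learnable parameters

def inference(x):
    """Inference: use the learned parameters to predict an outcome."""
    return 1 if w @ x > 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    """Training: feed data through the algorithm and adjust the parameters."""
    global w
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            w += lr * (y - inference(x)) * x  # classic perceptron update

# Learn a simple rule: the label is 1 when the first coordinate is positive.
samples = rng.normal(size=(50, 3))
labels = (samples[:, 0] > 0).astype(int)
train(samples, labels)
print(inference(np.array([2.0, 0.1, -0.3])))  # expected output: 1
```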

Funding to research the reliability of machine learning

Muzahid was awarded a grant from the National Science Foundation to research the robustness of deep neural networks (DNNs) and improve the reliability of machine decision-making. DNNs are complex computer algorithms that are modeled after the human brain and aid in the machine-learning process. When machine learning and decision-making are trustworthy, users can rely on the machine’s decisions without hesitation.

“It’s very important that machine learning and decision-making are accurate,” Muzahid said. “One example is an autonomous car driving itself down the road. It needs to be able to learn what is in front of it and react appropriately. Unfortunately, machine learning will not be accurate 100% of the time because there is no way to prepare a machine for every possible situation using an infinite number of variables.”

Using the funding from the grant, Muzahid and his team hope to create a method for a machine to check the reliability of its learning. If the reliability is low, the device will alert the user so that the user can confirm or correct the machine’s decisions. 

“An autonomous vehicle is trained to recognize road signs and adjust its driving patterns,” he said. “What happens if the vehicle’s sensors are covered with rain or the signs are partially covered with snow? The hardware we hope to create will be able to recognize that the reliability of the machine learning is low and will alert the operator to prevent a catastrophe.”
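
The article does not describe the mechanism itself. Purely as an illustrative sketch of the general idea, a system can estimate a prediction’s confidence and defer to the operator when confidence falls below a threshold; the function names and threshold below are hypothetical assumptions, not part of the funded research.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.9  # illustrative value, not from the research

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def classify_or_alert(logits, labels):
    """Return a label if the model is confident; otherwise alert the operator.

    A hypothetical stand-in for the reliability check described above; the
    actual hardware mechanism under development is not public.
    """
    probs = softmax(logits)
    best = int(probs.argmax())
    if probs[best] < CONFIDENCE_THRESHOLD:
        return None, "ALERT: low reliability, operator confirmation needed"
    return labels[best], "ok"

# A sign partially covered by snow might produce ambiguous scores like these:
signs = ["stop", "yield", "speed limit"]
print(classify_or_alert(np.array([1.2, 1.0, 0.9]), signs))  # -> (None, alert)
print(classify_or_alert(np.array([6.0, 1.0, 0.5]), signs))  # -> ('stop', 'ok')
```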

Muzahid’s research, conducted in the Programming and Architecture Lab, will first focus broadly on improving the reliability of machine learning before the techniques are applied to devices that impact human lives. He also plans to create hardware that can assist with model debugging and determine when a model is sufficiently trained.

“Improving the trustworthiness of machine learning is vitally important,” Muzahid said. “The application of the hardware we hope to develop could be widespread and increase the reliability of machine learning and decision-making in the future.”

The Programming and Architecture Lab is part of the Texas A&M Engineering Experiment Station, a state agency that solves problems through applied research, development and collaboration with industry, government and academic partners.

Doubling the speed of machine learning

In February, Muzahid presented his research paper titled “Mercury: Accelerating DNN Training by Exploiting Input Similarity” at HPCA-29 in Montreal, Canada.

Machine learning is time and energy intensive because of the complex calculations a computer must complete. Muzahid has developed a method that doubles the speed and efficiency of the process by enabling training algorithms to reuse computations across similar pieces of input data.

“Machine training is very computationally intensive because the computer must do millions of computations for each piece of input data,” Muzahid said. “By allowing the computer to use a previous computation for similar input data, we can save time and energy.” 

During machine learning, a large amount of the input data is similar to other sections of the input, such as the background of a picture. Before Muzahid’s method, computers had to complete a separate computation for each individual piece of data, even when it was nearly identical to data already processed.

“If we use a picture of a bird in the sky as input data, the vast majority of the picture background will be blank sky,” Muzahid said. “After the machine does its computation for the first portion of sky, it is able to recognize the similarity of the remaining sections of the sky and use the same computation.”
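
Mercury itself is a hardware technique, and its details are in the HPCA-29 paper. As a software analogy only, the reuse idea can be sketched with a similarity hash that sends near-identical inputs to the same cache bucket; the random-projection signature and all names below are illustrative assumptions, not the paper’s implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
PROJ = rng.normal(size=(8, 16))  # random projection used as a similarity hash

def signature(x):
    """Locality-sensitive signature: similar vectors tend to share sign bits."""
    return tuple((PROJ @ x > 0).astype(int))

def expensive_compute(x, w):
    """Stand-in for the heavy per-input work (e.g., a layer's dot products)."""
    return w @ x

cache = {}

def compute_with_reuse(x, w):
    """Do the full computation once per bucket; near-duplicates reuse it."""
    key = signature(x)
    if key not in cache:
        cache[key] = expensive_compute(x, w)
    return cache[key]

# Patches of "blank sky" are nearly identical, so they hash to the same bucket
# and the full computation runs only a handful of times instead of 1,000 times.
w = rng.normal(size=(4, 16))
sky = rng.normal(size=16)
patches = [sky + 1e-3 * rng.normal(size=16) for _ in range(1000)]
results = [compute_with_reuse(p, w) for p in patches]
print(f"full computations performed: {len(cache)}")
```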

Muzahid’s research shows that enabling machines to use the same computation for similar input data saves a significant amount of time.

“Our models have cut training time in half,” Muzahid said. “It’s a huge improvement over previous methods.”

This research was conducted with the assistance of the following students in the Department of Computer Science and Engineering: graduate students Vahid Janfaza, Kevin Weston, Moein Razavi Ghods, Shantanu Mandal and Farabi Mahmud, and undergraduate student Alexander Hilty.