Sungtae Shin, a doctoral student in Texas A&M University’s Department of Mechanical Engineering, has been researching myoelectric interfacing, the reading of electrical signals from muscles, as part of a project to create exoskeletons for physical rehabilitation. In an important step toward that goal, Shin has successfully used a myoelectric controller worn like an armband to operate a robotic arm.
“People have seen automatic prosthetics with basic static motion, like hands closing or hands opening, and that’s a good first stage, but that’s not good enough for typical human activity,” Shin said.
Dr. Reza Langari, head of the Laboratory for Control, Robotics and Animation (LCRA), which is developing the exoskeleton, and head of the Department of Engineering Technology and Industrial Distribution, explained that the overall goal of the project is a system that can figure out what a person is trying to do and assist where needed.
“You would be able to use the myoelectric signal to detect what the intended muscle activity is,” Langari said. “Even if the person cannot necessarily achieve that, you still extract the information and assist the person in moving.”
Though full implementation in a working exoskeleton is further down the road, Langari believes that people will be able to see the technology’s potential in the early success Shin has had.
“Even if the robot is not a true exoskeleton and it isn’t attached to the person’s arm, it can still show that we are successful in interpreting the human motion intent and activating an electromechanical device to operate on that basis,” Langari said.
The concept behind a myoelectric interface is simple enough: the armband reads the electrical signals the brain sends to the muscles in order to infer what the muscles are trying to do. Developing a system actually capable of doing that, however, is an extremely complicated process.
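To make that concrete, here is a minimal sketch, in Python, of what such a recognition pipeline typically looks like: raw EMG is cut into short windows, a few classic time-domain features are computed per channel, and a trained classifier labels each window with a gesture. The channel count, window size, gesture labels, and the use of scikit-learn’s linear discriminant analysis are all illustrative assumptions, not details of Shin’s system.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

WINDOW = 200  # samples per analysis window (e.g., 100 ms at 2 kHz; assumed)
GESTURES = ["rest", "hand_open", "hand_close", "wrist_flex", "wrist_extend"]

def features(window: np.ndarray) -> np.ndarray:
    """Classic time-domain EMG features, computed per channel."""
    mav = np.mean(np.abs(window), axis=0)                        # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)         # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)   # zero-crossing count
    return np.concatenate([mav, wl, zc])

def recognize(stream: np.ndarray, clf: LinearDiscriminantAnalysis) -> list:
    """Slide a window over raw EMG (samples x channels) and label each window.

    Assumes clf was already trained on feature vectors with integer labels
    0..len(GESTURES)-1; collecting that labeled training data is a separate step.
    """
    labels = []
    for start in range(0, len(stream) - WINDOW + 1, WINDOW):
        x = features(stream[start:start + WINDOW])
        labels.append(GESTURES[int(clf.predict(x.reshape(1, -1))[0])])
    return labels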
“The electromyographic signal is, if you look at it, just a huge mess,” said Langari. “It just looks like a bunch of noise.” He compared it to standing in the middle of a loud factory with a supervisor giving you instructions nearby, but in a language you don’t understand.
“It’s taken Sungtae almost two years to get to the point where he can command the robot relatively smoothly,” Langari said. “That’s a fairly significant achievement. Nobody else has been able to do that at this point. You have to really use some sophisticated noise-canceling algorithms to eliminate all the unwanted information. How you decipher what they say, even after you have eliminated all the background noise, is not trivial. How do you make sure that what you’re eliminating is noise and not important information?”
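The article doesn’t describe the team’s algorithms, but a standard first step in any EMG pipeline gives a feel for the problem: most of the muscle signal’s energy lies roughly between 20 and 450 Hz, so a band-pass filter plus a notch at the power-line frequency strips out much of what “looks like a bunch of noise” without throwing away the signal itself. A sketch using SciPy, with an assumed 2 kHz sampling rate:

import numpy as np
from scipy import signal

FS = 2000.0  # assumed sampling rate in Hz

def clean_emg(raw: np.ndarray) -> np.ndarray:
    """Band-pass 20-450 Hz and notch out 60 Hz mains interference."""
    # 4th-order Butterworth band-pass, applied forward-backward (zero phase)
    b, a = signal.butter(4, [20.0, 450.0], btype="bandpass", fs=FS)
    x = signal.filtfilt(b, a, raw, axis=0)
    # Narrow notch at the 60 Hz power-line frequency (quality factor 30)
    b, a = signal.iirnotch(60.0, Q=30.0, fs=FS)
    return signal.filtfilt(b, a, x, axis=0)

The harder question Langari raises, whether what you discard is truly noise rather than information, is exactly what simple fixed filters like this cannot answer on their own.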
Shin did his undergraduate work at Sejong University in Seoul, in his home country of South Korea, where he first learned about Texas A&M from faculty members who are former students. He also spent time at the Korea Institute of Science and Technology and a year at Texas A&M University at Qatar before coming to College Station.
Now, two years into the project, Shin believes the technology’s potential extends far beyond the original idea. He sees the armband-like device as a possible input method for a range of applications, and notes that while there has been significant research into virtual and augmented reality, far less work has gone into input devices for those systems.
“I think in five or 10 years this type of technology will be really big,” Shin said. “I think virtual reality and augmented reality need an interface device for dynamic motion. This system could be used instead of a computer mouse or a controller.”
Systems that track dynamic motion have several advantages over the input devices most commonly used today: they are smoother and more precise than typical controllers, and more capable than typed commands. Some dynamic-motion technologies are already on the market, but they rely on visual tracking with cameras, like the Xbox Kinect. A myoelectric system would eliminate some of the challenges those systems face.
“A myoelectric interface comes from the gesture itself,” Shin said. “All gestures involve muscle activity. So if we can catch this signal and recognize this pattern, it’s more valuable than using a vision system. Vision systems have obstacles. You have to use a terminal, the lighting can make a difference, the size of the hands matters, the point of view of the camera matters…but with the myoelectric signal you don’t have those limits. Gesturing is universal.”
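As a rough illustration of the mouse-replacement idea, the sketch below maps a stream of recognized gestures (such as the labels produced by the earlier pipeline) to input events an application could consume. The gesture names and the mapping are invented for this example; nothing here comes from the LCRA system.

from typing import Callable

# Map recognized gestures to input actions an application could consume.
ACTIONS = {
    "hand_close":   "button_down",   # like pressing the mouse button
    "hand_open":    "button_up",     # like releasing it
    "wrist_flex":   "move_left",
    "wrist_extend": "move_right",
    "rest":         "idle",
}

def dispatch(labels: list, handler: Callable[[str], None]) -> None:
    """Forward each change in gesture to the application as an input event."""
    previous = None
    for label in labels:
        action = ACTIONS.get(label, "idle")
        if action != previous:       # emit only on transitions, not every window
            handler(action)
            previous = action

# Example: print the event stream produced by a labeled gesture sequence.
dispatch(["rest", "wrist_flex", "wrist_flex", "hand_close"], print)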
Beyond input devices and human assistance, Langari also sees potential for dynamic-motion sensing as a way to teach robots new tasks.
“If you teach a child to do something, you don’t just tell them how to do it, you also actually show them how to do it,” he said. “We want to have the capability for a robot to do that. In factories you might want to show a robot how to do new tasks that have been done by humans. To translate what the human does, you first have to understand what the human is doing with their arms and hands and then program the robot to do it. This way, we’re going directly from human physical motion to robotic physical motion.”
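In its simplest form, that idea, often called programming by demonstration, amounts to recording a human motion and replaying it on the robot. The toy sketch below assumes a hypothetical RobotArm driver invented for this example; a real system would also need to retarget human joint angles onto the robot’s own geometry.

import time

class RobotArm:
    """Stand-in for a real robot driver; the real API would differ."""
    def set_joint_angles(self, angles: list) -> None:
        print(f"moving joints to {angles}")

def replay(trajectory: list, arm: RobotArm, dt: float = 0.05) -> None:
    """Send each recorded human pose to the robot at a fixed rate."""
    for angles in trajectory:
        arm.set_joint_angles(angles)  # human pose -> robot pose, joint by joint
        time.sleep(dt)

# Example: a two-joint "reach" demonstrated by a person, replayed on the arm.
demo = [[0.0, 0.0], [0.2, 0.1], [0.4, 0.3], [0.6, 0.5]]
replay(demo, RobotArm())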
Note: While Shin develops the control side, other members of the LCRA's exoskeleton project are working on the exoskeleton itself. A story about their work was recently featured in The Conversation.