Walking Robots

By Lesley Kriewald

Building two-legged robots that can walk—like humans

Science fiction has long promised two-legged, humanlike robots that can walk among us, but they are not yet reality. Self-described sci-fi buff Aaron Ames, assistant professor in the Department of Mechanical Engineering and the Department of Electrical and Computer Engineering, is building such robots himself.

The goal of this research laboratory is to achieve humanlike robotic walking. What does that mean?

Humans are incredibly efficient at negotiating diverse terrain, and robots don't have that capacity yet, especially two-legged robots. There are a million different robots being built all over the world, but they all walk only on flat ground with really stiff gaits. 

We are trying to discover what humans do when they walk, to extract the essential information needed for efficient robotic walking. And not only efficient but also extensible, so we can eventually do things such as climb stairs.

In the end, what we want is a robot that can go into a disaster area—say, a rubble-strewn area—and get through there in a way that wheeled robots can't. Walking robots have the potential to surpass wheeled robots, but walking robots require a lot of sophistication that's just not yet present. 

Walking Robots: AMBER 2.0

AMBER 2.0, the second robot Ames and students have built in the A&M Bipedal Experimental Robotics (AMBER) Lab, exhibits more advanced walking behaviors than its predecessor, AMBER 1.0.

So how are you studying human walking? How do you find that essential information about human movement that you are applying to robots?

Motion capture is actually how all this got started, about two years ago, with an experiment at Berkeley where we did motion capture of human walking.

We're really coming up with a formal description of walking. Looking at the data, we found that humans do very interesting things when they walk, yet in the end they display very simple behavior. After some data analysis, we found that humans basically act like mass-spring-damper systems, the simplest example of a mechanical system. Despite all the complexity going on, humans fall into this simple behavior.
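To make that concrete: a mass-spring-damper is just a mass pulled back toward rest by a spring and slowed by friction. Here is a minimal simulation sketch of such a system; the parameter values are purely illustrative, not drawn from the lab's data or code.

```python
# Minimal mass-spring-damper simulation: the simple mechanical model
# that, per the interview, human walking settles into.
# All parameter values here are illustrative, not measured human data.

m, k, c = 1.0, 100.0, 4.0   # mass (kg), spring stiffness (N/m), damping (N*s/m)
x, v = 0.05, 0.0            # initial displacement (m) and velocity (m/s)
dt = 0.001                  # integration time step (s)

for step in range(2001):
    a = (-k * x - c * v) / m    # Newton's second law: m*a = -k*x - c*v
    v += a * dt                 # semi-implicit Euler update
    x += v * dt
    if step % 250 == 0:
        print(f"t = {step * dt:4.2f} s   x = {x:+.4f} m")
```

Run it and the displacement oscillates with shrinking amplitude, the same damped, settling behavior the human data showed.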

That's a good thing, because there's a sort of hierarchical control in humans: when we learn to do something very well, we move all that processing down to pattern generators in our spinal cords. So we don't really need our brains to walk, which is why we can still walk while texting or reading or doing something else. You don't need vision or anything. What do you need all that cognition for? To switch between different behaviors. If you don't see a step coming and you walk off it, you get that surprised feeling of falling when, suddenly, your body realizes it's not doing what it's meant to be doing.

So we have to look at human data. But robots are finicky: if we took that human data and simply put it into these robotic legs, it wouldn't work, because the legs aren't in humanlike proportions. The robot legs might move through the oscillations in the right way, but the robot wouldn't actually be able to sustain walking. So we look to human data and process it to extract the essential information about walking. Once we have that information, we encode it in our robots.

Human data is all our robots use to get this walking; it's the only input to the system. So if you want a robot to do any human behavior, all you have to do is watch a human do it, apply this algorithm, and the robot does it. We've done this in simulation with stair climbing now, going up and down stairs, walking on flat ground, stopping, and we're going to start on things like running and turning. We have started to build up this template of behaviors.

So that's where humans play a role; they're directly involved in this. 
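The interview doesn't spell out the algorithm, but the flavor of the pipeline can be sketched: fit a simple mass-spring-damper-style curve to one motion-captured joint angle, then hand the fitted curve to the robot's controller as the trajectory to track. Everything below (the damped-oscillator form, the synthetic data, the names) is a hypothetical illustration, not the lab's actual method.

```python
# Hypothetical sketch of the "watch a human, extract the essential
# information, encode it in the robot" pipeline described above.
# The damped-oscillator model and all data here are stand-ins.

import numpy as np
from scipy.optimize import curve_fit

def damped_oscillator(t, amp, alpha, omega, phase, offset):
    """Mass-spring-damper-style response: a decaying sinusoid plus an offset."""
    return amp * np.exp(-alpha * t) * np.cos(omega * t + phase) + offset

# Stand-in for one stride of motion-capture data: a knee angle (radians)
# sampled over one second, with a little sensor noise added.
t = np.linspace(0.0, 1.0, 100)
true_knee = 0.4 * np.exp(-1.5 * t) * np.cos(8.0 * t) + 0.3
mocap_knee = true_knee + np.random.normal(0.0, 0.01, t.shape)

# "Data analysis" step: compress the stride into five parameters.
params, _ = curve_fit(damped_oscillator, t, mocap_knee,
                      p0=[0.4, 1.0, 8.0, 0.0, 0.3])

# "Encode it in the robot" step: the fitted curve becomes the desired
# trajectory a joint controller would track at each control tick.
desired_knee = damped_oscillator(t, *params)
print("fitted parameters:", np.round(params, 3))
```

The payoff of a fit like this is compression: an entire stride reduces to a handful of parameters that a controller can replay, retime, or blend with other behaviors.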

What kinds of robots are you using in your lab?

We've built our own robots here, really just pairs of legs. What's interesting about our first robot, AMBER 1.0 (named for our lab, the A&M Bipedal Experimental Robotics Lab), is that the motor driving it is about the size of a C battery. Imagine how efficient your walk would have to be if your muscles were the size of a battery. Rather than overdesign the thing with huge motors, we're trying to get very efficient walking. That makes things tougher: when surfaces are uneven, or when things don't happen exactly the way you expect, the robot will fall.

And because it's not the most human-looking rig in the world, we also bought a NAO robot. This is a total humanoid robot, and we're applying the same kinds of methods to it to try to achieve walking. It's a little less fun in some ways: it's a closed platform, and since we didn't build it from scratch, we don't know every component in it. But it is an easy way to try out ideas quickly on many more degrees of freedom.

Walking Robots: AMBER 1.0

The researchers from the A&M Bipedal Experimental Robotics (AMBER) Lab evaluate AMBER 1.0’s motion on a computer while the robotic legs “walk” on a treadmill.

I have some research collaborations with NASA in Houston; they're building a humanoid robot, a full-fledged biped. We will have to come in without having designed or built the robot and apply all our methods to it, so playing with this NAO platform, designed by somebody else, gives us a feel for how we could do that with NASA's robot.

You mentioned collaborating with NASA. Is space travel or exploration a possible application for walking robots?

Yes! A very cool possible application for bipedal robots is space exploration. I don't think humans are going to Mars anytime soon, but we are sending robots there. What ended up hampering Spirit, Opportunity, and other rovers is that they can't get around very well: they got stuck in the soil and had trouble moving around. The Curiosity rover that landed on Mars in August promises more mobility than previous rovers, but it's fundamentally limited because it is wheeled.

Legs have the potential to get the scientists to the places where they can make the discoveries they really want to make—going through the polar ice caps or going down into craters. There's a huge potential there.

So the advantage of robots on legs over wheels is clear. What's the advantage of having robots on two legs instead of four (or even more)?

Bipeds do let you go places even quadrupeds can't. But there's a more important reason than that: if robots were the only goal, maybe four legs would be fine and we would build four-legged robots. But understanding human motion, human-leg motion, means we can apply these ideas wherever humans and robots interact.

So another big application area we're looking at is prosthetic design. If you understand exactly how a human walks, and you can realize that in robotic motion in a very precise fashion, then you can imagine building prosthetic devices that let humans do all those things. That's especially true for double amputees, who have huge mobility issues: even with the current state of prosthetics, they have to use canes to get around, and it's very difficult.

It's amazing: double amputees really want to try these things out, because it's so hard for them to do anything in their everyday lives. So we can take care of two birds with one stone. We're building robots that can do amazing things, and that gives us the potential to build prosthetic devices such as exoskeletons, rehabilitation devices, and orthotics. There's a whole wealth of applications if you understand how bipeds locomote.

So these robotic legs walking in place on a treadmill are the simplest model to understand before we move to robotic legs that can actually move forward. The elemental unit here is human data, so it's not surprising that the movement looks fairly human. That same grounding in human data feeds into the prosthetics and everything else, because you can look at the behavior of a healthy human leg and try to mimic that behavior directly through these formal methods.

So what brought you to robots?

Science fiction. I'm a huge sci-fi buff. I probably have thousands of sci-fi books on my shelves, and I've just been fascinated by making robots that can do what humans do. It's funny: Even in my application to graduate school, I said I wanted to work on bipedal robots. And then in graduate school, I studied formal math and theory so that I was finally able to take all those tools and come back to robotic walking again the way I really wanted to do it. 

And what I always wanted to do was really understand it. I don't want to just make it walk; I want to know exactly why it's walking. What are the fundamental mechanisms humans use? The amazing thing is that what humans do is incredibly simple. We discovered recently that humans basically act like masses and springs. That's what we do: despite all our complexity, we settle into this elementary behavior. And it's that understanding that lets us understand robotic walking. We can use it to make the robot do the same thing, turn that into thousands of lines of code, and then the robot starts to walk. You can see it in the knee behavior of these legs: there's a vibration after the foot strikes the ground. The "knee" is absorbing the impact that occurs with every step.

Walking really is this delicate balance, too—being on the verge of being unstable. That's what makes it really hard. Walking can be described as controlled falling. Humans sort of throw their legs forward and just know that they're going to catch themselves. That's what makes it so difficult to do on robots. 

What are some of the hurdles to traversing flat ground or going over rocky surfaces? Where is your research going next?

Our first goal is to start to get other behaviors like walking up and down stairs. There's a difference between steady-state human behaviors—things that we do on a regular basis, such as climbing stairs—and walking over rocks. 

AMBER 2.0, our second robot, walks off the treadmill and does more interesting walking behaviors, moving forward as it walks in a circle. We'll determine how to extrapolate all this knowledge to get to things like stepping over stones and walking on uneven terrain. And then you need a visual processing system in the robot. 

So there's a whole process going on here. But eventually, I think all that will allow us to understand exactly how to navigate any environment by essentially extrapolating between all these behaviors. Yes, there are huge challenges, but extrapolation is what we're counting on. Once we understand all these basic things, then we can potentially look at a human going through a rubble-strewn area and how a human handles an uneven area, and then get the robot to operate the same way. 

And then the robots will be able to switch between all these behaviors, and suddenly you're much closer to having robots able to do these things that science fiction promised me when I was growing up. That's the goal. And as far as I know, no other methods used today really have that extensibility. I think it's fair to say that this is the first one that's really able to do that.

Dr. Aaron Ames
Assistant Professor
Mechanical Engineering | Electrical & Computer Engineering
979.845.5243