The Past and Future of Motion Capture: Where It Came From and What's Coming Next

By Tom Pritchard

Over the past decade and a half, motion capture has become a staple of visual effects studios, whether they're working on films, TV, or video games. And with good reason: recording real people can be invaluable in producing more natural and organic CGI.

But mo-cap is one of those things you don't hear a great deal about, certainly not unless you watch behind-the-scenes extras on DVDs. Even then, we only hear about what went on on that particular set, rather than how filmmakers got there and where they're heading in the future.

To get a better idea of where motion capture came from and where it's heading, I sat down with Brett Ineson, CTO and President of performance capture company Animatrik. He's worked in the effects business for over 20 years, with 15 of those focused on performance capture. In that time he's worked on Return of the King, Warcraft, District 9, Gears of War 4, and the upcoming Justice League, to name a few.

We spoke about the origins of motion capture as a medical tool, about reviving actors in Rogue One, the concept of 'cheap' motion capture being basically non-existent, what effect the iPhone X's facial recognition sensors might have, how well we're navigating the uncanny valley, and more.


Giz: Could you just explain briefly the history of mo-cap? How it broke into the mainstream and where we are now?

Brett: Sure. So the actual origins of motion capture are not in the entertainment business, it’s on the medical side actually.

Doctors needed to work out how to treat children with cerebral palsy, and what they wanted to do was surgery to extend their limbs. In order to do that they had to measure their gait, how they walk, so they could use a computer model to see how the adjustments they were making would affect it. Then they'd do the surgery, motion-capture them again, and see how it actually worked. And they'd do a series of that over time. That was the origin of motion capture.

Pretty quickly the entertainment industry saw what they were doing, and of course animators, once 3D animation came along, were all working with the CG version of a skeleton. It wasn’t hard to come up with the idea that we could use that same technology to drive a character skeleton. And I believe one of the very first popular uses of it was a Michael Jackson video with the dancing skeleton [HIStory/Ghosts - ed]. I don’t believe that was the first, but it was one of the first ones that the general public really knew about.


"I think that was the first character using motion capture the world saw that was really believable."

On Gollum


Giz: One of the first big things people seem to think about with mo-cap is Gollum in Lord of the Rings. Was it in fairly common use in the film industry before that, or is it pretty much something that's picked up since then?

Brett: You know, it was starting to be quite common before Lord of the Rings. But I would say that the final product it was being used for was more on the cartoony side.

Most of the companies out there hadn't really mastered the use of it yet, although they were getting quite good. But it really was the wider team on Gollum that made all the pieces fit and created an amazingly believable character. So, you know, a lot went into that.

One, obviously, there's good acting. There was total commitment from the wider producers that they were going to make use of that acting, and so there was a whole bunch of people working pretty hard at developing the tools to make it work. I worked on the motion edit team doing the Gollum shots for the Return of the King movie, so I got quite used to the tools they were building. And you know, I think that was the first character using motion capture the world saw that was really believable.

Giz: What sort of challenges were there back then? You know, before then, since then, just in terms of adoption, cost of technology, things like that?

Brett: So I guess back then the challenges were kind of varied. The camera technology was in its infancy. During the Lord of the Rings days the cameras we used were like a Pulnix security camera. Not like today, where we have such a variety of incredible camera technology and sensors to choose from. Back then it was quite limited.

The software tools were being created while that movie was being made, in a sense. Well, I guess it would have started with the first Lord of the Rings. Some of the tools the animators needed to do their job just weren't available; they were being built on the fly and really realised over the course of those three films. And that was just the Giant [Studios - ed] tool side; there were other companies around the world working in parallel on their own versions.

And one of the biggest challenges, honestly, was a bit of a social one: the animators weren't really sure what to make of this technology. At that time they saw it as competition to their livelihood, and it was met with quite a lot of resistance. Not by all, of course; I was a keyframe animator, that was my career as well, but for whatever reason I really took to it and loved it, and just saw it as another way of animating characters.

Giz: Are they a bit more welcoming of it these days?

Brett: These days they are, yeah. You'll still find hold-outs out there in the industry; there are always going to be people who treat it like a bit of a religion. But I think Gollum was... it was hard to argue with Gollum. And of course there's a whole lot of keyframe animation of Gollum in those films too, incredibly talented animators doing shots, but a lot was mo-capped of course. And it was hard to argue with the quality of that. People felt Gollum was real, and if you compare it to other characters done at the same time, I don't want to call them out by name, but they felt more like cartoon inserts into the film rather than something that's alive.


"The animators weren’t really sure what to make of this technology. At that time they saw it as competition to their livelihood."

On motion capture's early challenges


Giz: Obviously it's got a lot cheaper to do these days. How is it in terms of cost against more traditional effects and prosthetics?

Brett: Well, I think motion capture being cheap is a bit of a misnomer. You can buy lower-cost systems, the barrier to entry is lower, and the manufacturers have done a great job at creating tools that non-experts can use. But there's still quite a lot that goes into a triple-A quality product, so at that level it can still be quite expensive, to be honest.

And I guess what it comes down to is that you can do it cheap, but then you might have problems in post and in the back end, and that starts to get really expensive, so it depends whose budget you're looking at. It might look cheap, but in the end it costs more. So if you spend a little money to do it correctly the first time, what you end up with is efficiency and you get the product quicker and you get to create your edit quicker. And there you can see cost savings realised. But if you try to cut corners, the savings don't come.

That was a little long-winded but hopefully it made sense!

Giz: I noticed on your company website that you did some work on Rogue One. Can you tell me a little bit more about that?

Brett: So Rogue One, we had a small role on that, but we can't fully comment on the work at this time.

Giz: How do you feel about using the tech for stuff like that though; bringing back actors that have either completely changed or are long gone?

Brett: I love it, to be honest. I think it's a fascinating use of the technology, and, you know, the way we're using it right now is not actually right. It's out of need or desperation, but by planning ahead we as an industry can do it much, much better. I wouldn't have insight into what the cost of that sequence was, but my guess is it was very expensive. It comes back to the ability to plan ahead, and of course you can't plan ahead when someone's passed. So I think we should all be taking note of that for the future.


"If you spend a little money to do it correctly the first time, what you end up with is efficiency and you get the product quicker and you get to create your edit quicker."


Giz: Did you do any work on Green Lantern? I couldn't see anything.

Brett: No, no.

Giz: What do you think could be done to avoid the mistakes that film made in terms of the mo-cap and the suits and everything on that line?

Brett: You know, I didn’t see it. Can you give me a little background and maybe I can comment on it?

Giz: Well, I mean, the entire suit is fully CG. Ryan Reynolds wore a mo-cap suit on set, which they then added the hero costume on top of. When you look at the final product, it’s as you said before, it’s quite cartoony. It just doesn’t fit, it looks fake, and the effects in general in that film were quite poor.

Brett: Ah, okay. I think I need to see this movie now — you’ve piqued my interest.

Giz: It’s not a good movie, just to warn you.

Brett: I guess that’s why I haven’t seen it. Well, that explains why they didn’t want to do any on-set mo-cap for Deadpool. But my guess is it was probably due to budget and, you know, maybe not giving the right people the right amount of money to do it correctly. In the industry there are people who tell you they can do it and it’s easy — but they can’t.

Giz: I did hear that at the time they pumped a bunch of money into it in post-production to try and sort the effects out. I guess that relates to what you said before, about how that's not the most efficient way of doing it.

Brett: Yeah, and you know, that's the classic historical issue: the pre-production and post-production VFX budgets are all separate budgets run by separate producers, and that created those types of movies, really. What you see now is much more cohesion, especially with companies like Marvel: they're very active and present in how these things get made, and you can't have producers just going off on their own and saying "oh, that's not in my budget, that's somebody else's problem". They're doing a great job of making these films all together now, and you can really see it in the quality.


"It was hard to argue with Gollum."


Giz: How far through the uncanny valley do you think we are at the moment?

Brett: I think we're probably 70% of the way there, I would guess. As soon as you add even a little bit of style to something, I'd say we're at 100%. You know, if we look at what James Cameron did with Avatar, most people were able to watch that and just believe the characters were real.

And they were pretty much humanoid. You just add a bit of blue and a little bit of different shape to the characters, and that's that. I'd call that 100% there. If you want to do a true digital double, I think we have a couple of examples, like the ones you mentioned in Star Wars, where short sequences are at 95%, I suppose. But for a whole movie, I don't think you would hit that; I think you'd be at 70%. So people are using it sparingly, but it's coming.

Giz: How long do you think it’ll actually be before they can hit that in the whole film?

Brett: Oh I think not even two years probably. Yeah.

Giz: And in terms of going back to the motion capture tech, people tend to have this idea of the actor in a suit, covered in balls with the dot make-up on the face. Is it likely that the tech is going to improve to the point where they can get beyond using that, or is that always going to be one of the things that just has to be done?

Brett: Well, we already have technology where we can do without the markers. The problem is efficiency and things like storage. Without markers, you’re essentially collecting data on everything you see, so in order to store that data somewhere you’re going to need an Amazon computer farm.
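
To put rough numbers on that storage gap, here's a back-of-envelope comparison in Python. Every figure below is an illustrative assumption on my part (camera count, resolution, marker count), not Animatrik's actual specs:

```python
# Back-of-envelope data rates; every figure here is an illustrative assumption.
cameras = 80           # cameras in a large optical capture volume
fps = 120              # a typical mo-cap frame rate
seconds = 8 * 3600     # one shooting day

# Markerless: every camera ships whole frames (say 4 MP at 1 byte per pixel).
raw_bytes = cameras * fps * seconds * 4_000_000
print(f"full frames:   ~{raw_bytes / 1e12:,.0f} TB/day")    # ~1,106 TB/day

# Marker-based: each camera ships ~50 markers as two 4-byte coordinates.
marker_bytes = cameras * fps * seconds * 50 * 2 * 4
print(f"marker coords: ~{marker_bytes / 1e9:,.0f} GB/day")  # ~111 GB/day
```

That's roughly four orders of magnitude: the difference between walking away with a drive and needing, as Ineson puts it, an Amazon computer farm.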

Still, using markers is the most efficient way. For one thing, we can do it in real time. We're not storing tremendous amounts of data because of what's shipped by a motion capture camera. It's essentially a small computer, so it's looking for those markers, and when it finds them it does what we refer to as a 'centroid calculation', which is just a little bit of maths to find the centre of the marker. And then the camera will just ship the actual XY coordinate on the sensor where it saw that marker. It's very lightweight: we can shoot ten people all day long and walk away with that data on a small drive.
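
For the curious, the centroid calculation Ineson describes can be sketched in a few lines of Python. This is a minimal illustration, assuming a grayscale frame where retroreflective markers show up as bright blobs; the threshold value and function name are mine, not Animatrik's pipeline:

```python
import numpy as np
from scipy import ndimage

def marker_centroids(frame: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Sub-pixel (x, y) centroids of bright marker blobs in a grayscale frame.

    Assumes `frame` is normalised to [0, 1] and that retroreflective
    markers appear as near-saturated blobs on a dark background.
    """
    # 1. Keep only candidate marker pixels with a brightness threshold.
    mask = frame > threshold

    # 2. Group connected bright pixels into discrete blobs, one per marker.
    labels, n_blobs = ndimage.label(mask)

    # 3. The "centroid calculation": intensity-weighted centre of each blob,
    #    giving sub-pixel accuracy from only a handful of pixels.
    centres = ndimage.center_of_mass(frame, labels, range(1, n_blobs + 1))

    # ndimage returns (row, col); flip to the conventional (x, y).
    return np.array([(col, row) for row, col in centres])

# Toy frame with two synthetic "markers" on a dark background.
frame = np.zeros((480, 640))
frame[100:104, 200:204] = 1.0
frame[300:305, 400:405] = 0.9
print(marker_centroids(frame))  # two (x, y) pairs, a few bytes per marker
```

Each camera ships only those coordinate pairs per frame rather than the image itself, which is why ten performers can be captured all day and the data still fits on a small drive.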

So even though some of these more exciting technologies exist, it's just not manageable in a production, you know. You might be able to do it for a single technology preview or something small, but not on a big film or a large video game. That being said, we are capturing the surface of the face, and that's in more of a controlled situation; in that case you're only using two cameras, not eighty. So it's less data to store. But we are doing that already.

Giz: And what about multi-spectrum cameras like the facial recognition sensors in the new iPhone? Is that something that could be integrated or is that just completely different?

Brett: Well yeah, it could. Certainly for something like a preview or pre-vis. That would be really great, because right now when you shoot performance capture of an actor we have real-time feedback on the character's body, but real-time preview of the face is only just starting to emerge. There are a couple of companies doing it quite well, but having this technology in phones is going to bring that to a lot more people and I can see a lot of entertainment products being made out of that. You yourself will be able to play Gollum.
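
For a sense of why phone-grade face tracking slots so naturally into real-time preview: trackers of that kind typically emit a small set of expression weights every frame, and posing a character is then just a weighted sum of pre-built shape deltas. Here's a minimal sketch of that blendshape idea, with invented shapes and names rather than any vendor's actual API:

```python
import numpy as np

# A toy blendshape rig: a neutral mesh plus per-expression vertex deltas.
# (Four vertices here; a real face rig has thousands, and dozens of shapes.)
neutral = np.zeros((4, 3))
deltas = {
    "jaw_open": np.array([[0, 0, 0], [0, -1, 0], [0, -1, 0], [0, 0, 0]], float),
    "smile":    np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0], [-1, 0, 0]], float),
}

def pose(weights):
    """Evaluate the rig: neutral + sum of weighted expression deltas."""
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += w * deltas[name]
    return mesh

# A face tracker streams a dict like this every frame; re-evaluating the sum
# is trivially cheap, which is what makes live preview on set practical.
print(pose({"jaw_open": 0.6, "smile": 0.3}))
```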

Giz: I don’t think I’d want to play Gollum, I couldn’t do the voice.

Brett: [laughter] Yeah.

Giz: Looking ahead now in the short and the long term, what advancements do you think will be made with mo-cap tech?

Brett: Let me think here. Well, certainly the real-time phase is coming. I do believe that we will get to the point where we will go without using markers. You know with the speed at which storage is getting faster and cheaper, it’s quite exciting.

Two years ago I would have said, yeah, that's not going to happen, but even what's happened with storage in the last few years is quite remarkable. And the camera sensors are getting more powerful and more cost-effective as well. There could come a point where we're recording 3D surface data on everything we see at all times. So on a set there's a series of cameras; you turn them on and they just capture the entire 3D volume as it unfolds.


"Having this technology in phones is going to bring that to a lot more people and I can see a lot of entertainment products being made out of that. You yourself will be able to play Gollum."

On how multi-spectral sensors, like those in the iPhone X, might affect the industry


Giz: And when you're transferring that to the final result, would it look more realistic than it does now? I'm specifically thinking about video games. Could you get a video game character that looks like a real person?

Brett: Most definitely, yeah. And you know, I think we're already there: you can have a video game character that looks like a real person. The barrier at this moment is how much you can render on a consumer console in real time; there are still limits there. But right now these game companies could spend a lot of time and create something perfectly believable, and if there were a machine that could render it in someone's house, it would be done.

Giz: I think that’s everything I needed to ask. Is there anything you want to add on what you’ve said already?

Brett: Well, I guess just on that last question, since it's still on my mind. You know, the game engines have powerful renderers, but there are some things they're not doing yet that you kind of need to cross that uncanny valley. Things like sub-surface scattering, blood flow beneath the skin, things like that. So I think the industry knows what it has to do; it knows what looks real in an offline render, and the real time just has to catch up.
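
To make that concrete: offline renderers simulate how light actually diffuses beneath the skin, while real-time engines have long leaned on cheap stand-ins such as "wrap lighting" with a reddish tint. Here's a minimal sketch of that approximation; the constants and tint are illustrative, not any particular engine's defaults:

```python
import numpy as np

def wrap_diffuse(n_dot_l, wrap=0.5, scatter_tint=(1.0, 0.3, 0.2)):
    """Cheap skin-shading stand-in for true sub-surface scattering.

    Lambert shading cuts off hard at the terminator; wrap lighting lets
    light reach a bit past it, and the extra light is tinted red to mimic
    scattering through blood-filled tissue.
    """
    n_dot_l = np.asarray(n_dot_l, float)
    wrapped = np.clip((n_dot_l + wrap) / (1.0 + wrap), 0.0, 1.0)
    lambert = np.clip(n_dot_l, 0.0, 1.0)

    # Colour only the light that wrap adds beyond plain Lambert.
    scatter = (wrapped - lambert)[..., None] * np.asarray(scatter_tint)
    return lambert[..., None] + scatter  # RGB per sample

# Sample across the terminator, from fully lit (1) to fully shadowed (-1):
# the red channel falls off more softly than green and blue.
print(wrap_diffuse(np.linspace(1.0, -1.0, 5)))
```

The trick fakes the visible symptom rather than the light transport itself, which is part of why real-time faces can still read as subtly off.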

Giz: Are you allowed to talk about any of the films you’re working on at the moment that haven’t come out yet?

Brett: Erm... I guess the short answer is no. Some of them might give permission so I guess we would have to chat with the [PR firm] guys if they wanted to reach out to the films. We’re currently working on X-Men, Deadpool 2, we’ve finished Justice League... there’s a few, but I guess those are the ones that stand out.

Giz: I guess since Justice League is coming out soon, how was it working on that? I mean there were a lot of CG characters in there from...

Brett: Yeah, it was great. Actually, we shot that movie a little differently, in that we were brought in once they sort of had an edit. And there were some characters they just knew they didn't want to do live in principal photography. They wanted to do those after the fact, so they knew exactly what they needed to get.


"We already have technology where we can do without the markers. The problem is efficiency and things like storage."


So we did motion capture and face surface capture with our partner DI4D on that one. And it was really great. It was a really intimate experience, because it was performance capture only: a much smaller crew, with a director, DOP, and digital effects supervisor collaborating closely with the actors to get very specific shots. It was quite fun.


Goodbyes are awkward, so I'm cutting this off here. I hope everyone found this as fascinating as I did.