Google's Design Mastermind Explains the Future of Android

By Brent Rose

Two years ago, as Google first showed off Android Jelly Bean, we sat down with then-Director of Android User Experience Matias Duarte to discuss where the operating system was heading. Fast forward to this week's Google I/O, where Duarte—now Google's Vice President of Design—introduced Material Design. We had the chance, once again, to ask him about Android's latest design gambit, and what it means for Google's future.

Material Design is a bold move. It lays out a UI framework for an entire ecosystem of devices that Google is developing, from watches to cars. But it also feels so fundamentally grounded in logic and common sense that you wonder why users and designers haven't been demanding Material Design all along.

We haven't seen much of the upcoming L-release of Android, which will hit sometime later this year, so some of these concepts seem abstract for now. Fortunately, Matias was able to give us a deep dive on the philosophical underpinnings of Material Design.

Gizmodo: So, what is Material Design and how did you guys start heading down this path?

Matias Duarte: We had a really big problem. It wasn't just a problem about going beyond phones and tablets, which was clearly something we wanted to do—we wanted to design for all these different screen sizes. And it wasn't just the problem of going to multiple platforms and form-factors, right? It's not just Android and web across all these form-factors. And it also wasn't just the problem of "We want a design system that's good for Google." We wanted it to be a design system that anybody can use to really express their brand and their identity and their needs and capabilities.

All of that is just an enormous design space, and it was super exciting to do. I can't recall an opportunity that anybody's had to really work on that big of a design problem. And it was actually awesome as the teams got together to work on this, because we kind of really leaned into it. The more we would get designers together talking about it, the more they would say, "You know what? If we did just a little bit more, we could solve this problem as well."

And so the problem space we were tackling became more and more ambitious just because people got so excited about the possibility of solving all those problems together. And a really wonderful thing would happen where, in the past, if you tried to solve just one of those problems by itself in isolation, there would be all of these reasons why it was hard to do and all these things you had to change or overcome. But because everybody was really excited about how ambitious the vision was, it became the opposite. Everyone was just eager to give things up.

It was very, "You guys were trying to avoid this because you thought it would piss us off? No! We're willing to change this; let's do the right thing for everybody together." That was both amazing and absolutely essential to make something of this scope and magnitude happen. And so we wanted to do something that had this kind of scope and magnitude, but we also wanted it to be simple. We wanted it to be exciting, modern, and usable. Really, fundamentally usable.


At first, as we were engaging in this problem space, we were talking like designers. "I think things should go this way. I like this attribute." And it was great because we had these different design teams that were working together, and their strengths and different perspectives were pulling in opposite directions and opening everybody's minds up, but it was also like, how the hell do you get everyone together? How do we come together on one idea?

The breakthrough came very literally with this idea of a material metaphor. Of asking the question, "Alright, so, if you were to kind of pull the curtain back and look underneath, what is this thing we're touching?" When you're born your brain is a machine that tries to make sense of the world around it. You're constantly building models of your world. Sometimes they're very sophisticated, intellectual models, but even more fundamentally, very basic, primitive models of like, "Here's a can of Coke. It'll fall if I push it off the edge." Right?

As babies we learn and build models about the physics of the world. And what happens, when we start touching the pixels in software, is that those same parts of your brain are engaged in trying to understand: "What are the physics of this world?"

First off, there's nothing worse than the physics of a world being inconsistent, because it means you're constantly learning—you're constantly a child, because everything is new and a surprise, and you can never settle down into being efficient and optimising.

GIZ: You have to carefully test everything every time you do something new.

MD: That's right. Cognitive science tells us that the difference between children and adults is that children are constantly in a mode of re-evaluating all of their assumptions. Whereas adults' brains have kind of flipped into a mode where they trust their mental model more than their senses, because the models are actually mature enough. That makes adults much faster, much more efficient. That's why adults can focus in ways that children can't.

When our worlds, when our pixels that we touch, don't behave like they actually belong to a universe that has consistent rules, we're making everybody back into children all the time. It's very stressful and it takes a lot of mental energy. So we asked ourselves, "Okay, what is it going to be, so we can have a consistent world?" And as we started asking ourselves that question we realised there are attributes to the real world that are actually pretty nice. In fact, the more of the fundamental attributes of the real world that we bring into it, the more we can utilise the fact that you already learned how the real world worked when you were three years old.

This isn't about mimicking or copying the real world for some sake of artifice. This isn't like the fake wood panelling that you put on your station wagon because it recalled the sense of luxury that a wooden carriage would have had. This is about giving the brain the same cues that the real world gives it, in order to make the brain work less.

When we look at a screen, or a book, or a webpage, processing the language, understanding the letter-forms and the words that make up those letter forms, it's very taxing and heavy for the brain. Icons and hieroglyphics, they're a little bit easier because you don't have to parse symbols and glyphs into vowels and syllables, and those then into language. Each glyph has a precise meaning. But still, you're mapping abstract concepts to the world. Whereas processing objects and their relationships happens at a much deeper, much more primitive level. It's happening way back in the back of your brain, directly connected to your eyes. These are mental processes that very primitive animals go through, so it's very efficient, and we wanted to utilise that in the world that we built.

So we wanted to build a world that had a material, and the material was just physical enough to help users, to create hierarchy and to provide affordances. We wanted the world that we built to be continuous and have physics and motion just like the world around us. When we push something it slides, it has momentum, then it stops, right? When we drop something it accelerates. If I push, motion radiates outward; when I clap, the sound radiates outward. We wanted our world to have continuity of motion as well. Just enough for it to make sense. Not, you know, grand zooming motion just for motion's sake. We didn't want you to be flying through the world, we wanted the world to be moving just enough around you for things to make sense.

So all of this together led us to this system that we call Material Design. It was, "Imagine how we would design with the best taste and judgment as designers, if pixels really were malleable and physical and formable. There's value to using surface, in the right way. There's value to using motion, in the right way. In the same way that when we do graphic design and print design we use colour and contrast and alignment in the right, reserved ways."

And there are lots of little cheats in all of these disciplines as well. When we line things up—a circle and a square for instance—you're not actually going to literally line them up. You're going to scoot that circle out a little bit because you want to optically line them up. So with motion as well, sometimes we exaggerate motion a little bit, sometimes we de-emphasise it. With surfaces, too. We want the surfaces to feel rational. We never want things to flip through the screen or flip through the surfaces underneath them. Everything feels like it's more or less within the thickness of the device which you're holding in your hands. If it's on a bigger screen or a TV, maybe you get a little bit more depth that you can play with, but again, we'll "cheat" a little bit. But it's all in the service of creating a system that's optimised for helping your brain do as little work as possible.


GIZ: So this is hugely ambitious. Google has its fingers in so many elements of the world now—not just phones and tablets but now watches, smoke alarms, cars, robots. Aside from the standard places we expect (phones, etc), where else might we see Material Design pop up? Do you think we'll see it applied to Nest?

MD: Well, we think Material Design provides a palette that anybody can use, any brand can use, to build the best possible experiences. What you saw a lot of yesterday [at the I/O keynote] was Google revealing that palette to the world, and showing how we're going to use it, in a very Google-y, opinionated way. With a lot of white space; a lot of bright, optimistic colours; some cheerful, poppy animations; things that are inherently Google-y. Maybe a little more sophisticated, more modern Google-y than the past Googliness, and with a bit more design savvy, but it's very much Google's style.

Different brands and companies, we think, can use those same material building blocks, and they'll use them in different ways. And that's one of the things that was very important to us. Right now Nest is owned by Google, but is operated like a separate company. They have their own brand aesthetic. It's actually fairly similar to Google. I don't know how they're going to use Material, or if they're going to choose to use it. I hope they will, and if they do I'm sure they'll do it in a way that makes sense for their brand.

The ultimate test of the success of Material Design will be precisely that: How do users and developers embrace it? How well do they find that they can express themselves in that style while still utilising those fundamental building blocks that help build consistency for users?


GIZ: So, when we last spoke, it was two years ago, and the pendulum had just swung way over to one side of aesthetics. You had just released Jelly Bean for the first time, and flatness was the big thing. It was almost like the slate was being wiped clean. "Get rid of as much as possible and flatten everything," not just from you guys but from other companies as well. And now it seems like you're coming back a little bit in terms of your feelings about depth, weight, and things like that.

MD: Well, we've never been completely flat. One of the things that we were talking about earlier is that what you're seeing here in Material Design is the culmination of a lot of thinking that's been happening across Google. Pieces of it starting in Kennedy, other pieces of it starting in Ice Cream Sandwich, then later coming together a little bit more in Jelly Bean and in Google Now. Now we understand it enough, have systemised it and formalised it enough, that we want to share it with everybody.

We found what was ours and what we think is universal. We've gained enough clarity about that. But it's very much the evolution of consistent thinking, because it's the same people who have been noodling on this problem for a while. Ice Cream Sandwich threw out a lot of excess of artificial surfaces and glossiness, but it was never completely flat. We had an intuition back then that edges and surfaces provided value. In Jelly Bean as well and with Google Now, which famously had the cards which we realised were valuable across a number of Google properties. We were taking a little bit of advantage of that sense of surface and the distinction between other surfaces, though it was usually coplanar surfaces.

We knew those things had value, we just didn't have a formal system for understanding what we were doing. And I think that's one of the things that we have to remember: user interface as a discipline is like 20 years old! That's like zero time. You look at graphic design, it's like two thousand years old. That's a couple of orders of magnitude, right? People have learned a lot.

Everything we're doing right now is just the very early stage, where we're making silly and obvious mistakes. When Xerox PARC invented overlapping windows and the mouse and things like that, they were tapping into this idea of surfaces and tangibility, and it provided value. And people didn't necessarily understand what about it was providing value. Some people took it very literally and were like, "The reason desktop metaphors are easy is because they're desktops!"—not because people work in certain ways, with several documents open at a time, and having edges to those documents helps you to understand the relationships between them.

So you have people building things like IBM's Real Things project, which had things like a phone that literally looked like a photograph of a phone on your screen and it had a handset with a curly cord, and you click on it, and the handset would flip over and the cord would stretch out. That didn't help anybody, right? It just made everything insanely cumbersome. And in the desktop, maybe we went kind of overboard with some of our stylings of things where they were maybe a little bit tacky. The wood grain on your Atari 2600, right? Maybe it felt cool at the time, but in retrospect you're like, "Hmm, that's not so good."

But we also have to be careful not to throw the baby out with the bathwater. The value of surfaces and edges is huge, because that's the way our brain is built. Our brain isn't going to change. We have to adapt our designs to our subjects, not fall in love with an idea or a principle or a purity of something. We have to keep trying things and changing and discover what is going to be the most usable thing. And so surfaces are a useful thing, and what we want to do is provide just enough.


GIZ: In terms of animation, what do you use as a guide? It sounds like you're adding a lot of animation back in—which has been largely absent for a while in stock Android. Are you modelling it after real world physics? Is that what you're shooting for?

MD: Mmm-hmm. Yeah, it's kind of the same as the physicality of things. We're trying to figure out what is just enough animation to make things easier for users, not so much that we're just animating for animation's sake. We have this sense that we're holding a thing [i.e. a phone] in our hands. So instead of having this thing be a window that we're diving in and out of, or zooming through a space, or creating some virtual space that punctures through your hand and can't logically co-exist, let's try to use animation to make sure there are no snap-cuts, no teleportation, no sense of "Wait! Where did everything go? Oh, okay, here I am…" But we want to make all those motions happen within the thickness of the surface that we have.

And so we have this guideline that we use. If we want animation to help the user focus, to give them continuity from one scene to another, we want to keep it as simple as possible. So on the big screen, those huge, ripple, touch-feedback effects look enormous. They're really big and they're dramatic and kind of fun. But in reality, in practice when we use them, they're quite subtle. There's a ripple there, but you're almost not realising it consciously. It comes from where you touch it. It radiates outward from where you touch it, and that makes you feel more connected to it. Like it's a little bit more alive and responding to you, but it's not overbearing.
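The ripple Duarte describes boils down to a circle that grows from the touch point and fades as it spreads. This is not Android's actual implementation—just a minimal sketch of the idea, with a hypothetical `ripple_state` helper that a renderer could call once per frame:

```python
# Sketch of a touch ripple: a circle that grows outward from the
# touch point and fades as it expands, driven by one progress value.
# `ripple_state` is a hypothetical helper, not an Android API.

def ripple_state(touch_x, touch_y, max_radius, progress):
    """Return (centre, radius, alpha) for progress in [0, 1]."""
    p = min(max(progress, 0.0), 1.0)
    # Ease-out growth: the ripple decelerates as it spreads,
    # which reads as natural motion without any physics engine.
    radius = max_radius * (1 - (1 - p) ** 2)
    # Fade out as the ripple approaches full size.
    alpha = 1.0 - p
    return (touch_x, touch_y), radius, alpha

# Example: halfway through, the ripple is already past half size
# (ease-out front-loads the motion) and half faded.
centre, radius, alpha = ripple_state(10, 20, 48, 0.5)
```

Keeping the centre pinned to the touch coordinates is what gives the "responding to you" feeling he mentions: the feedback visibly originates where the finger landed.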

GIZ: I assume that when you're modelling stuff off the real world there must be tonnes of physics calculations going on in the background in order to keep the animations realistic. So what happens to Project Butter when all of that is involved? I assume it must take some significant processing power to keep everything running smoothly, or have you guys actually managed to keep it light on the coding side?

MD: Well, first off, one of the wonderful things about computer science is that a lot of it is about finding patterns and approximations, so you don't have to run a simulation of everything, you just find a motion curve that looks right. So then you don't actually have to run a physics engine. Once you know what motion curve you want, you just run the motion curve. A motion curve is basically just a table lookup, and that's pretty much as fast as anything you can do. So if you're going from Point A to Point B, doing a table lookup instead of a linear interpolation is actually no more cost on the processing side.
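Duarte's point can be made concrete. A plausible sketch, assuming a cubic ease-out curve (Android's actual curves may differ): sample the curve once into a table, then each frame is just an index plus a cheap interpolation between neighbouring samples—no forces or masses ever simulated.

```python
# A precomputed "motion curve": sample an ease-out curve once,
# then animate by table lookup instead of running a physics engine.

def make_curve(samples=64):
    # Cubic ease-out: fast start, gentle stop. It *looks* like
    # friction without simulating any forces.
    return [1 - (1 - i / (samples - 1)) ** 3 for i in range(samples)]

CURVE = make_curve()

def position(start, end, t):
    """Animated position at normalised time t in [0, 1]."""
    t = min(max(t, 0.0), 1.0)
    x = t * (len(CURVE) - 1)
    i = int(x)
    if i >= len(CURVE) - 1:
        frac = CURVE[-1]
    else:
        # Cheap linear blend between the two nearest table entries.
        frac = CURVE[i] + (x - i) * (CURVE[i + 1] - CURVE[i])
    return start + (end - start) * frac

# Sliding a panel from 0 to 100 px: by the halfway point in time
# it has covered well over half the distance, then it coasts in.
halfway = position(0, 100, 0.5)
```

The per-frame cost is one multiply, one index, and one blend—exactly the "table lookup is as fast as anything you can do" trade-off he describes.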

And the other thing is that we're not trying to simulate the real world. We're not trying to limit ourselves by the real world. This is the magical thing that happens when we realise that we create a metaphor that speaks to the primal parts of your brain. We have these motion studies where you see these surfaces that will split apart and reform, and we know that no material in the world can do that. It doesn't exist, and maybe it even violates the conservation of mass rules because the material spreads bigger, right?

But the amazing thing is that your visual cortices don't process that. They have no concept of the conservation of mass. What they have a concept of is objects and edges and surfaces. So you look at these things that logically, you know are impossible, and yet they feel right. They feel plausible. You might say, it feels kind of magical—you know, maybe it's Harry Potter land—but it doesn't seem implausible. It can be impossible, but not implausible. And that's the power of it.

So when we talk about the physics of things, we're not trying to copy the real world. We're trying to make it natural for your brain, for your mind, while still unleashing this thing that software can do, which is be this totally mutable, transformable, truly magical type of experience. So that's kind of the line that we're trying to dance on.

Big thanks to Matias Duarte for his time. You can read more about Material Design, as well as peruse Google's published style guidelines.