There are few gambles in the tech world as big as spending billions to build a new computer processor from scratch. Former AMD board member Robert Palmer supposedly compared it to Russian roulette: “You put a gun to your head, pull the trigger, and find out four years later if you blew your brains out.” Six years ago AMD loaded the gun and pulled the trigger, dramatically restructuring itself in a mad bid to escape a disaster of its own making. Now we’ve seen the results, and instead of dying, AMD has a savvy new CPU microarchitecture, Zen, that’s the foundation of the shockingly good new series of Ryzen processors. They’re so good, in fact, that they could pose a real challenge to Intel’s incumbent dominance and change what the computer market looks like for the next few years.
“I think Intel kind of woke up and said ‘hey you know we’re in a battle all of a sudden,’” Linley Gwennap, an analyst and editor of the Microprocessor Report, an industry magazine, told Gizmodo. “So they’re not going to just stand by and let AMD storm into the market.”
But Intel has its work cut out for it. In January at CES in Las Vegas, I met with both companies and the mood could not have been more different. Intel seemed to spend the entire consumer electronics show apologizing for Spectre and Meltdown, a series of catastrophic security vulnerabilities that affected every one of the company’s CPUs. It seemed to barely have time to tease all the cool stuff it had coming out in the next few months. AMD, meanwhile, was a peacock—fluttering its feathers and bragging about its plans to continue one of the most aggressive launches in CPU history. The fact that it was vulnerable to Meltdown was almost an aside. AMD was way too busy being thrilled to be back, doing something very cool, and not being the one with all the bad press for once.
“I think it’s been very successful when you look at where they were a year ago,” Gwennap said of AMD. Because even just a year ago AMD was in rough shape. According to Gwennap, AMD had as little as 1.5 percent of the server market share last year. Intel had the other 98.5 percent. Sure, it was making the chips that power every single Xbox One and PS4, and yes, it’s the exclusive provider of GPUs for Apple machines, but to anyone who was paying attention, it sure seemed like AMD was a shadow of its former self. After all, the company was once considered Intel’s only serious rival in the CPU space, and until Zen was announced, AMD had been virtually silent on that front, effectively ceding the multi-billion-dollar market for the chips that serve as the brains of everything from your computer to the servers powering your favourite websites.
AMD’s last big push in the CPU space came with a microarchitecture called Bulldozer back in 2011. In the tech world that’s an eternity. To put it in perspective, the same year AMD was last seriously attempting to compete with Intel, Apple launched the iPhone 4s, and Intel launched the second generation of its Core architecture. Intel is now on generation eight and expected to launch its ninth by the end of the year. It’s been eons since Intel and AMD were serious competitors.
One of the biggest reasons it’s taken so long for AMD to compete again is that its last big CPU launch was disastrous. Bulldozer obviously wasn’t intended to be awful. “It wasn’t as if somebody architected something that people thought wouldn’t be successful,” Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, told Gizmodo. Before becoming an analyst Moorhead was a VP at AMD for nearly ten years, departing around the same time Bulldozer came to market.
According to Moorhead, Bulldozer was always intended to be a high-octane kind of CPU that blazed past its Intel competitors thanks to its high clock rate (or clockspeed). The higher the clock rate, the more cycles of processing a CPU can potentially undergo in a second, and according to Moorhead, AMD had planned a clock rate of 5GHz—five billion cycles every second. Intel’s competing CPUs at the time shipped with much lower clock rates—only a handful could hit 4GHz.
“That’s why when people say ‘oh we didn’t get out of Bulldozer what we would have wanted’ it’s because that was supposed to be a very high frequency part,” Moorhead said. In fact, no CPU in general production from AMD or Intel has ever hit a clock rate of 5GHz or higher. The only CPU ever to go into production with such a high clock rate is the IBM zEC12, released in 2012 and intended solely for IBM mainframes.
“It was never really competitive.”
When Bulldozer went into production it had a base clock rate of just 3.6GHz with a turbo clock rate (which kicks in when the processor needs to crank through a little data very quickly) of 4.2GHz. And even that respectable clock rate couldn’t compete with Intel’s offerings in the same space. The reaction from reviewers was brutal.
Anandtech’s Anand Lal Shimpi wrote that he was “not sure it’s quite ready for prime time” and that “Bulldozer simply does not perform.” Tom’s Hardware noted that it was often “embarrassingly bested by its predecessor” while Tweaktown lamented that “you can’t help but almost feel disappointed with what’s going on here today.”
When CNET did a roundup of the geekier reviews of Bulldozer, it reported that “Bulldozer does not appear to offer AMD a competitive resurgence.” But it was at least a little hopeful AMD could stick around in the CPU space, if for no other reason than “to keep Intel honest.”
Yet that didn’t happen. AMD had come out of the gate with a troubled architecture, and the gap between it and Intel only grew wider with each successive annual iteration. The company tried “to start with something that doesn’t work well and then kludge it up to make it better,” Gwennap told Gizmodo. But with each new iteration based on the already underperforming Bulldozer architecture, it couldn’t keep up. “It never really was competitive.”
It takes guts
And a CPU absolutely must be competitive—either cheaper, faster, or more power efficient. It is the core brain that powers every computer you use—from your phone to your laptop to your TV. A CPU is defined by its microprocessor—which is in many cases a synonym for CPU. It’s a wafer of silicon that has billions of microscopic circuits etched into its surface and connected to one another via copper interconnects. The specific way a CPU’s silicon microcircuits—or transistors—are arranged and connected is called its “architecture.”
The very first microprocessor, the Intel 4004 released in 1971, was built on a process, or manufacturing node, of 10,000nm. That’s about the width of a very thin strand of human hair. The Ryzen CPU is built on a 14nm process. A smaller process means signals travel shorter distances across those copper connections—effectively meaning that the smaller a CPU’s features get, the faster it can be. But the smaller a CPU, the more delicate it becomes. So in the bid to make the fastest and most powerful processors, CPU designers must also make them smaller and more intricate. And that means always optimising and seeking better efficiencies.
Every CPU is also designed for a specific “instruction set”—the most basic code that works between a CPU and a device’s software. Common CPU instruction sets you’ve heard of are ARM and x86. ARM CPUs are usually found in phones and tablets—the CPUs that power your Samsung Galaxy S9 and your iPhone X are ARM-based processors. x86 CPUs are usually reserved for computing devices that need a lot more power and a little less battery life. Intel and AMD’s CPUs for laptops, desktops, and servers are x86, including AMD’s new batch of Ryzen processors.
Beyond the instruction set, CPUs are actually pretty simple, made up of a few key components: cores, CPU caches, and the system bus. The cores are the engine of the CPU—they do all the actual processing, and when you talk about the clock rate of a CPU you’re talking about how fast those cores are operating. But the cores are very busy churning through all that data you feed them, so they need to cache bits of it away for super quick reference.
This is where CPU caches come in. They’re the CPU’s own onboard memory. When you glance at the specs for a computer you’re planning to buy you might see a mention of these caches, usually labelled L1, L2, and L3. They’re important because they’re what the cores actually read and write data to while processing. That data, once processed, is then sent back out to the computer’s general memory (also known as RAM), then possibly on to the storage drive, and it all gets to your eyeballs by way of the graphics processing unit.
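If you want a rough feel for why those caches matter, here’s a small Python sketch (standard library only; the array size and stride are arbitrary illustrative numbers) that sums the same list twice—once in order, where neighbouring elements fetched from memory get reused, and once by hopping around, which wastes most of each fetch. Python adds plenty of overhead of its own, so the gap is far smaller than it would be in C, but the sequential pass usually comes out quicker:

```python
import time

N = 1 << 22                      # ~4.2 million integers
data = list(range(N))

def sequential_sum(xs):
    """Walk the list in order: neighbouring elements share cache lines."""
    total = 0
    for x in xs:
        total += x
    return total

def strided_sum(xs, stride=4096):
    """Visit every element, but hop 4,096 slots at a time: poor locality."""
    total = 0
    for start in range(stride):
        for i in range(start, len(xs), stride):
            total += xs[i]
    return total

t0 = time.perf_counter(); a = sequential_sum(data); t1 = time.perf_counter()
b = strided_sum(data);    t2 = time.perf_counter()
assert a == b                    # identical work, identical answer
print(f"sequential: {t1 - t0:.2f}s  strided: {t2 - t1:.2f}s")
```

Both passes do exactly the same additions—the only difference is the order the cores ask for the data, and thus how often they find it already sitting in a cache.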
The CPU caches and the cores communicate with one another via the system bus. It’s called a bus because it’s essentially busing information from one part of the CPU to another. And sometimes it’s not just moving data between the cores and the CPU caches. A system bus will also move data to and from a graphics processor if it’s squeezed onto the same chip. At its most rudimentary, consider a system bus to be like that little green board you used to snap all the Lego bricks onto.
There are a lot of different kinds of system buses, and every major CPU maker has its own system bus architecture. Intel has what it calls the Front Side Bus, and until Zen, AMD used tech called HyperTransport.
Now, focusing on any one of these components can improve the speed of a CPU. When AMD created Bulldozer it put all its effort into the clock rate of the CPU’s cores. The clock rate is the frequency at which a central processing unit can actually process cycles of data. More cycles per second means a faster CPU, and for a very long time improving the clock rate was the primary goal of designers.
But in the early 2000s Intel brought a CPU to market with a clock rate of 3GHz, and since then nearly all CPUs have hovered around that figure. One of the reasons for that stall is that the higher the clock rate, the more heat the CPU produces. Getting rid of all that heat can be costly—especially if you’re trying to pack that CPU into something super slim and fanless like a Surface Pro.
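The heat problem falls out of a back-of-the-envelope formula chip designers lean on: the dynamic power a chip burns scales roughly with capacitance times voltage squared times frequency. The sketch below uses made-up, normalised numbers (not actual AMD or Intel figures) to show why a 5GHz part is so much harder to cool than a 3GHz one—higher clocks typically demand higher voltage, and voltage counts twice:

```python
def dynamic_power(capacitance, voltage, frequency_hz):
    """Classic CMOS approximation: P is proportional to C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency_hz

# A normalised 3GHz part versus a hypothetical 5GHz part that needs
# roughly 20 percent more voltage to hold its clocks stable.
base = dynamic_power(1.0, 1.0, 3.0e9)
fast = dynamic_power(1.0, 1.2, 5.0e9)

print(f"The 5GHz part draws about {fast / base:.1f}x the power")  # about 2.4x
```

A 67 percent clock bump ends up costing well over double the power, which is why frequency alone stopped being a winning strategy.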
So in recent years CPU designers have turned to other parts of the CPU to improve speed, and when AMD made the decision to essentially scrap Bulldozer and put most of its resources into what would become Zen, one particular component got the brunt of the focus: a new system bus architecture called Infinity Fabric.
But before AMD settled on Infinity Fabric, before it even settled on creating a new CPU and acknowledging Bulldozer’s failure, the company needed a new CPU design team.
Their powers combined
Enter Mark Papermaster, current Chief Technology Officer for AMD. He joined the company in late 2011, just after Bulldozer’s launch, and according to him “it was not a coincidence.” Papermaster had been brought in to help save a company that saw its stock price plummet more than 60 percent in a year thanks in part to Bulldozer’s poor performance.
Before joining AMD Papermaster was primarily noteworthy for two things. In 2008 he was the subject of a highly publicised lawsuit with IBM, where he’d worked since 1982. IBM was concerned that Papermaster, then a recent hire at Apple, would potentially carry trade secrets with him from IBM to Apple. The case settled out of court. Then Papermaster made headlines again in 2010, when he reportedly left Apple after just 15 months due to disagreements with then-CEO Steve Jobs.
Papermaster entered AMD in October 2011 and the company seemed to be in a state of free fall, with major rivals like Qualcomm, Apple, and Samsung snapping up dissatisfied employees. The flight of employees struck a nerve with some. “That just breaks my heart,” AMD founder Jerry Sanders told Forbes in February 2012.
The company was in disarray and Papermaster was himself contributing to it. Not because of his previous reputation, but because one of his first orders of business was to dramatically restructure the CPU design team. He’d seen how badly Bulldozer was doing, and it was clear to him it wasn’t the processor AMD needed to succeed. “I saw the market going to high performance,” he told Gizmodo. “To do that we needed a new CPU.” Instead of something that could just handle your web browsing, Papermaster was certain the market needed something that could handle gaming, or processing huge 3D files, or rendering big 4K videos.
And for Papermaster, a new CPU meant a major restructuring of AMD’s design group. Traditionally the CPU design group was broken up into two basic teams. “Prior to the Zen architecture they really had two kind of parallel design tracks,” Natalie Jerger, a professor of Electrical and Computer Engineering at the University of Toronto, told Gizmodo. One processor was intended for low-powered laptops and desktops—the cheapest of the cheap. The other was intended for servers and high-end desktops. The server-class processors were Bulldozer, and the low-power CPUs were what was called the Bobcat family.
“And so what [Papermaster] did was he merged these two teams together,” Joe Macri, Corporate Vice President and Product CTO at AMD, told Gizmodo. Macri was one of the leaders of that new, larger team.
To hear him tell it, the merging of the two teams (and the removal of anyone made redundant) seemed like a necessity for the task Papermaster assigned to the larger group. “We didn’t need to start from scratch,” Macri said. “So that was really more of a merge than taking an existing team, rip it apart, and put it back together. But in the process we got a lot of efficiencies. We were able to apply more people to a single function and get much more design bandwidth.” Bandwidth that would be necessary to accomplish Papermaster’s goal of building a CPU that was in every way better than the rubbish AMD had just released.
Papermaster was soon aided by another new exec at AMD: Lisa Su, who joined as Senior Vice President and General Manager in January 2012. Su had come from Freescale, another chipmaker that has since merged with NXP Semiconductors. Her first focus was on AMD’s graphics and gaming markets—where the company was actually having some successes. Su and then-CEO Rory Read secured deals with both Sony and Microsoft to create the chips at the heart of the PlayStation 4 and Xbox One.
Those deals were enormous, and many sources cite AMD’s work on custom chips for Sony and Microsoft as one of the reasons the company both survived the CPU drought and actually managed to produce such an effective CPU with its current architecture, Zen.
Su’s work earned her quick notice and by October of 2014, a little more than two years after she’d joined AMD, she was elevated to CEO. Su and Papermaster then proceeded to pour resources into the development of Zen. According to Macri it was over two million man-hours spread across four years.
For Papermaster, putting all those resources into a new CPU design, and having that team effectively ignore Bulldozer and its successive iterations, was absolutely vital. He needed a team that would have the breathing room “to get those bright ideas that they had.” He felt that some of AMD’s coolest ideas for new CPUs “had been on the shelf because there had been such a demanding schedule of new revision after new revision.”
“You know sometimes you can get a ‘let’s do a from scratch design and throw everything out.’ The team didn’t do that.” Instead the team pulled from AMD’s extensive back catalog to use ideas that hadn’t been given a lot of focus over the years. One such idea was improving on the system bus architecture HyperTransport and creating a new one: Infinity Fabric.
Mark Papermaster told Gizmodo AMD called it “the hidden gem,” while Jim Anderson, senior vice president and general manager of AMD’s computing and graphics group, told Gizmodo it was the “secret sauce.”
This is where a talented engineer named Jim Keller comes in. He’s a mythical sort of figure within AMD. Nearly every person I spoke to cited Keller as one of the primary people behind the success of Zen’s design, and importantly behind the design of Infinity Fabric.
Papermaster hired Keller in 2012 and put him in charge of the processor group. Keller was no stranger to AMD. He’d worked there previously in the ’90s, and his short tenure had achieved legendary status in the geeky processor community online, where he is credited as the principal figure behind AMD’s K8 microarchitecture, which allowed the company to be viciously competitive with Intel in the late ’90s and early 2000s.
Keller and Papermaster had worked at Apple together, where Keller is credited with designing the first CPUs Apple built itself for its iPhones and iPads. When he came to AMD, both he and his new employer agreed to a limited-time contract. He would work at AMD for three years, and at the end of that contract he would depart, which he did in 2015. He now designs chips for the autopilot feature in Tesla’s cars, and declined to comment for this story.
Keller—and in particular his work on mobile CPUs and his previous work on HyperTransport—was key. Without Keller to speak for himself, we’re left with glowing praise from AMD execs like Mark Papermaster and Jim Anderson, and from analysts like Linley Gwennap, who said, “I think Jim and his team really has turned around the company from a technology standpoint.”
In a rare public 2014 interview alongside Papermaster, you start to get a sense of Keller’s enthusiasm and motivation. “I joined AMD because I love processor design and I love complicated systems design and AMD was looking at taking a big swing on the next generation,” Keller said.
Keller seems calm and polished on the little stage, but he tends to ramp up in response to geekier questions—growing sharper and more animated as he talks about architectures and pipelines and cores and the other supremely technical stuff that most of us would yawn at.
The interview is a great look at where Zen was in 2014—three years before its launch. At the time it was only approximately two years into its development, and still being designed alongside K12—AMD’s planned next generation ARM CPU. In the interview, Keller actually cites that co-development as an asset to Zen and specifically to the cores at the heart of each Zen CPU.
However, Keller doesn’t hint at the most important component until near the end of the 15-minute session, when he mentions a next generation “fabric” that would be found in every CPU from mobile devices up to servers and that would allow AMD to scale performance at an enviable rate. That fabric would be known, by 2017, as Infinity Fabric.
“AMD has found a way to have their cake and eat it too.”
He explains that the fabric would allow AMD to employ a “chiplet” approach to design. Instead of building one processor on a single die, AMD could package a series of smaller dies together—connecting them via the Infinity Fabric system bus. This would make the CPUs way cheaper to produce and, ideally, just as fast as a CPU—like AMD’s previous designs and most of Intel’s—that puts all the cores on a single die.
“Historically,” analyst Patrick Moorhead told Gizmodo, “when you put things in packages it slows down the process. But AMD has found a way to have their cake and eat it too by coming up with this packaging that sets very high performance.”
And this packaging also meant AMD was designing a very different kind of CPU from its competitor, with a different, and according to AMD, much more efficient methodology. “We came up with a very regular structure for it,” Joe Macri told Gizmodo. “We basically had a data point and a control point and everybody that bolted into [Infinity Fabric] bolted into it the same way.”
Macri claims Infinity Fabric actually sped up design of the processors across the board. The modular design it gave the Zen architecture meant AMD didn’t need to reinvent the wheel for every kind of CPU. “One of the biggest challenges in designing a core is actually validating it,” Macri said. And thanks to Infinity Fabric AMD could quickly validate, or confirm the operation of, a whole slew of slightly varied designs ranging across their entire offering of Zen processors.
Think of Infinity Fabric as the little snaps in Lego blocks and all the components of a CPU as the blocks themselves. AMD could take a few blocks to build a desktop CPU, or tack on more to create a big multi-core server processor, or snap on a GPU to make a wicked fast chip intended for laptops.
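AMD obviously doesn’t ship its designs as Python, but the Lego idea is easy to sketch in code. In this toy model (the class names are invented for illustration; the eight-cores-per-die figure matches the original Zen die, but everything else is simplified), snapping more identical dies onto a shared fabric is all it takes to scale the core count:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Chiplet:
    cores: int = 8  # one first-generation Zen die carries eight cores

@dataclass
class Package:
    """Toy model of a CPU package: identical dies snapped onto a shared fabric."""
    chiplets: List[Chiplet] = field(default_factory=list)

    def snap_on(self, chiplet: Chiplet) -> "Package":
        self.chiplets.append(chiplet)  # the Infinity Fabric link, in spirit
        return self

    @property
    def total_cores(self) -> int:
        return sum(c.cores for c in self.chiplets)

ryzen = Package().snap_on(Chiplet())                            # one die: 8 cores
threadripper = Package().snap_on(Chiplet()).snap_on(Chiplet())  # two dies: 16 cores
print(ryzen.total_cores, threadripper.total_cores)              # 8 16
```

The design win is the same one Macri describes: validate one die once, then reuse it everywhere, instead of validating a fresh monolithic design for every product tier.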
Or, as happened in 2016—an entirely new type of CPU for AMD.
Released in 2017, Threadripper—perhaps AMD’s most eye-catching CPU launched on Zen—was never on any of Papermaster’s aggressive roadmaps. Instead it was a passion project by a very small group of engineers—like maybe four. A “very, very small group,” Robert Hallock, AMD evangelist for APUs and CPUs, said. He shared the same row of cubicles with them.
The company had wanted to do a really big and powerful desktop CPU for a while, but it had never been feasible, technologically or financially. This tiny group of engineers realised they could take advantage of the chiplet design made possible by Infinity Fabric to build an incredibly powerful desktop CPU to rival Intel’s popular top-end Xeon CPUs. But instead of the 8 cores found on the Xeon processors used in beefy workstations, Threadripper would have a whopping 16 cores—the most ever put in a mass-produced desktop CPU.
Such a CPU should take years to design, test, and push into the marketplace, right? According to Jim Anderson, the tiny team didn’t even approach him about it until January 2016, when they pulled him aside during CES in Las Vegas. He said he immediately recognised the sheer cool factor of the chips and insisted on putting the project on a fast track, taking it from concept to production in a little over a year.
The project was steeped in secrecy, referred to only as Summit P3—which many assumed was a minor offshoot of Ryzen, then called Summit Ridge—and according to Hallock “you were not even allowed to send emails” about the project. Everything, from the look to the speed results, was closely guarded until launch, and even the teams trying to convince PC makers like Dell to adopt the CPU had almost no details on the product itself.
Yet when Threadripper did finally launch in the summer of 2017, it shocked critics and consumers alike and was quickly adopted by some of the largest PC makers in the world—including Alienware, which put it in its top-of-the-line gaming desktops.
But it wasn’t just the audacity of Threadripper’s design that intrigued people. Yes, 16 cores running 32 threads in a desktop PC anyone can build is impressive, but Intel announced an 18-core i9 processor just two weeks after Threadripper was announced. What was surprising was that it was cheap—$1,000 versus $2,000 for Intel’s 18-core processor—and the damn thing was actually just as fast.
More than anything else, this is what has surprised observers about AMD’s latest CPU architecture: the chips are cheaper and usually faster than their Intel competition. You can look at our own reviews of AMD processors over the last year as proof.
Yet AMD isn’t just producing fast, cheap processors, it’s also releasing them at a surprisingly fast clip. In 2017 AMD released four categories of CPUs based on Zen: Ryzen desktop CPUs, Threadripper, Epyc CPUs intended for servers, and Ryzen APUs for laptops (APU is the AMD term for a CPU with integrated graphics). We’re only four months into 2018 and AMD has already released incredibly fast and cheap APUs for the desktop, additional mobile APUs for laptops, and even a second generation of Ryzen CPUs for desktops.
Jim Anderson told Gizmodo he anticipates over 60 new laptops and desktops with Ryzen inside by the end of the year, with most of those systems being laptops, and most of those laptops coming in the next few months. “This will be the widest range of premium notebook systems that you’ve ever seen from AMD in the company’s history.”
How good those laptops are remains to be seen. So far there are only a handful of AMD-based laptops actually in the marketplace, and we still don’t have a good sense of how well a mobile Zen processor handles power compared to its Intel rival.
But if it’s not quite as good at power management, that’s still okay, because Intel has clearly noticed AMD’s strides into the CPU space and responded accordingly. Last year Intel’s top-of-the-line mainstream desktop CPU maxed out at 4 cores, but after AMD released a 6-core processor Intel quickly responded with its own 6-core part. And when AMD surprised us with the 16-core Threadripper, Intel raced to get the 18-core i9 into the marketplace.
A rivalry between the two companies can only ever be good for all of us hunting for our next computer. “You know,” Natalie Jerger told Gizmodo, “the world is much more exciting if we have multiple people trying to innovate in this space.”