In 2010, a pair of researchers published a controversial economics paper. It was cited by British MPs to justify austerity measures that sparked economic and employment crises, and anti-austerity protests—measures that the UN later called “punitive, mean-spirited, and often callous” inflicting “great misery.” In 2013, however, this widely influential paper was found to have been substantially off in its estimates, thanks in part to a simple spreadsheet error: specifically, “a few rows left out of an equation to average the values in a column,” the Guardian wrote at the time.
This famous foul-up is just one of many instances when digital predictions have let us down, creating a sharp contrast between the reality of things and what the numbers foretold.
For nearly 40 years, people who finance and shape world markets have relied on these kinds of predictions, using digital tools to calculate the potential risks, benefits, and long-term scenarios of each product or investment. Folks who have studied finance’s transition into technology, or who saw it first-hand, say these innovations can all be traced back to one game-changing kind of software: the spreadsheet.
“I liken spreadsheets to a computer game for executives.”
With the rise of spreadsheets and personal computers, the age-old trading industry and “stock market”—which had previously relied on clay tablets, telescopes, or telegraphs for a competitive edge—has also built brand-new realms of monetary activity, often seeking to tie tomorrow’s revenues into today’s bottom line.
According to some experts, the notoriously imperfect spreadsheet could also be responsible for a certain fallibility that seems endemic to modern markets—in other words, creating vulnerabilities in our financial system and ways of using data that only hindsight can reveal.
Number-crunching and risk: a (very) brief history
According to tech historian Martin Campbell-Kelly, the above-mentioned 2010 spreadsheet error became “an absolute calamity” for policymakers who had hailed the paper in the UK and the US. But it wasn’t exactly surprising.
Campbell-Kelly, professor emeritus at the University of Warwick and a well-known expert in computer history, said in a phone interview that relying on intel from electronic spreadsheets has always involved a certain amount of risk.
“I liken spreadsheets to a computer game for executives,” he said. “They simulate real-world situations, and you can change the parameters to see how different financial scenarios play out.”
Like the large paper worksheets that earlier generations of accountants and financiers spread across tables to fill in at length (hence the new term “spreadsheet”), electronic versions have a fairly simple layout: large grids, arranged into columns and rows, allow users to log and compare their data side-by-side as numerical values, such as the cost of this or that product over time.
Unlike traditional worksheets, which required tens or hundreds of hours of doing complicated maths by hand, electronic spreadsheets have offered to do much of the work for users, using built-in formulas to calculate hundreds of values according to numerous variables.
When spreadsheet software became widely known in the 1980s, finance workers could suddenly spend less of their days using pen, paper, and the still-in-use HP-12C calculator to run figures, though the earliest programs could still take a couple of hours to complete calculations.
This technology also automatically presented pros and cons, Campbell-Kelly said. “With physical spreadsheets, it was much longer work, but you’d have a much more intimate knowledge of what’s actually inside it. With computer spreadsheets, there are underlying forms inside them that you likely won’t be aware of.”
Overall, he said, “There’s a real problem with spreadsheets, which has always been a real problem: they have very poor error-checking capabilities. You can check for some things, like circular definitions, where two variables are codependent in a way they shouldn’t be.”
“But if you have a mistake on a spreadsheet, it’s actually very hard to detect—not like with a computer program, where there are tools for checking the software in general. The average computer program is much more reliable than the average spreadsheet.”
“They are actually quite vulnerable to small errors, and very brittle, with no mechanism for telling you if they’ve gone off the rails.”
To wit, he said, “An analysis in the ‘80s found that a surprisingly high fraction of spreadsheets contained logical errors.” In short, these kinds of programming errors can cause a program to operate incorrectly—by producing incorrect or unintended results, for example—without causing it to shut down or ‘crash,’ which would make the problem apparent.
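The circular definitions Campbell-Kelly mentions are one of the few errors spreadsheets can catch, because they show up as a loop in the cell dependency graph. A minimal sketch of such a check (the cell names and dependency map here are hypothetical, not any real spreadsheet's internals):

```python
def find_cycle(deps):
    """deps maps a cell to the cells its formula reads.
    Returns True if any chain of references loops back on itself."""
    WHITE, GREY, BLACK = 0, 1, 2   # unvisited / in progress / finished
    state = {cell: WHITE for cell in deps}

    def visit(cell):
        state[cell] = GREY
        for ref in deps.get(cell, []):
            if state.get(ref, WHITE) == GREY:   # back-edge: a reference loop
                return True
            if state.get(ref, WHITE) == WHITE and ref in deps and visit(ref):
                return True
        state[cell] = BLACK
        return False

    return any(state[c] == WHITE and visit(c) for c in deps)

# A1 = B1 + 1 and B1 = A1 + 1 reference each other: circular.
print(find_cycle({"A1": ["B1"], "B1": ["A1"]}))   # True
# A1 = B1 + C1, where B1 and C1 hold plain constants: fine.
print(find_cycle({"A1": ["B1", "C1"]}))           # False
```

A plain logical error—a formula that averages the wrong rows, say—produces no such loop, which is exactly why it sails through undetected.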
Campbell-Kelly pointed out that the quality of the data that users plug into spreadsheets can easily make or break the reliability of the results, too.
He recalled a phrase that’s come to signify this kind of problem among programmers and other number-handlers: “Garbage in, garbage out.” In other words, if the information on a given topic that’s entered into the program is shoddy or narrowly representative, and/or the program is error prone, the data it produces will probably be bunk.
Yet we inherently seem to trust these kinds of figures, hence the alternate version of that phrase: “Garbage in, gospel out.”
Spreadsheets drive the digital revolution, despite rough edges
By the late 1970s, workers on Wall Street were already using rudimentary email processes, putting them among the first to adopt personal computers outside of the sciences, academia, and home hobbyists, according to technologist David Wolfe. But finance’s love affair with computers really took off in the early ‘80s when spreadsheets arrived, and firms began providing in-house employee training for this tool—one that, even today, surprisingly few of us feel comfortable with.
At the time, those groundbreaking programs included VisiCalc—the first-ever digital spreadsheet, and “the ‘killer app’ for the Apple II,” Wolfe said—along with Lotus 1-2-3, which offered expanded capabilities in some areas, and similarly boosted IBM’s PCs.
According to Wolfe, co-director of the Innovation Policy Lab at the University of Toronto’s Munk School of Global Affairs and Public Policy, “The spreadsheet immediately started getting picked up by the financial services industry for its ability to do ‘what if’ calculations, like: If the rate changes from 1% to 2%, how will it affect my investment capital?”
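A “what if” of the kind Wolfe describes is just the compound-growth formula re-run under a different assumption. A minimal sketch (the principal and time horizon here are assumed for illustration, not Wolfe's figures):

```python
def future_value(principal, annual_rate, years):
    """Compound a lump sum forward: principal * (1 + r)^n."""
    return principal * (1 + annual_rate) ** years

capital = 1_000_000  # hypothetical investment
for rate in (0.01, 0.02):  # the two "what if" scenarios
    print(f"{rate:.0%}: {future_value(capital, rate, 10):,.2f}")
# 1%: 1,104,622.13
# 2%: 1,218,994.42
```

On paper, re-running a model like this for each scenario meant redoing every figure by hand; in a spreadsheet, changing one cell recalculates the rest.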
Almost immediately, Wall Street also started using the technology to create new, more complex kinds of trading and investments. “It became an incredible time-saving tool, but also started to play into the creation of derivatives,” Wolfe explained. This kind of transaction, which dates back thousands of years in its earliest forms, involves an agreed-upon value for certain resources between two parties over time, and is often used in the hopes of stabilising markets or (perhaps more commonly) raking in percentages if the resources’ real-world value goes up.
“Mortgage derivatives, future derivatives—they came out of the ability to bundle products together, and calculate derivative values for them,” Wolfe said. “That’s where the intersection between personal computing and spreadsheets seemed to happen.”
Jeffrey R. Diehl, Executive Director and CEO of Rhode Island Infrastructure Bank, remembers the transition to spreadsheets clearly, as well as the rise of derivatives and other higher-tech trading: at the time, Diehl had just completed his MBA and taken a job at J.P. Morgan.
Diehl said in a phone interview that he thinks spreadsheets have ultimately helped strengthen and expand the finance industry for the better, but it wasn’t always smooth sailing, especially at the start. And despite the company’s computer training program, Diehl said, his boss had always encouraged him to check the numbers by hand, and to develop the ability “to glance at them and see if they’re off.”
During those years, Diehl primarily worked with currency and bond exchanges, which require complex calculations around international markets. One day, when he was preparing to initiate a first-of-its-kind digital exchange, he took a last glance at the numbers and indeed could tell that they were off. He tried to warn his colleagues, who believed the numbers; late that night, Diehl got a phone call: he had been right, and quickly helped fix the error.
Diehl said it also was apparent that spreadsheets weren’t really designed or equipped at first for finance’s purposes, either. “Because of quirks in the way the programs worked, and limitations in the hardware, we had to be creative in how we could use them.”
“What I’ve found about how people interacted with spreadsheets in the ‘80s is that they didn’t really think of them as replacing or displacing human judgement, but rather as a kind of superhero cape, or prosthesis.”
For example, when J.P. Morgan workers were using Apple IIs, they discovered that they’d need to calculate financial periods in different-sized chunks, because a sheet might max out at 100 or 200 cells across.
“Then we got one of the first IBM PCs, and we moved from VisiCalc to Lotus 1-2-3,” Diehl said. “One thing that caught us by surprise, anecdotally, is that VisiCalc has its first financial period designated as Period 1. In Lotus 1-2-3, the first cashflow period was Period 0.”
Around that time, his company was considering whether it should move from its Wall Street offices to the World Trade Center, as a long-term strategy. “We were analysing lease vs. buy scenarios in 75 years’ worth of cash flow,” Diehl explained.
“The guy I worked for—the brain trust in our corporate finance group, relied on by the [company’s] treasurer at the time—made me go through all of those scenarios by hand on an HP calculator. That was 300 cash flows in a 75-year period ... But that was actually how we discovered that there was a difference in how Lotus 1-2-3 treated cash flows versus VisiCalc.”
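The discrepancy Diehl describes is easy to reproduce: whether the first cash flow is treated as period 0 (discounted as "today") or period 1 (one period out) shifts the present value of the entire stream. In the sketch below the rate and cash flows are hypothetical, not Diehl's actual figures:

```python
def npv(rate, cashflows, first_period):
    """Discount each cash flow by (1 + rate)^period."""
    return sum(cf / (1 + rate) ** (first_period + t)
               for t, cf in enumerate(cashflows))

flows = [100.0] * 300      # 300 equal cash flows, as in the anecdote
quarterly_rate = 0.02      # assumed per-period rate

pv_lotus = npv(quarterly_rate, flows, first_period=0)     # Lotus 1-2-3 convention
pv_visicalc = npv(quarterly_rate, flows, first_period=1)  # VisiCalc convention
print(pv_lotus / pv_visicalc)  # ratio is exactly 1 + rate
```

Every flow is discounted one period less under the period-0 convention, so the two totals differ by a full factor of (1 + rate)—a gap large enough to swing a 75-year lease-versus-buy decision.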
By the early 1990s, however, both programs had lost the throne to another piece of software, which came packaged with a whole office suite on that company’s machines: Microsoft Excel.
It wasn’t the best or most innovative spreadsheet, according to most tech historians. But in the next three decades, Excel would become ubiquitous, from Wall Street to small businesses.
Throughout the ‘80s and early ‘90s, big stacks of printouts were also the “go-to item” on Wall Street during meetings, according to Will Derringer, assistant professor of science, technology, and society at MIT, and a former investment banking analyst at the Blackstone Group.
Those stacks of paper would often go unread, but they demonstrated that high-tech calculations had been done. “It became important to perform your engagement with that way of thinking,” Derringer said.
“Spreadsheets give this impression of objectivity, a certain accuracy, because they look complicated: there are lots of digits, you can put all these ideas and data points together and create these elaborate structures, and produce what seem to be incredibly precise answers.”
“But they are actually quite vulnerable to small errors, and very brittle, with no mechanism for telling you if they’ve gone off the rails.”
The ‘Junk Bond King’ falls, but Wall Street soldiers on
After spreadsheets hit Wall Street, financial innovations abounded, but with mixed outcomes. Some new tools or techniques faced criticism, while some innovators, once praised for their visions, went down in flames. Others became integral to the market we know today.
“What I’ve found about how people interacted with spreadsheets in the ‘80s is that they didn’t really think of them as replacing or displacing human judgement, but rather as a kind of superhero cape, or prosthesis,” Derringer said.
“They offered ways of thinking that financial practitioners were naturally inclined toward, [as they] already thought in terms of what’s going to happen in the future, what will it mean today, and how will that reflect back.”
“Calculators could do a simple version of this, but spreadsheets let you use that idea—projecting things that will happen in the future, and assigning value today—in an incredibly powerful way.”
Throughout the ‘80s, for example, one of the most noticed and admired figures experimenting with numbers on Wall Street was Michael Milken, a Wharton MBA recipient and record-breaking financier.
“Milken, the so-called junk bond king, in some ways created the DNA of an entirely new financial market, with his use of things like high-yield bonds, and was by far the most influential financier of the 1980s,” Derringer said. “He used these particular instruments as he’d more or less invented them to, and developed ways to ‘sell’ a lot of the new forms of financial transactions, such as leveraged buyouts and hostile takeovers.”
Milken later pleaded guilty to securities and tax violations for his part in a Wall Street scandal near the decade’s end. As a result, Milken served 22 months in prison and paid hundreds of millions in fines; as of 2018, he also had a net worth of around $3.7 billion (£3.1 billion).
Despite the legal troubles of Milken and his colleagues, many of those same techniques also lived on into the ‘90s, 2000s, and beyond. According to Harvey A. Silverglate, Milken’s one-time lawyer, “Milken’s biggest problem was that some of his most ingenious but entirely lawful manoeuvres were viewed, by those who initially did not understand them, as felonious, precisely because they were novel–and often extremely profitable.”
“They really make clear that people at the top of financial institutions did not fully understand how instruments designed and used by people under them worked. They made billion-dollar bets without understanding what they were doing.”
During the same window, while Milken was charming investors and earning billions, another spreadsheet-powered tool was changing finance forever, Derringer said: the ‘present value’ calculation, “considered by some to be the most important invention in the history of modern finance.”
In short, he said, “It allows you to put a value on money that’s going to be achieved in the future. A standard financial calculation that allows us to say, what is the worth today of $100 in 20 years; or, what would you have to save today to have $100 in 20 years, including compound interest?” Previously, that kind of multifaceted calculation would have effectively been impossible; with spreadsheets, it could be performed in minutes.
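The two questions Derringer poses are inverses of the same compounding formula, PV = FV / (1 + r)^n. A minimal sketch, with the discount rate assumed at 5% purely for illustration:

```python
def present_value(future_amount, rate, years):
    """What a sum received `years` from now is worth today,
    assuming compound interest at `rate` per year."""
    return future_amount / (1 + rate) ** years

rate = 0.05  # assumed annual rate, compounded yearly
# Worth today of $100 received in 20 years:
print(round(present_value(100, rate, 20), 2))  # 37.69
```

By the same logic, roughly $37.69 saved today at 5% compound interest grows back to $100 in 20 years, which answers Derringer's second question with the same number. A spreadsheet lets an analyst apply this cell-by-cell across decades of projected cash flows at once.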
It’s also one of many signs that financiers in the 1980s were looking ever further into the future with their transactions, Derringer said—novel at the time, but the normal state of things today.
“When [present value calculation] first arose, people thought it was kind of strange and counterintuitive,” he explained. “Now, in finance, people treat it like gravity—so obvious that it’s impossible to imagine how anyone would ever think otherwise.”
Life and finance in a post-Big Short world
In 2010, journalist Michael Lewis made waves with his nonfiction book The Big Short: Inside the Doomsday Machine, which offered a closer look at the 2007-2008 financial crisis, as well as those who dared to bet against the market. Among other things, the book detailed some of the trading behaviours and tools that drove this economic catastrophe.
These include leveraged buyouts and consolidation, collateralised debt obligations, and mortgage-backed securities—methods that can be traced back to spreadsheets’ arrival, and are part of what David Wolfe calls “mathematisation in the financial services industry.”
By shifting much of the actual number work onto techno-tools, Wolfe said, financial firms have widened the gap between decision-makers and the real-world impacts their work achieves.
Regarding his colleagues who worked on The Big Short, Wolfe commented, “They really make clear that people at the top of financial institutions did not fully understand how instruments designed and used by people under them worked. They made billion-dollar bets without understanding what they were doing. They knew the upsides, but not the downsides.”
Incidentally, the so-called “junk bond king” may have argued something similar, Derringer said; in a nutshell, Milken hinted at technological determinism, a concept that lives on at big firms today.
“Apparently at one point he said that, actually, if you wanted to identify who were the real culprits for the new, freewheeling, sort of morally boundary-crossing forms of finance that developed in the ‘80s, the people to really look at were the inventors of VisiCalc,” Derringer said.
“People invest in those models until they break, and break spectacularly.”
“It’s really interesting to hear financiers try to shift the blame onto the technology,” he continued. “That’s a broader phenomenon we see in a lot of technical domains: an attempt to disavow a certain kind of human responsibility for certain social changes and social problems by claiming that it was all in the technology, beyond anyone’s control. And they’ll play both sides of it.”
According to Derringer, the increased embedding of these kinds of number-crunching tools in our social systems also poses multiple, meaningful risks, whoever’s pointing the finger; for one thing, the way that decisions are made and services offered for millions of people could become—like today’s financial market—even more opaque.
Overall, Derringer said that his research on finance and tech trends over the past few decades suggests “very schizophrenic” behaviour toward data and numbers in our society.
“On one hand, we’re very used to granting a certain kind of authority to quantitative evidence, and treating it as stronger than qualitative evidence. But at the same time, we all know examples of how statistics are fallible, and can easily go awry.”
“It’s an interesting phenomenon: we have deep faith in numbers and also deep scepticism that seem to be intertwined. We’re much more measured in how we treat qualitative evidence, like first-hand testimony, for example, but with numbers, it’s much more dichotomous.”
“The phenomenon with spreadsheets has really interesting examples,” Derringer added. “People invest in those models until they break, and break spectacularly.”
Derringer said he’s hopeful that the rise of data journalism, among other things, could help our culture develop a healthier (or at least more measured) mindset around what data really provides.
“One of its promises as a new mode is kind of recalibrating some of our expectations, and paying deep attention to quantitative evidence but in a much richer context, which considers the data and assumptions that went into it.”
Critics of that controversial 2010 economic austerity paper, which has faced challenges on multiple levels since its publication, seem to agree. As economist Paul Krugman opined for the New York Times in 2013, the paper’s problems went far deeper than its spreadsheet cells—into the human realm, in fact: “First, they omitted some data; second, they used unusual and highly questionable statistical procedures; and finally, yes, they made an Excel coding error,” he wrote.
Fund manager David Schuchman put it another way for Forbes that year, arguing that focusing on the study’s Excel error misses the point. “In reality, the only lesson to be drawn from this episode is that academic economics, like many social sciences, is grounded in hubris and pseudo-precision,” he wrote. “And that the modern urge to demand an academic study to ‘prove’ or justify inherently complex and ambiguous decisions is antithetical to clear thinking.”
Going forward, Derringer said, “I hope we can treat numerical evidence in a much richer, more sceptical, and more respectful way.”
Janet Burns is a freelance writer based in Brooklyn.
Featured image: Illustration: Elena Scotti (Photos: Getty Images)