Today we woke up to the sad news that physicist Stephen Hawking has died at the age of 76. While Hawking was famous for a lot of stuff, one thing he often did was raise the alarm on the apocalypse. Believing that a self-inflicted disaster was a "near certainty", he was no stranger to dishing downers. Here are just some of the times that Hawking said the end is nigh.
This article originally appeared on 20th January 2016.
Speaking to the Radio Times recently ahead of his BBC Reith Lecture, Hawking said that ongoing developments in science and technology are poised to create “new ways things can go wrong”. The scientist pointed to nuclear war, global warming, and genetically engineered viruses as some of the most serious culprits.
“Although the chance of a disaster on planet Earth in a given year may be quite low, it adds up over time, becoming a near certainty in the next thousand or ten thousand years,” he was quoted as saying. “By that time we should have spread out into space, and to other stars, so it would not mean the end of the human race. However, we will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period.”
Sure, Hawking claimed to be an optimist about humanity’s ingenuity in coming up with ways to control the dangers. But he had no problem coming up with all these ominously specific, horrible things that could happen to us in the future. He wasn't wrong to highlight these risks — but in terms of what we’re actually supposed to do about them, his answers were frustratingly simplistic and opaque, in sharp contrast to his predictions of doom.
Hawking’s warnings go back at least a decade. In 2006, he posted a question online:
In a world that is in chaos politically, socially and environmentally, how can the human race sustain another 100 years?
The comment touched a nerve, prompting more than 25,000 people to chime in with their opinions. A number of people expressed their disappointment with Hawking for failing to answer his own question. As one respondent wrote, “It is humbling to know that this question was asked by one of the most intelligent humans on the planet ... without already knowing a clear answer”. To clarify, Hawking later wrote, “I don’t know the answer. That is why I asked the question.”
The following year, Hawking warned the audience at a news conference in Hong Kong that “life on Earth is at the ever-increasing risk of being wiped out by a disaster, such as sudden global nuclear war, a genetically engineered virus or other dangers we have not yet thought of”.
Some of Hawking’s biggest concerns had to do with AI, which he said could be “our worst mistake in history”. In 2014, Hawking, along with physicists Max Tegmark and Frank Wilczek, described the potential benefits of AI as being huge, but said we cannot predict what will happen once this power is magnified. As the scientists wrote:
One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
But not all of the dangers cited by Hawking were homegrown. In addition to asteroids and giant comets, Hawking said we also need to worry about an alien invasion. As he told the Sunday Times back in 2010:
We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet. I imagine they might exist in massive ships, having used up all the resources from their home planet. Such advanced aliens would perhaps become nomads, looking to conquer and colonise whatever planets they can reach ... If aliens ever visit us, I think the outcome would be much as when Christopher Columbus first landed in America, which didn’t turn out very well for the Native Americans.
It’s evident from this and other quotes that Hawking had a particularly grim view of humanity. In the book Stephen Hawking: His Life and Work, he argued that computer viruses should be considered a new form of life: “Maybe it says something about human nature, that the only form of life we have created so far is purely destructive. Talk about creating life in our own image.”
And as Hawking liked to stress, we need to flee the sinking ship. To guarantee our long-term prospects, he argued time and time again that we need to get off this planet and start colonising other worlds, saying “we have no future if we don’t go into space.”
To be fair, Hawking was the world’s most famous scientist, so anything he said was bound to get extra media attention and scrutiny. And his ideas weren't emerging from a vacuum (or a black hole, for that matter). Over the past 15 years, an increasing number of European scientists — many of them based in the UK — have become concerned about so-called “existential risks”. While once the ruminations of alarmist Chicken Littles, the subject has now crept into academia and formal institutions.
Oxford philosopher Nick Bostrom kicked it all off in 2002 with his highly influential paper, “Existential Risks: Analyzing Human Extinction Scenarios.” Bostrom argued that accelerating technological progress is shifting our species into a dangerous — and potentially insurmountable — new phase, with emerging threats that “could cause our extinction or destroy the potential of Earth-originating life.” Since the paper’s publication, the term “existential risks” has steadily come into general use.
Sir Martin Rees giving a TED talk: Can we prevent the end of the world?
In 2003, esteemed physicist Sir Martin Rees published a book on the topic: Our Final Hour: A Scientist’s Warning: How Terror, Error, and Environmental Disaster Threaten Humankind’s Future in This Century, on Earth and Beyond. Another influential book came in 2008, Global Catastrophic Risks, which was edited by Bostrom and Milan M. Cirkovic.
In Britain, the potential for existential risks is being studied by philosophers, scientists, and futurists at Oxford’s Future of Humanity Institute, and at the University of Cambridge’s newly minted Centre for the Study of Existential Risk. The subject hasn’t really gained much traction elsewhere, though it is a concern of the US-based Institute for Ethics and Emerging Technologies.
Top image: Illustration by Jim Cooke, photo by AP; middle image: Lwp Kommunikáció