Physicist Stephen Hawking died yesterday at the age of 76. In the latter stages of his illustrious career, Hawking devoted a considerable amount of time and effort to issuing warnings of future threats—from the perils of climate change and nuclear war through to artificial superintelligence and alien invasions. And for this he was often ridiculed. But here’s the thing: Hawking was right—and it is incumbent upon all of us to heed his advice.
When Hawking wasn’t talking about Euclidean quantum gravity, naked singularities, or radiation seeping from black holes, there’s a good chance the Cambridge Lucasian Professor of Mathematics was doing his best Chicken Little impersonation, telling a global audience that the sky above would soon give way should we keep ignoring his warnings.
For Hawking, there was no shortage of ways in which the sky could fall. Early in his career he warned us about comets and asteroids, but by the mid-aughts he began to focus his attention on self-inflicted wounds. In 2006, at the age of 64 and with virtually nothing left to prove, Hawking posed the following open question online: “In a world that is in chaos politically, socially and environmentally, how can the human race sustain another 100 years?” Over 25,000 people chimed in with their own opinions, with many asking Hawking for his own advice. “I don’t know the answer,” he replied. “That is why I asked the question.”
That same summer, and in another sign of his mood shift, Hawking told a news conference in Hong Kong that life on Earth “is at the ever-increasing risk of being wiped out by a disaster, such as sudden global nuclear war, a genetically engineered virus or other dangers we have not yet thought of.” This time around, however, he volunteered an answer to the problem: colonise other planets or perish.
Hawking’s view of humanity had turned grim, and by 2010 he was warning of alien invasions, saying, “We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.” In her 2012 book, Stephen Hawking: His Life and Work, author Kitty Ferguson wrote about the physicist’s view of computer viruses and why he thought they were a new form of life. “Maybe it says something about human nature, that the only form of life we have created so far is purely destructive,” said Hawking. “Talk about creating life in our own image.”
More recently, Hawking began to voice his concerns about artificial intelligence. In 2014, he famously warned that AI could be our “worst mistake in history,” and he signed an open letter on AI risks alongside like-minded public figures including SpaceX CEO Elon Musk, physicist Sir Martin Rees, and biologist George Church. “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” he wrote in an Independent op-ed with computer scientist Stuart Russell and physicists Max Tegmark and Frank Wilczek. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” A year later, he added his name to an open letter calling for a ban on autonomous weapons.
By this point in his career, Hawking had begun to sound like a broken record. His repeated calls for off-world colonisation in the face of such risks as “climate change, overdue asteroid strikes, epidemics and population growth” grew monotonous, and people began to tune him out—Gizmodo included. Except for the tabloids, of course, which cheerily repeated his dire warnings without pause.
Doom fatigue aside, Hawking’s death provides us with an opportunity to reflect on his warnings. As someone who has written extensively about the many ways humanity could end its tenure on Earth, I have very little to complain about when it comes to the late physicist’s views.
It sucks to hear, but he was right. We’re in big trouble. And we need to do something about it.
Last year, for example, Oxford’s Global Priorities Project listed asteroid impacts, global warming, artificial intelligence, and global pandemics among humanity’s most pressing near-term risks. With the shifting geopolitical climate, we have no choice but to worry, yet again, about nuclear war. Hawking’s view of malevolent aliens may have violated popular conceptions of friendly extraterrestrial visitors, but he was right to be terrified. At the same time, there’s no shortage of potential existential risks in our future, whether from a poorly programmed artificial superintelligence, a nanotechnology-powered apocalypse, or a retreat into a dystopian totalitarian dark age.
Of course, Hawking didn’t pull these threats out of thin air, nor was he the only one issuing such warnings. He just happened to be exceptionally vocal about them, and because of his extraordinary reach, he was able to communicate his message to a large global audience. That’s why he got branded as a Chicken Little, and why we became so inclined to associate these doom-and-gloom scenarios exclusively with him.
The best way to honour Hawking’s legacy, in my opinion, is to take inspiration from his admonitions and his persistence. He may have sounded misanthropic at times, but his warnings came from a good place. Despite the physical hardships he had to endure for so many years, Hawking never gave the impression that he had given up on his own life, and by virtue of his ceaseless warnings, he never gave up on humanity either.
Yes, the future looks scary—but as Hawking reiterated time and time again, the worst thing we can do when threats appear on the horizon is to bury our heads in the sand.