As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever, come into existence.
Illustration by Jim Cooke
As I was putting this list together, it became obvious to me that many of the technologies described below could be put to tremendously good use. It was important, therefore, for me to make the distinction between a technology per se and how it might be put to ill use. Take nanotechnology, for example. Once developed, it could be used to end scarcity, clean up the environment, and rework human biology. But it could also be used to destroy the planet in fairly short order. So, when it comes time to develop these futuristic technologies, we’ll have to do it safely and responsibly. But just as importantly, we’ll also have to recognise when a particular line of technological inquiry is simply not worth the benefits. Artificial superintelligence may be a potent example.
That said, some technologies are objectively evil. Here’s what Patrick Lin, the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, had to say about this:
The idea that technology is neutral or amoral is a myth that needs to be dispelled. The designer can imbue ethics into the creation, even if the artifact has no moral agency itself. This feature may be too subtle to notice in most cases, but some technologies are born from evil and don’t have redeeming uses, e.g., gas chambers and any device here. And even without that point (whether technology can be intrinsically good or bad), everyone agrees that most technologies can have both good and bad uses. If there’s a greater likelihood of bad uses than good ones, then that may be a reason not to develop the technology.
With all that out of the way, here are 10 bone-chilling technologies that should never be allowed to exist (listed in no particular order):
1. Weaponised Nanotechnology
Nothing could end our reign here on Earth faster than weaponised — or severely botched — molecular assembling nanotechnology.
Image: scene from The Animatrix
It’s a threat that stems from two extremely powerful forces: unchecked self-replication and exponential growth. A sufficiently nihilistic government, non-state actor, or individual could engineer microscopic machines that consume our planet’s critical resources at a rapid-fire rate while replicating themselves in the process and leaving useless by-products in their wake — a residue futurists like to call “grey goo”.
Nanotechnology theorist Robert Freitas has brainstormed several possible variations of planet-killing nanotech, including aerovores (a.k.a. grey dust), grey plankton, grey lichens, and so-called biomass killers. Aerovores would blot out all sunlight, grey plankton would consist of seabed-grown replicators that eat up land-based carbon-rich ecology, grey lichens would destroy land-based geology, and biomass killers would attack various organisms.
According to Freitas, a worst case scenario of “global ecophagy” would take about 20 months, “which is plenty of advance warning to mount an effective defence”. By defence, Freitas is referring to countermeasures likely involving self-replicating nanotechnology, or some kind of system that disrupts the internal mechanisms of the nanobots. Alternatively, we could set up “active shields” in advance, though most nanotechnology experts agree they will be useless. Consequently, a moratorium on weaponised nanotechnology should be established and enforced.
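The danger of unchecked self-replication comes down to simple doubling arithmetic. As a rough sketch (the replicator mass, biomass figure, and doubling time below are illustrative assumptions, not numbers from Freitas’s analysis), even a single microscopic replicator needs only about 100 doublings to rival the mass of Earth’s biosphere:

```python
import math

# Illustrative (assumed) parameters -- not figures from Freitas's paper:
replicator_mass_kg = 1e-15   # a single 1-picogram nanoreplicator
biomass_kg = 1e15            # order-of-magnitude estimate of Earth's biomass
doubling_time_s = 100        # assumed replication period

# Number of doublings for the replicator population to match the biomass:
doublings = math.log2(biomass_kg / replicator_mass_kg)
elapsed_hours = doublings * doubling_time_s / 3600

print(f"{doublings:.0f} doublings -> about {elapsed_hours:.1f} hours")
# roughly 100 doublings, i.e. under 3 hours under these assumptions
```

Freitas’s 20-month worst-case figure is far longer than this naive result precisely because real replicators would be throttled by energy, raw materials, and heat dissipation — but the unchecked doubling maths shows why the scenario is treated as an existential risk rather than a slow-burn one.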
2. Conscious Machines
It’s generally taken for granted that we’ll eventually imbue a machine with artificial consciousness. But we need to think very seriously about this before we go ahead and do such a thing. It may actually be very cruel to build a functional brain inside a computer — and that goes for both animal and human emulations.
Back in 2003, philosopher Thomas Metzinger argued that it would be horrendously unethical to develop software that can suffer:
What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development — we urgently need some funding for this important and innovative kind of research!” You would certainly think this was not only an absurd and appalling but also a dangerous idea.
It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby — no representatives in any ethics committee.
Futurist Louie Helm agrees. Here’s what he told me:
One of the best things about computers is that you can make them sum a million columns in a spreadsheet without them getting resentful or bored. Since we plan to use artificial intelligence in place of human intellectual labour, I think it would be immoral to purposely program it to be conscious. Trapping a conscious being inside a machine and forcing it to do work for you is isomorphic to slavery. Additionally, consciousness is probably really fragile. In humans, a few miscoded genes can cause Down Syndrome, schizophrenia, autism, or epilepsy. So how terrible would it feel to be a slightly misprogrammed form of consciousness?
For instance, several well-funded AI developers want to recreate human intelligence in machines by simulating the biological structure of human brains. I sort of hope and expect that these near-term attempts at cortical simulations will be too coarse to really work. But to the extent that they do work, the first “success” will likely create cripplingly unpleasant or otherwise deranged states of subjective experience. So as a programmer, I’m generally against self-aware artificial intelligence. Not because it wouldn’t be cool. But because I’m just morally opposed to slavery, torture, and unnecessary code.
3. Artificial Superintelligence
As Stephen Hawking declared earlier this year, artificial intelligence could be our worst mistake in history. Indeed, as we’ve noted many times before here on io9, the advent of greater-than-human intelligence could prove catastrophic.
The introduction of systems far faster and smarter than us would force us to take a back seat. We’d be at the mercy of whatever the artificial superintelligence decides to do — and it’s not immediately clear that we’ll be able to design a friendly AI to prevent this. We need to solve this problem, otherwise building an ASI would be absolutely nuts.
4. Time Travel
I’m actually not much of a believer in time travel (i.e. where are all the time travellers?), but I will say this — if it’s possible, we’ll want to stay the hell away from it.
It would be so crazily dangerous. Any scifi movie dealing with contaminated timelines should give you an idea of the potential perils, especially those nasty paradoxes. And even if some form of quantum time travel is possible — in which completely new and discrete timelines are created — the cultural and technological exchange between disparate civilisations couldn’t possibly end well.
5. Mind Reading Devices
The prospect exists for machines that can read people’s thoughts and memories at a distance and without their consent. This likely won’t be possible until human brains are more intimately integrated within the web and other communication channels.
Last year, for example, scientists from the Netherlands used brain scan data and computer algorithms to determine which letters a person was looking at. The breakthrough hinted at the potential for a third party to reconstruct human thoughts at an unprecedented level of detail, including what we see, think, and remember. Such devices, if used en masse by some kind of totalitarian regime or police state, would make life intolerable. It would introduce an Orwellian world in which our “thought crimes” could actually be prosecuted.
6. Brain Hacking Devices
Relatedly, there’s also the potential for our minds to be altered without our knowledge or consent. Once we have chips in our brain, and assuming we won’t be able to develop effective cognitive firewalls, our minds will be exposed to the Internet and all its evils.
Incredibly, we’ve already taken the first steps toward this goal. Recently, an international team of neuroscientists set up an experiment that allowed participants to engage in brain-to-brain communication over the internet. Sure, it’s exciting, but this tech-enabled telepathy could open a Pandora’s box of problems. Perhaps the best — and scariest — treatment of this possibility was portrayed in Ghost in the Shell, in which an artificially intelligent hacker was capable of modifying the memories and intentions of its victims. Now imagine such a thing in the hands of organised crime and paranoid governments.
7. Autonomous Robots Designed to Kill Humans
The prospect of autonomous killing machines is a scary one — and perhaps the one item on this list that’s already an issue today.
Here’s what futurist Michael LaTorra told me:
We do not yet have a machine that exhibits general intelligence even close to the human level. But human level intelligence is not required for the operation of autonomous robots with lethal capabilities. Building robotic military vehicles of all sorts is already achievable. Robot tanks, aircraft, ships, submarines, and humanoid-shaped soldiers are possible today. Unlike remote-controlled drones, military robots could identify targets and destroy them without a human giving the final order to shoot. The dangers of such technology should be obvious, but it goes beyond the immediate threat of “friendly fire” incidents in which robots mistakenly kill people from their own side of a conflict, or even innocent civilians.
The greater danger lurks in the international arms race that could be set off if any nation deploys autonomous military robots. After a few cycles of improvement, the race to develop ever more powerful military robots could cross a threshold in which the latest generation of autonomous military robots would be able to outfight any human-controlled military system. And then, either by accident (“Who knew that Artificial Intelligence could emerge spontaneously in a military robot?”) or by design (“We didn’t think hackers could re-program our military robots remotely!”) humankind might find itself crushed into subservience, like the helot slaves of Spartan AI overlords.
8. Weaponised Pathogens
This is another bad one that’s disturbingly topical. As noted by Ray Kurzweil and Bill Joy back in 2005, publishing the genomes of deadly viruses for all the world to see is a recipe for destruction. There’s always the possibility that some idiot or a fanatical group will take this information and either reconstruct the virus from scratch or modify an existing virus to make it even more virulent — and then release it onto the world. It has been estimated, for example, that the engineered Avian Flu could kill half of the world’s humans.
Just as disturbingly, researchers from China combined bird and swine flus to create a mutant airborne virus. The idea, of course, is to know the enemy and develop possible countermeasures before an actual pandemic strikes. But there’s always the danger that the virus could escape from the lab and wreak havoc in human populations. Or that the virus could be weaponised and unleashed. There’s even the scary potential for weaponised genome specific viruses.
It’s time for authorities to start thinking about this grim possibility before something awful happens. As reported in Foreign Policy, ISIS is certainly one group that already appears ready and willing.
9. Virtual Prisons and Punishment
What will jails and punishment be like when people can live for hundreds or thousands of years? And what if prisoners have their minds uploaded? Ethicist Rebecca Roache offers these horrifying scenarios:
The benefits of…radical lifespan enhancement are obvious — but it could also be harnessed to increase the severity of punishments. In cases where a thirty-year life sentence is judged too lenient, convicted criminals could be sentenced to receive a life sentence in conjunction with lifespan enhancement. As a result, life imprisonment could mean several hundred years rather than a few decades. It would, of course, be more expensive for society to support such sentences. However, if lifespan enhancement were widely available, this cost could be offset by the increased contributions of a longer-lived workforce.
…[Uploading] the mind of a convicted criminal and running it a million times faster than normal would enable the uploaded criminal to serve a 1,000 year sentence in eight-and-a-half hours. This would, obviously, be much cheaper for the taxpayer than extending criminals’ lifespans to enable them to serve 1,000 years in real time. Further, the eight-and-a-half hour 1,000-year sentence could be followed by a few hours (or, from the point of view of the criminal, several hundred years) of treatment and rehabilitation. Between sunrise and sunset, then, the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world (if technology facilitates transferring them back to a biological substrate) or, perhaps, to exile in a computer simulated world.
That’s awful! Now, it’s important to note that Roache is not advocating these punishment methods — she’s just doing some foresight. But holy smokes, let’s never EVER turn this into a reality.
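Roache’s figures, at least, check out: a 1,000-year sentence compressed a million-fold really does fit inside a working day. A quick back-of-the-envelope check (in Python, purely for illustration):

```python
# Sanity check of the speed-up arithmetic in Roache's scenario:
# an uploaded mind running 1,000,000x real time serving a 1,000-year sentence.
speedup = 1_000_000
sentence_years = 1_000

hours_per_year = 365.25 * 24          # including leap days
subjective_hours = sentence_years * hours_per_year
wall_clock_hours = subjective_hours / speedup

print(f"{wall_clock_hours:.2f} wall-clock hours")
# roughly 8.77 hours -- Roache's "eight-and-a-half hours"
```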
10. Hell Engineering
This one’s quite similar to the previous item. Some futurists make the case for paradise engineering — the use of advanced technologies, particularly consciousness uploading and virtual reality, to create a heaven on Earth. But if you can create heaven, you can create hell. It’s a prospect that’s particularly chilling when you consider lifespans of indefinite length, along with the nearly boundless possibilities for psychological and physical anguish.
This is actually one of the worst things I can think of; why anyone would want to develop such a thing is beyond me. It’s yet another reason to ban the development of artificial superintelligence — and to avoid the onset of the so-called Roko’s Basilisk problem.