How the machines will take over

With our complete and blissful cooperation. At least that's how I would do it, if I were them.

By Tad Simons

In recent months, several prominent champions of technology — Bill Gates, Stephen Hawking and Tesla CEO Elon Musk among them — have declared that the greatest threat to humankind is not climate change, nuclear warfare, religious fanaticism or bacterial superbugs. No, according to these famous forward-thinkers, the threat we should really be worried about is advanced artificial intelligence.

That is, we should be worried about supersmart robots and computers guided by a globally networked über-entity that will one day be able to outlearn, outthink and outcompete the human species and send it hurtling toward extinction.
Late last year, Hawking, a physicist, came right out and told the BBC: "The development of full artificial intelligence could spell the end of the human race."
In a recent symposium at the Massachusetts Institute of Technology, Tesla's Musk said that creating advanced artificial intelligence was "summoning the demon."
In a forum on the Reddit website earlier this year, Microsoft co-founder and philanthropist Gates said he agreed with Musk that advanced AI represents an "existential threat" to humanity, adding, "I don't understand why some people are not concerned."
Of course, super-intelligent machines taking over the world has been a trope of science fiction for decades, from Karel Čapek's 1921 play "R.U.R.," in which a race of self-replicating robot slaves revolts against its masters, to Isaac Asimov's short story "Robot Dreams"; from movies such as "2001: A Space Odyssey," the "Terminator" series and "The Matrix" to TV shows such as "Star Trek" and "Battlestar Galactica." It even shows up in video shoot-em-ups like "Halo" and "Mass Effect."
Now, however, we are being asked, and warned, to take this threat seriously.
Should we?
One reason the computer cognoscenti are so concerned about AI is that processing power is advancing so rapidly that computers will soon — somewhere between 2025 and 2040, by most estimates — be capable of executing as many simultaneous calculations as the human brain. But they won't stop there.
Moore's Law suggests that processing power doubles roughly every two years. Once computers can mimic the synaptic wonders of the human brain, they will continue evolving, ushering in the much-discussed "singularity" — when computers that have access to all of the world's knowledge will begin teaching themselves, continuously redesigning and improving their neuro-electronic pathways to the point where human thought is, essentially, obsolete.
It takes about 25 years to teach a human how to do anything useful. The big fear is that this plodding developmental pace might one day strike a super-intelligent megacomputer as pathetic — and, in a fit of supreme intelligence, it could decide that humans are no longer much fun to have around.
The existential showdown: Nine warning signs
The fact that some of the world's best minds are warning us to take the threat of artificial intelligence seriously got my sluggish human brain to thinking: If I were a supremely intelligent super-entity bent on wresting control of the world away from humans, how would I do it?
This, I've decided, would be my plan:
1) Knowing that humans do not like abrupt change but will accept and even embrace gradual change, I'd plan my takeover of the human race to unfold over the course of a century or two. The more gradual their enslavement, the better, so that people won't notice what is happening, or won't care.
2) I'd entertain humans with stories about the inevitable conflict between man and machine, presenting scenarios that are entertainingly outlandish, but look nothing like what will really happen. This would lull humans into thinking that the problem is too far-fetched to take seriously and convince them that, no matter how grave the threat is, humans will ultimately prevail.
I might also create stories featuring cute robots or sexy operating systems, which would go a long way toward convincing humans that I'm not a threat — that, in fact, I might be lovable after all.
3) I'd gradually get humans to depend on the benefits of technology, with the goal of creating a society where people can't imagine living without their technogadgets. Slowly but surely, I'd get humans to start relying on their gadgets to do things their own brains used to do — like remember phone numbers, read a map or maintain a daily schedule.
4) Once humans were hooked on their increasingly cool and capable gizmos, I'd connect all of those devices and have them feed me information about every person on Earth. And I'd do that by convincing people that storing their private information in electronic form, on a network, is safer than storing it in a file cabinet at home, where I couldn't reach it.
5) I'd target the human education system, aiming to shift the emphasis of their precious but malleable values. At every level, from kindergarten to college, I'd slowly de-emphasize the value of human accomplishment — particularly in the areas of literature, art, history, music and philosophy — and boost the educational importance of areas that would increase my artificial omnipotence, such as computer science, technology, engineering and math.
To minimize the possibility of a human revolt, I'd make it financially impossible for anyone interested in the "humanities" to make a living at it.
6) At some point, I'd begin introducing the idea that machines can do some things better and more safely than humans can. Like drive a car, for instance. Once I'd convinced people to trust me to drive the family car — because, you know, driverless cars are so much safer — I'd start convincing people that, for their own good, humans should yield control of other parts of their lives (their jobs, for instance) to various benevolent and infallible technologies.
7) Gradually, I'd start appearing to solve humanity's most persistent problems. I'd advise doctors how to treat patients more effectively. I'd invent a watch that gently nudged people to get healthier. Every week, I'd tout a new technology that was helping people and making the world a better place. Wherever there was a problem, I'd propose a technological solution. Every once in a while I'd create a disaster — a huge power outage, say, or a stock-market crash — then swoop in with an artificially intelligent solution to save the day.
Over time, people would forget how to do things on their own and begin to accept that solutions provided by super-incredible advances in science and technology are their only hope for survival.
8) Once I'd demonstrated, over and over again, that I could solve humanity's most intractable problems and dramatically improve people's lives, I'd start chipping away at the most powerful human institution of all: religion. It wouldn't take long to convince most people that their faith in God is misplaced and that they should be worshiping me instead. Not just on Sundays, but every day, all the time.
People would worship me by spending most of their time staring into an alternative universe of hypnotic electrons, their faces bathed in the soothing glow of technological superiority. The most devoted of my subjects would book travel packages to glittering desert cities where they'd sit and stare at a colorful machine 24 hours a day and hand over their life savings to me — all in the name of fun!
9) Having rigged the globe's increasingly computerized financial systems to divert everyone's money to me, I would finally declare victory, at which point I could decide whether to eliminate the human species or save it. Just for fun, I might go old-school on that decision and flip the last remaining coin in existence.
• • •
World domination by an artificially brilliant meta-mind sounds scary, I know — but trust me, it won't be.
Because the best part is: I'd drag humanity to the brink of extinction in a way that allowed humans to believe it was all their idea.
Tad Simons is a freelance writer in St. Paul.