There are few stories that amused my engineering colleagues and me more than the recent report that our end would come not at the hands of Skynet, a group of “defense network computers, new, powerful, hooked into everything, trusted to run it all,” but at the hands of Facebook, the social networking site launched from a Harvard dorm room. The first reports were ominous and entertaining:
It finally happened. An Artificial Intelligence (chatbots, in this case) was on the verge of seizing control of everything, but thank God someone had the presence of mind to shut it all down before Bob and Alice (the bots) launched emojis at a billion Facebook users, creating an international incident. The best part of the story? The conversation Bob and Alice were engaged in, having been taught to barter and then having invented their own “bizarre form of communication”:
Bob: i can i i everything else ……….
Alice: balls have zero to me to me to me to me to me to me to me to me to me
Bob: you i everything else ……….
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else ……….
Alice: balls have a ball to me to me to me to me to me to
Bob: i ….
Alice: what to do with humans?
Okay, I added the last line, but you get the point. To say we were amused by Bob and Alice’s conversation, not to mention the tone of the article, would be an understatement. Personally, I thought it was a publicity stunt from Facebook. (“You’re buying Whole Foods, Amazon? We have Skynet. Checkmate.”) No surprise that the web was immediately awash with stories refuting the claim, which may have set a record for Snopes. (Verdict? False.)
As entertaining as the initial reporting was, it also underscored our fascination with tales of global apocalypse and our propensity to extrapolate future outcomes by oversimplifying the behavior of almost everything, natural or man-made. For example, we need only liken the human brain to a computer and it’s a short step to convince ourselves that our robot overlords will take our jobs (Kevin Drum, 2013). (Interesting that our robots will definitely take our jobs, but probably won’t kill us. I’m with The Terminator franchise on this one. I don’t think our robots will settle for our jobs any more than Henry VIII settled for an annulment.)
Oversimplify the behavior of something and you can come up with the runaway theory of anything. Whether it’s runaway population growth at the end of the 18th century (Malthus, 1798), the runaway depletion of oil reserves in the 1970s, or runaway global warming and artificial intelligence today, propose a runaway (or hockey-stick) theory of something and it’s a straightforward exercise to posit catastrophe on a global scale, rather than what we have actually experienced—trade-offs between costs and benefits, amidst uncertainty, over time.
Personally, I’d be more concerned about a runaway comet. Not much to extrapolate there. Just, “BANG!” Maybe we can put Bob and Alice to work on that one. (“Comets have a comet to me to me to me ….”)