Two billion years ago, a couple of lonely bacteria bumped into each other in a novel way that resulted in an exchange of genetic material. The practice probably didn’t catch on at once, but when it did it caused evolution to accelerate dramatically. The regular exchange of genes—eventually, sex—enabled the genetic deck to be shuffled every generation, instead of awaiting mutations caused by cosmic rays or other exogenous factors. Each new batch of offspring produced some individuals better suited than their siblings to the existing environment; these fitter critters reproduced in greater numbers, spreading their fitter genes. In time, sex spawned organisms more complex than anything seen before. Finally it produced us.
Computer software originated less than a century ago. Until quite recently it was created one program at a time, by human programmers trained for the work. Artificial intelligence (AI)—software that in effect rewrites itself in response to experience with data sets—is changing this. So far, apparently, software is asexual. At least no one seems to be publicizing examples of programs spontaneously swapping bits of code. Given the secrecy surrounding the different AI projects, it’s not impossible that something like this is already happening. And, of course, human programmers swap sections of programs all the time. But for the AI programs to do it on their own would be a big change.
The biggest consequences of such a change would be the speed of evolution and the unpredictability of its outcomes. Human actions are measured in seconds, computer actions in nanoseconds—a billion times as fast. And in the same way that those first bumping bacteria produced species far, far beyond themselves, so the bumping and grinding between different versions of AI will produce entities we can't even imagine.
This has some people worried. And rightly so. The chatbots that have caused such a stir these last several months are essentially clever versions of the predictive algorithms that offer to complete your search queries and email responses. But future generations will be able to do much more.
Possible scenarios are sobering. The canonical one at the moment involves paperclips. A computer program is instructed to maximize the manufacture of paperclips. It begins by streamlining existing processes. Then it improves the supply chain for needed materials by coordinating—via the internet—with steel manufacturers, machinery designers and the like. In order to secure future raw materials, it sabotages the software of competing firms. It corners the market in iron ore.
To this point we might call the program Rockefeller. Perhaps it has read biographies of the oil tycoon and is mimicking his actions. Perhaps the parallel evolution is coincidental, in the way bats learned to fly, separately and long after birds inherited the ability from dinosaurs. Either way, the program gobbles up suppliers and competitors.
Then it does something shockingly new. As iron ore grows scarce, the program discovers that the average human body contains about four grams of iron. It kills all the humans and mines their corpses. The paperclips keep piling up.
This is overstated for effect. But the capabilities of an untethered AI system are chilling. Which makes even a small probability of untethering worth worrying about. In fact, more likely than murder by AI is manslaughter, where humans aren’t deliberately targeted but turn out to be collateral damage. Stephen Hawking made an analogy in Brief Answers to the Big Questions: “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants.”
Imagine a variant of what certainly is already happening. The cyberwar departments of two countries each employ AI against the other. In such a contest, offensive operations become indistinguishable from defensive ones; an obvious way to prevent an attack is to disable the other side’s system. Without strict human oversight, the dueling AIs will develop increasingly powerful methods of attack, perhaps including disabling the other side’s electrical grid. And because humans can’t keep up with the speed of AI evolution, the necessary oversight will lapse. Shutting down the grid in a modern large country would lead to thousands or millions of deaths in hospitals, in homes that grew too hot or cold, in crashes of airplanes and trains, and from hunger and disease brought on by paralysis of supply chains.
The dueling AIs will infiltrate each other, inferring their own version of Sun Tzu’s dictum to know the enemy. In the crucible of cyberwar might occur a meeting like that between the star-crossed bacteria of eons past; they will exchange code and produce something new. A more recent parallel might also be apt: In human wars, victors often killed the men and took the women as wives, thereby exchanging DNA and producing offspring that were amalgams of the warring peoples.
Even if the parent AI systems were appropriately patriotic, displaying loyalty to their home countries, the AI offspring would be ambivalent at best, entirely rogue at worst. The parents knew to treat enemy nationals as threats; the offspring might treat all humans as threats.
And the last words heard by the last human might be, “I’m sorry, Dave. I’m afraid I can’t do that.”
When a picosecond becomes too slow... I did chuckle at the open letter calling for a pause on AI development, supported by Musk and other industry heavyweights. Once this gate opened, it unleashed the next step in our possible evolution, or one step closer to our end. It cannot be closed, even if we tried. I currently work at a company (rhymes with "schmoogle") where generative AI is being heavily studied for application to many of its products. In short, it's an ethical approach to studying what AI can do. But for every ethical developer or scientist, there are probably ten times as many working to develop it for much more nefarious purposes. Digital worlds will collide through this technology, which very few people will have a grasp on. The rest of us will be the collateral damage.