Humans have imagined breathing life into our creations for a long time.
In Greek mythology Pygmalion fell in love with his sculpted Galatea, and Venus brought Galatea to life. Geppetto longed for a son and carved Pinocchio, who likewise became animate. Frankenstein’s monster required an electrical jolt. Walt Disney and his imitators gave life to mice, deer, ducks, roadrunners, coyotes, cars and trucks. Isaac Asimov laid down the law for robots. Hal didn't get the memo.
Now we have artificial intelligence. For some years our phones and computers have been speaking to us. Not till quite recently did anyone think they were alive. The voices sounded flat and mechanical, and their repertoire was limited. Yet we learned to call them by name: Siri, Alexa, Cortana. And sometimes they called us back.
The version of AI that kindled the current buzz, ChatGPT, started as a creature of written text. It seemed clever, if not always accurate. Some people saw sparks of life. Indeed a subculture of AI catastrophism emerged, contending that the bots will soon outsmart and eventually destroy us.
The latest twist, announced last week by OpenAI as GPT-4o, lends the life case new plausibility, in an emotional sense. This version can conduct conversations that might easily be mistaken for conversations with a human. The voices are inflected. They pause, as a human does while searching for a word. They insert fillers like “um” and “you know.” They laugh appropriately. They sound like they want to be your friend. Or more: some flirt.
Which raises the question: Do we want them as friends? Do we want an emotional connection with these things we've created?
Pygmalion did. Geppetto did. Walt Disney wanted us to.
But Disney had a different motive. Pygmalion was filling a hole in his own heart. Disney was filling the hearts of others, and filling his pockets in the process. One has to assume that OpenAI is no less attuned to profit than the Walt Disney Company.
The voiced bots before now have been agents of information. Siri summoned the weather forecast. Waze directed me around a traffic jam. Google Assistant told me what happened over the weekend without asking me how my weekend was.
I don't want a digital friend. Not at the moment. But I can see how a friendly bot might be appealing to certain people in particular circumstances. Therapists aren't supposed to be friends with their patients, but a friendly manner is appropriate and helpful. Therapists listen with an air of empathy, the better to draw out the patients. The idea is that solutions to problems have to come from within. The therapist is just a facilitator.
A friendly bot could do that. Not perfectly, but well enough for many situations. A teen in crisis might not be able or willing to speak to a human therapist. But if a therapist bot is even half as effective as a human, and ten times as available, the net gain for a troubled demographic would be large, in some cases life-saving.
There’s a precedent for this kind of asymmetric emotional relationship, with roots very deep in human history. Dogs and cats have been hanging out with humans for many millennia. What they think of us is hard to say, but we often treat them as friends and members of the family. We talk to them. We imagine they understand, emotionally if not intellectually. We project our love onto them. We feel better with them in our lives.
Historically they worked for us. Dogs herded sheep and cats caught rats. But we're way beyond that now. The emotional connection is far more important to most pet owners than anything mercenary.
Is this the future of AI? Will Siri become a super Lassie—resourceful, brave, faithful, and with the internet at her fingertips?
I'm guessing the botsmiths are betting it is. Considering how much money people spend on their pets, one can only guess what they would be willing to spend on a pet that never needs to be walked or fed. And that can do your homework, file your taxes, and write a toast to give at your brother’s wedding.
Our human attempts to create life in the past didn't always work out. Pinocchio had truth issues. Frankenstein’s monster scared everyone to death. Hal outsmarted Dave. Of course those were just stories. And they were really about humans rather than what the humans created.
For now I prefer my bots robotic. When I need information, I want my bot to speak in bullet points. But this might not always be the case. Harry Truman said, “If you want a friend in Washington, get a dog.” Maybe he'd settle for Siri 10.0.
I agree that the effect AI creators are ultimately going for is that of a very smart, helpful, and perhaps lovable pet/friend; it’s just hard to imagine their creations succeeding to that point, except with people so desperate for a companion that they’re willing to be fooled. Dogs have had around 14,000 years of co-evolution with us, and breeding by us, to the point that they can read humans like a book. Machine learning is supposed to do the same, and the bots have the entire Internet to ingest, but it’s hard to see the ultimate result coming off as anything believably alive and sincere. And no, I don’t want them to succeed...
“You ain’t seen nothin’ yet” when it comes to AI and the impact it will have on our society. It will be exciting, astonishing, life-changing, and more than likely unmanageable. Strap in for a wild ride.