Some years ago I found myself in an informal debate with Patrick Buchanan. The former communications director for Ronald Reagan and unsuccessful candidate for the Republican nomination for president had recently published a book about the founding of the American republic. So had I, and we were invited to be on the same panel at a history conference in Boston. Somehow the question arose of what would have happened had the American colonies not declared independence in 1776. Would things have turned out better or worse? Buchanan, a stout conservative, contended that things would have been much worse. For the sake of argument, I took the opposite view.
Buchanan was no slouch as a debater. When he wasn't holding or running for office, he earned his living as a talk show controversialist. On just about any other topic, he would have made mincemeat out of me. But I'm a professional historian, and I simply knew a lot more about history than he did. For every assertion he made, I could readily supply three or four counterexamples. No judges scored the debate. I'm not claiming I beat him. But what I took away from the experience was the observation that having information at your fingertips can sometimes make you look pretty smart.
Researchers in the field of intelligence distinguish between two kinds: fluid intelligence and crystallized intelligence. Fluid intelligence is generalized reasoning ability. Crystallized intelligence is mastery of particular information. Buchanan may or may not have had more fluid intelligence than I did, but I had more crystallized intelligence about the subject we were debating. And it made the difference that day.
Computers and search engines are really good at mimicking crystallized intelligence. A person with a smartphone today effectively knows more about nearly any topic than the most distinguished expert on that topic did a generation ago.
Fluid intelligence is harder to reproduce. Artificial intelligence programs are getting better at approximating it. Chatbots can answer all sorts of questions put to them. But it's difficult to know how much of their responses reflects something close to intelligence and how much is simply the command of facts the internet gives them. In my debate with Pat Buchanan, I might have seemed smart to lay folks in the audience, but experts in American history would have considered my performance pedestrian.
In certain respects, the evolution of artificial intelligence appears to parallel the evolution of human intelligence. The average person today is vastly better informed than the average person of ten thousand years ago. Per capita crystallized intelligence has grown by orders of magnitude. But there is not much evidence that fluid intelligence has increased dramatically. We can solve problems that come our way, but probably no better than our distant ancestors could solve the problems they had to deal with. Einstein knitted together space and time, but could he have navigated by stars and currents across the Pacific as the early Polynesians did? Trained chemists draw miracle substances out of a barrel of oil, but is that more remarkable than the work of the ancient metallurgists who discovered bronze and steel?
Some intelligence crystallizes in individual brains, but the great leaps forward in what we know have come through culture. Generations passed down what they learned to their children and grandchildren. First this was done orally, through legend and lore, religion and taboo. The invention of writing greatly increased what could be preserved, and the invention of movable type multiplied it many times again. The internet, which crystallizes sound and images as well as words, builds on this foundation.
Artificial intelligence is the next step. Thinking about AI helps us think about HI (human intelligence). When we speak of human geniuses, we often treat them as self-contained superstars of intellect. But in fact they are thoroughly embedded in the cultural intelligence our species has been crystallizing over millennia. The most ordinary person today, if transported back to the pre-scientific era, would be treated as a wizard for knowing things hidden from everyone else. Newton wasn’t kidding when he wrote, “If I have seen further, it is by standing on the shoulders of giants.”
So instead of thinking of intelligence as an individual attribute, we should think of it collectively, whether we’re considering humans or computers. Just as brains are made up of neurons linked electrochemically in certain configurations, our collective human intelligence consists of all the brains that have ever lived, linked culturally. Something similar is true of artificial intelligence. Transistors make up chips, which power computers, which are connected via the internet. When I ask a chatbot a question, it employs this whole array of resources to produce its answer.
Is artificial intelligence actually intelligence? This is the question many still ask, often skeptically. But the same question can be put to human intelligence. Intelligence is a trait best defined by what it does. It answers questions; it solves problems; it detects patterns. It applies insight (fluid intelligence) to information (crystallized intelligence). Machines are getting better and better at this.
Humans have often been reluctant to admit that other species can do what humans do. We long claimed a monopoly on tool-making. And language. And sentience. And culture. We’ve had to dial back these claims as we find other species behaving a lot more like us than our grandparents knew. Many people want to reserve intelligence as a label for what humans do when we think. Otherwise, they worry, we’re no better than machines.
Maybe we’re not. Indeed, why should we assume we are? Our brains are biochemical machines. They require energy to operate. They accept inputs and produce outputs. So far they are more sophisticated and powerful than computers, but this might not be the case much longer.
Should we feel diminished if our computers become better at solving problems than we are? Not at all. We designed and built them. Their success is our success.
Of course, we should be careful with them. We invented chain saws, which solve our tree-cutting problem, but we handle them with care lest we lose an arm or a leg.
Our grandparents denied that crows could count. Our parents couldn’t imagine that a computer would beat a chess grandmaster. I’m not bothered that they were proved wrong. Our children will take AI for granted, and will deem it simply another part of the collective intelligence our species has been constructing for tens of thousands of years.