Fear not the bot
One more screwdriver in the tool kit
In Herman Melville's day, businesses in cities like New York employed small armies of scriveners, as professional copyists were called. They had to write legibly, uniformly and swiftly. A person who could do that, even if as eccentric as Melville's Bartleby, could make a modest living.
There is no longer any such niche in the work world. Scriveners gave way to typists, who gave way to word processors (when that term denoted humans who entered words on computer keyboards), who disappeared when business executives and others learned to operate their own computers.
Garrison Hall at the University of Texas at Austin, the building in which I have my office, was built in the 1920s. My office and three others are situated off an anteroom where a secretary used to sit. The four faculty members who occupied the offices shared the services of the secretary, who would type up their handwritten manuscripts into typescripts for books and articles. The last such secretary disappeared before my time. All of us now type our own stuff.
When I started teaching history in the late 1970s, I took pains to ensure that my students mastered a certain factual base of historical information. They were examined on this information, and the examinations determined a substantial portion of their semester grade.
Then came computers and smartphones and Google and Wikipedia, until information became ubiquitous and free. I no longer teach information per se; I assume the students can summon it when they need to. Instead, I require them to show what they can do with the information: what arguments they can form, what hypotheses they can test. They would do poorly on an examination I gave in 1976. On the other hand, my students from 1976 would do poorly on the examinations I give today. I consider this a step forward, in that reasoning ability is more valuable than the mastery of mere information.
The latest tool for gathering and deploying information is the chatbot. Since its release last fall, ChatGPT by OpenAI has fueled incessant commentary about what it means for the workplace, for schools and for life in general. Schools and universities, including my own, have organized task forces to devise strategies for ensuring that students not use ChatGPT to do their homework or take their tests. An arms race has developed, with counterprogrammers selling software said to be able to detect the difference between an essay produced by AI and one produced by Al (or Barb or Chuck: this orthographical play works better in sans serif).
I'm not worried. If anything, I'm excited. I have long thought that the historical essay of the kind students are taught to write in high school advanced placement classes is an overrated genre. It's vaguely similar to op-ed pieces in newspapers, and has an even weaker resemblance to legal briefs in court cases. What the historical essay tests, primarily, is the ability to write historical essays. I only occasionally require students to write such essays; now that there is a temptation for students to use a chatbot to write them, I'll dispense with them entirely, with no regrets.
I will go in one of two directions. I might simply require less writing and give oral examinations instead. This is a closer approximation to the way most of the world beyond schools works. Members of the managerial class still communicate via email and text, but Zoom and the like are restoring the primacy of oral communications even among this group. And anyone making a case for an important initiative in the business world will be expected to be able to make the case orally. Beyond the business world, in daily life, oral communication has always been the default, and will remain so.
Or I will assign writing projects and simply raise the bar for giving grades. I will ask for true originality, and I will demand that students identify and verify their sources. I might well have to keep raising the bar over time. The chatbots of today are the first generation; they will certainly get better. ChatGPT appears to be programmed to fabricate what it doesn't know. In the very near future, the chatbots will indicate their degree of confidence in the assertions they make. They will include footnotes or the equivalent, allowing the user to track down the sources.
In other words, they will do what students are supposed to be doing today. Naive students today—and all of us start out as naive—don't know truth from falsehood on subjects they are researching. They read various sources, compare them against others, and make their best judgments as to what is true and what is not. A chatbot, suitably programmed, could easily do this.
Some historians have long been receiving similar service from bright research assistants. Judges have law clerks who do the same thing. Junior executives perform a like function for their bosses. The chatbots will level the playing field between those favored groups and the rest of us—in the same way that typewriters leveled the office desk between persons who wrote like engravers and those who scratched like chickens.
Some workers will be displaced, like Bartleby and the secretaries of Garrison Hall. Greater things will be asked of their heirs than were asked of them. But those heirs, being more productive, will be better compensated, and their work will be more interesting.
People long set in their ways will lament the change, as such people always do. Young people will wonder what the old fogeys are complaining about, as young people always do. And the world will move on, as the world always does.