In Herman Melville's day, businesses in cities like New York employed small armies of scriveners, as professional copyists were called. They had to write legibly, uniformly and swiftly. A person who could do that, even if as eccentric as Melville's Bartleby, could make a modest living.
There is no longer any such niche in the work world. Scriveners gave way to typists, who gave way to word processors (when that term denoted humans who entered words on computer keyboards), who disappeared when business executives and others learned to operate their own computers.
Garrison Hall at the University of Texas at Austin, the building in which I have my office, was built in the 1920s. My office and three others are situated off an anteroom where a secretary used to sit. The four faculty members who occupied the offices shared the services of the secretary, who would type up their handwritten manuscripts into typescripts for books and articles. The last such secretary disappeared before my time. All of us now type our own stuff.
When I started teaching history in the late 1970s, I took pains to ensure that my students mastered a certain factual base of historical information. They were examined on this information, and the examinations determined a substantial portion of their semester grade.
Then came computers and smartphones and Google and Wikipedia, until information became ubiquitous and free. I no longer teach information per se; I assume the students can summon it when they need to. Instead, I require them to show what they can do with the information: what arguments they can form, what hypotheses they can test. They would do poorly on an examination I gave in 1976. On the other hand, my students from 1976 would do poorly on the examinations I give today. I consider this a step forward, in that reasoning ability is more valuable than the mastery of mere information.
The latest tool for gathering and deploying information is the chatbot. Since its release last fall, ChatGPT by OpenAI has fueled incessant commentary about what it means for the workplace, for schools and for life in general. Schools and universities, including my own, have organized task forces to devise strategies for ensuring that students not use ChatGPT to do their homework or take their tests. An arms race has developed, with counterprogrammers selling software said to be able to detect the difference between an essay produced by AI and one produced by Al (or Barb or Chuck: this orthographical play works better in sans serif).
I'm not worried. If anything, I'm excited. I have long thought that the historical essay of the kind students are taught to write in high school advanced placement classes is an overrated genre. It's vaguely similar to op-ed pieces in newspapers, and has an even weaker resemblance to legal briefs in court cases. What the historical essay tests, primarily, is the ability to write historical essays. I only occasionally require students to write such essays; now that there is a temptation for students to use a chatbot to write them, I'll dispense with them entirely, with no regrets.
I will go in one of two directions. I might simply require less writing and give oral examinations instead. This is a closer approximation to the way most of the world beyond schools works. Members of the managerial class still communicate via email and text, but Zoom and the like are restoring the primacy of oral communication even among this group. And anyone making a case for an important initiative in the business world will be expected to be able to make the case orally. Beyond the business world, in daily life, oral communication has always been the default, and will remain so.
Or I will assign writing projects and simply raise the bar for giving grades. I will ask for true originality, and I will demand that students identify and verify their sources. I might well have to keep raising the bar progressively over time. The chatbots of today are the first generation; they will certainly get better. ChatGPT appears to be programmed to fabricate what it doesn't know. In the very near future, the chatbots will indicate their degree of confidence in the assertions they make. They will include footnotes or the equivalent, allowing the user to track down the sources.
In other words, they will do what students are supposed to be doing today. Naive students today—and all of us start out as naive—don't know truth from falsehood on subjects they are researching. They read various sources, compare them against others, and make their best judgments as to what is true and what is not. A chatbot, suitably programmed, could easily do this.
Some historians have long been receiving similar service from bright research assistants. Judges have law clerks who do the same thing. Junior executives perform a like function for their bosses. The chatbots will level the playing field between those favored groups and the rest of us—in the same way that typewriters leveled the office desk between persons who wrote like engravers and those who scratched like chickens.
Some workers will be displaced, like Bartleby and the secretaries of Garrison Hall. Greater things will be asked of their heirs than were asked of them. But those heirs, being more productive, will be better compensated, and their work will be more interesting.
People long set in their ways will lament the change, as such people always do. Young people will wonder what the old fogeys are complaining about, as young people always do. And the world will move on, as the world always does.
Quote: "reasoning ability is more valuable than the mastery of mere information."
SO TRUE AND WELL SAID!
What really frustrates me in today's social media debate landscape is the lack of analysis behind the deployment of information (or facts, if you will).
I regularly see assertions put forth such as "The Democrats were the party of slavery" (or Jim Crow) as if that fact from 1860 through 1964 is relevant to a discussion of current political trends.
As always, enjoying the insights of Mr. Brands.
“Fear not the bot”-ChatBots
A month after originally reading Prof. Brands' essay "Fear not the bot," I have identified only two uses for artificially intelligent chatbots that might be beneficial to mankind: 1) When one can remember and describe a concept but cannot remember the name of the concept, because at a certain age some of the memory's synaptic gaps are wider than the River Styx such that the neuron-to-neuron signaling process is slower than it used to be, one can describe the concept to the chatbot and ask that it recall the name. 2) When one is trying to write a piece on a subject but is stuck, afflicted with writer's block, just typing a subject on the chatbot's input line and watching the bot respond with apparent rapid ease, whether the content is accurate or not (and it often is not), seems to liberate the mind, giving a real author the ability to get started with putting ideas to paper, the originality, accuracy and veracity of which real authors attest to with their own reputations on the line each time they write. Perhaps other uses will come to mind later, before playing with the service becomes just a waste of time.
But because the user cannot tell the sources from which the chatbot gathers its information, it bestows upon the user a sense of anonymity, much as does any chatroom or other form of "social media" in which the participants are removed from the real people with whom they are communicating. I submit this is a false sense of power, stripping the user of inhibitions, spurring the user to do things that they would otherwise not think of doing. And therein lies its danger. It divorces one's actions from a sense of one's accountability. What would Camus say about that?
By analogy, another danger may be looming on the horizon. Launched by the United States Department of Defense in 1973 and given to mankind in 1983 by Executive Order of President Ronald Reagan, the United States Global Positioning System (GPS) has been received and used as a substitute for remembering how to get to places, including home, office, school and grocery store. The National Institutes of Health questioned its benefit to mankind when it noted in an article published in 2021 that GPS navigation was "commonplace in everyday life. While it has the capacity to make our lives easier, it is often used to automate functions that were once exclusively performed by our brain. Staying mentally active is key to healthy brain aging. Therefore, is GPS navigation causing more harm than good?" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8032695/ (retrieved March 17, 2023). Earlier, Nature, in an April 14, 2020 article, had suggested that "habitual use of GPS negatively impacts spatial memory during self-guided navigation... which critically relies on the hippocampus." After assessing "... the lifetime GPS experience of 50 regular drivers as well as various facets of spatial memory, including spatial memory strategy use, cognitive mapping, and landmark encoding using a virtual navigation task ...," the authors concluded that "those who used GPS more did not do so because they felt they had a poor sense of direction, suggesting that extensive GPS use led to a decline in spatial memory rather than the other way around." Dahmani, Louisa, and Bohbot, Véronique D., Nature, "Scientific Reports," https://www.nature.com/articles/s41598-020-62877-0 (retrieved March 17, 2023).
So maybe my suggestion number 1 above, as to the usefulness of artificially intelligent chatbots for recalling the names of concepts, may be more detrimental than beneficial. And maybe we should all be mindful of the words of the Hippocratic Oath: "first, do no harm." (Or was the quote not from his "Oath," but from his "Of the Epidemics"? I forget. Possibly it takes only a month of using chatbots to dull the brain, or at least mine.)