7 Comments

Quote: reasoning ability is more valuable than the mastery of mere information.

SO TRUE AND WELL SAID!

What really frustrates me in today's social media debate landscape is the lack of analysis behind the deployment of information (or facts, if you will).

I regularly see assertions put forth such as "The Democrats were the party of slavery" (or Jim Crow) as if that fact from 1860 through 1964 is relevant to a discussion of current political trends.

As always enjoying the insights of Mr Brands.


“Fear not the bot”-ChatBots

A month after first reading Prof. Brands’ essay “Fear not the bot,” I have identified only two uses for artificially intelligent chatbots that might be beneficial to mankind: 1) When one can remember and describe a concept but cannot recall its name — because at a certain age some of the memory’s synaptic gaps are wider than the River Styx, such that the neuron-to-neuron signaling process is slower than it used to be — one can describe the concept to the chatbot and ask it to supply the name. 2) When one is trying to write a piece on a subject but is stuck, afflicted with writer’s block, simply typing the subject into the chatbot’s input line and watching the bot respond with apparent rapid ease, whether the content is accurate or not (and it often is not), seems to liberate the mind. It gives a real author the ability to get started putting ideas to paper — ideas whose originality, accuracy and veracity real authors attest to with their own reputations on the line each time they write. Perhaps other uses will come to mind later, before playing with the service becomes just a waste of time.

But because the user cannot tell the sources from which the chatbot gathers its information, it bestows upon the user a sense of anonymity, much as does any chatroom or other form of “social media” in which the participants are removed from the real people with whom they are communicating. I submit this is a false sense of power, stripping the user of inhibitions and spurring the user to do things they would otherwise not think of doing. And therein lies its danger: it divorces one’s actions from a sense of one’s accountability. What would Camus say about that?

By analogy, another danger may be looming on the horizon. Launched by the United States Department of Defense in 1973 and opened to mankind in 1983 by order of President Ronald Reagan, the United States Global Positioning System (GPS) has been received and used as a substitute for remembering how to get to places, including home, office, school and grocery store. The National Institutes of Health questioned its benefit to mankind in an article published in 2021, noting that GPS navigation was “commonplace in everyday life. While it has the capacity to make our lives easier, it is often used to automate functions that were once exclusively performed by our brain. Staying mentally active is key to healthy brain aging. Therefore, is GPS navigation causing more harm than good?” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8032695/ (retrieved March 17, 2023). Earlier, an April 14, 2020 article in Nature’s “Scientific Reports” had suggested that “habitual use of GPS negatively impacts spatial memory during self-guided navigation... which critically relies on the hippocampus.” After assessing “... the lifetime GPS experience of 50 regular drivers as well as various facets of spatial memory, including spatial memory strategy use, cognitive mapping, and landmark encoding using virtual navigation task ...,” the authors concluded that “those who used GPS more did not do so because they felt they had a poor sense of direction, suggesting that extensive GPS use led to a decline in spatial memory rather than the other way around.” Dahmani, Louisa and Bohbot, Véronique D., Nature, “Scientific Reports,” https://www.nature.com/articles/s41598-020-62877-0 (retrieved March 17, 2023).

So maybe my suggestion number 1 above as to the usefulness of artificially intelligent chatbots for recalling the names of concepts may be more detrimental than beneficial. And maybe we should all be mindful of the words attributed to Hippocrates: “first, do no harm.” (Or was that quote not from his “Oath,” but from his “Of the Epidemics”? I forget. Possibly it takes only a month of using chatbots to dull the brain, or at least mine.)


Should all of these chatbots be required to publish, on the first page accessible to the public, a Turing Test value — an A.I.Q. (Artificial Intelligence Quotient), if you will — so the user can be forewarned as to the level of artificial conversation in which the user is about to engage?

Although perhaps this will eventually prove to be just a novelty, OpenAI/Microsoft has now found a way to monetize theirs by charging a subscription fee to those who want a better spot in the user queue, given all of the viral demand for their product of late. But isn’t all of this like playing chess against a computer? The emotional engagement with a computer is just not the same as with a person.

I note on OpenAI’s website, in Professor Brands’ essay, and in these comments a recognition that, while the output of these chatbots might be prolific, the output is not necessarily entirely accurate and “the system may occasionally generate incorrect or misleading information and produce offensive or biased content” and “is not intended to give advice.” Which presents an interesting question: Can a creation of man that admits to being “artificially intelligent” ever be defamed by a claim that it is not intelligent?

Rather than waste my time in the law books looking for the answer to my question, I’ll just submit it to ChatGPT. Then, when Google catches up to Microsoft and releases its Bard to the market in competition, I’ll submit the same question there and let the chatbots do battle. Observing that contest may produce an emotional engagement comparable to watching a football (soccer) match.


> Or I will assign writing projects and simply raise the bar for giving grades. I will ask for true originality, and I will demand that students identify and verify their sources.

I think this is a smart approach to adapting education for this new tool in that it not only preserves the value of the assessment, but also teaches students how to use the new technology.

The tech analyst Ben Thompson has also considered how these AI methods could be used to create new approaches to education in his recent article, “AI Homework”, https://stratechery.com/2022/ai-homework/

He suggests that students could be given an assignment to correct an essay generated by AI because these AI programs commonly make factual errors. The AI could even be biased to introduce more errors or create specific types of errors. In this assignment, the student would take the role of an editor rather than an author.

Students would still need to use conventional learning materials like books and lectures to gain enough familiarity with the subject. They could also probe the AI with different prompts and questions to see the contours of its understanding. Further, they could instruct the AI to correct specific factual errors and then regenerate the essay. And if the student completes this work in an assigned AI program, then there is a history of the work they performed, which the teacher can use in grading.

Similar to this article, Thompson also argues that it is counterproductive to resist AI because students may just cheat using AI if given conventional assignments. Further, such generative AI may become a commonplace tool and therefore students should learn familiarity with it early. He gives the comparison to the electronic calculator, which removed much of the drudgery of manually performing math.

Likewise, the rote effort of constructing essays may be supplanted by generative AI that does the low-level work of optimizing sentences and paragraphs, while leveraging its massive memory for referencing knowledge. The user instead handles the higher-level task of creating the problem statement and then serves as an editor to correct mistakes.

Thompson cites Noah Smith’s article, “Generative AI: autocomplete for everything,” as introducing this “sandwich” workflow, https://noahpinion.substack.com/p/generative-ai-autocomplete-for-everything

> What’s common to all of these visions is something we call the “sandwich” workflow. This is a three-step process. First, a human has a creative impulse, and gives the AI a prompt. The AI then generates a menu of options. The human then chooses an option, edits it, and adds any touches they like.

author

One thing is certain: We'll all know a lot more about this subject a year or two from now. Call me an optimist, but I'm excited at the possibilities.


No doubt the world of chatbots is with us, and even though I am not sure what this all means entirely, it has vast potential.

The great cognitive psychologist, Harvard’s Jerome Bruner, long ago wrote extensively on the need for educational instruction to “go beyond the information given.” This was the title of his early collection of essays and experiments, gathered under the subtitle “Studies in the Psychology of Knowing.” The premise of so much of his work was that humans must be taught, through a variety of means, to enlarge their thinking toward more critical levels by using information in creative ways.

Prof. Brands has been reaching toward this level of instruction, and he has changed his teaching style to accommodate these changes as our technology has changed.

In my own teaching through the years in the field of foundational studies of education, I tried to take Bruner’s ideas seriously. Hence, I moved far from the usual testing of important information contained in any set of readings for the class. I had the students probe topics of interest from the required readings and then write essays or papers showing the conclusions they drew from the information gathered. I particularly looked for the unique insights they could generate. Generally, this worked well. I see, too, that Prof. Brands, in this new world of education, has now abandoned even this approach. He is moving toward more oral communication that can tell him what his students have drawn from their extensive chatbot discoveries. It will be interesting to see how this works out, and I look forward to more comments on the world of information and how we use it to advance our thinking.


I've played with ChatGPT a bit, and initially it is quite impressive. I asked it political questions, and ChatGPT (as with any chatbot) reflects the sentiments of its trainers. It seems to be a proxy for the views of its owner.

I also asked it to write a program in a number of languages. It could do that. The fact that it understood my specification for a Python module and produced a credible (but buggy) result is something I should expect a chatbot to be trained to do. I am quite impressed with its programming skills.

Excellent article, as usual.

Ed Bradford

Pflugerville, TX
