Does a computer have free will? Most people would say no.
Does a computer program have free will? Most people say no, and programmers definitely hope not. Computers and their programs are supposed to do as they are told. They’re not supposed to freelance. A rogue computer is the nightmare of science fiction.
Moreover, free will implies sentience. There’s no evidence yet that computers and their software have achieved sentience. There are practical and philosophical reasons to think they never will.
Yet consider large language models. Roughly speaking, LLMs take an input, typically a string of text in the form of a question or instruction. Drawing on patterns learned from their training material (the vast body of text used to build the model at the heart of an LLM), they generate the word most likely to follow the input string. They append that word to the input string, then run the same procedure on the slightly enlarged string to get the next word. They add that word to the string and repeat. And so on.
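Here is a toy sketch of that loop in Python. Nothing in it comes from any real model; the lookup table and the `next_word` function are invented stand-ins for the billions of learned parameters that do this job in an actual LLM:

```python
# A toy sketch of the loop described above. The lookup table
# and next_word are invented stand-ins for the learned
# parameters that do this job in a real LLM.
toy_model = {
    "the cat": "sat",
    "cat sat": "on",
    "sat on": "the",
    "on the": "mat",
}

def next_word(text: str) -> str | None:
    """Return the most likely next word, given the last two words."""
    last_two = " ".join(text.split()[-2:])
    return toy_model.get(last_two)

prompt = "the cat"
for _ in range(4):
    word = next_word(prompt)
    if word is None:
        break
    prompt += " " + word   # append the word, then run again

print(prompt)  # -> the cat sat on the mat
```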
Free will doesn’t figure in any of this. In fact, if you start with the same input string and apply the same model, you’ll get the same output.
Unless the LLM includes a kind of wild-card function, as most do. When this function is turned on, it injects a certain randomness into the operation of the LLM. It might be as simple as saying that after a certain string, Word X should follow 80 percent of the time and Word Y the other 20. Computers have random number generators that can make something happen 80 percent of the time and something else the rest of the time.
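In code, that 80/20 choice is only a few lines. This is a sketch; the words and the weights are invented:

```python
import random

# Sketch of the wild-card step: after some string, emit
# Word X 80 percent of the time and Word Y the other 20.
def wild_card(rng: random.Random) -> str:
    return rng.choices(["X", "Y"], weights=[0.8, 0.2])[0]

rng = random.Random()   # unseeded, so each run draws fresh luck
print(wild_card(rng))   # "X" on roughly four draws out of five
```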
The probabilities themselves can be shaped by the prompt. If the answer is supposed to mimic the speech patterns of Winston Churchill, the LLM will know that Churchill used an adverb with the main verb in a sentence 10 percent of the time, for instance, and dispensed with adverbs the other 90. Or if the prompt involves weather, the LLM knows that Minneapolis receives snow on 15 percent of January days. You get the idea.
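The sampling step stays the same; only the probability table changes with the prompt. A sketch, reusing the made-up numbers above:

```python
import random

# Sketch: the distribution depends on the prompt. The
# percentages are the invented ones from the examples above.
tables = {
    "churchill": {"no adverb": 0.90, "adverb": 0.10},
    "weather":   {"no snow": 0.85, "snow": 0.15},
}

def sample(topic: str, rng: random.Random) -> str:
    dist = tables[topic]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

rng = random.Random()
print(sample("churchill", rng))  # "adverb" about once in ten draws
print(sample("weather", rng))    # "snow" on about 15 percent of draws
```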
Again, no free will. But less predictability than before. And less apparent determinism.
Now consider a human life. At conception an embryo’s inputs are exclusively genetic: the DNA from the mother and father. But new inputs start accumulating at once. The state of the mother’s health while she’s carrying the fetus becomes part of its history. At birth the inputs accelerate, and they continue to multiply as the child grows into an adult. They don't stop until the person dies.
As part of the maturation process, the child begins to manifest what looks like free will. The child likes applesauce but hates peas. The child is agreeable one day and cranky the next. The child learns to say no, and eventually to lie.
But is this really free will? Is it any freer than the output of the LLM? Is the child, and then the adult, on any given day anything more than the sum of all the previous days back to conception? The randomness that was programmed into the LLM is supplied, for a human, by the environment. And even if the randomness of the world is only apparent, the underlying determinism is so complex as to render it inaccessible to any individual intelligence. (In the same way, computer-generated randomness is only apparent, but the patterns are so complex as to defy prediction.)
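The parenthetical is easy to demonstrate. A computer's "random" numbers come from a deterministic algorithm; hand it the same seed and it replays the identical stream:

```python
import random

# Pseudorandomness is deterministic underneath: the same
# seed always reproduces the same "random" sequence.
a = random.Random(42)
b = random.Random(42)
print([round(a.random(), 3) for _ in range(3)])
print([round(b.random(), 3) for _ in range(3)])  # identical to the above
```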
Those who believe in a divinely inspired soul would say that of course human will is freer than the output of an LLM. Nonbelievers might be less confident in answering. Persons with a taste for the philosophical principle of Occam’s Razor—don’t complicate explanations unnecessarily—might observe that because the LLM model of human development doesn’t require a soul as part of the explanation, the concept of soul can be dispensed with. And free will with it.
When discussion of a natural phenomenon reaches a fork between belief and nonbelief, there's not much more to be said. And when such a discussion hinges on taste—well, there's no accounting for taste.
But you could program it into the LLM. With a twist of randomness, of course.
"A rogue computer is the nightmare of science fiction." Two examples: In the movie _Westworld_, the robots eventually become sentient and turn on humans. The same thing occurs in Ambrose Bierce's science fiction story "Moxon's Master."