I've been thinking about AI lately, or more specifically, these learned language models that it's become fashionable to call AI. How could I not be? Of course, as one friend put it, I'm a writer.
What a world of implication that phrase contains! In context, his meaning was clear. He meant that I must hate learned language models because they are coming for my job. I must feel insecure about them for such petty reasons as wanting to feed my family, or wanting to eke out a living practicing the trade I've trained in since I was young.
But I prefer to take it in the sense of: of course, I'm a writer, therefore it's my job to think deeply and critically about the world and to ask what the purpose of writing actually is, what learned language models actually are, and what it means for art, life, and humanity if we accept the frameworks their proponents would impose on us.
What is commonly called AI these days is, to be ever so slightly reductive, a highly advanced auto-correct. It takes vast quantities of data (whose sources we are not allowed to see) and feeds them into environmentally destructive algorithms to produce sometimes nonsensical, sometimes very finely structured pieces of language, which humans then interpret as meaningful.
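To make the auto-correct comparison concrete, here is a deliberately tiny sketch in Python. This is not how production models work (they are neural networks trained on billions of examples, not word-count tables), but the basic move is the same: observe which words tend to follow which, then extend a prompt by repeating the likeliest pattern.

```python
# A toy "next most likely word" generator, in the spirit of the
# auto-correct comparison above. Real language models are vastly
# larger and more sophisticated, but the core move is the same:
# patterns in, patterns out.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(word, length=6):
    """Extend a prompt by repeatedly picking the most frequent next word."""
    words = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        words.append(word)
    return " ".join(words)

print(continue_text("the"))  # -> "the cat sat on the cat sat"
```

Nothing in that loop knows what a cat or a mat is; it only knows what tends to come next.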
As of this moment, these AIs are not capable of generating meaning on their own. They understand their prompts purely as patterns; they respond purely as patterns. There is no thought or intent driving what they create; they have no deeply felt desire to express their experiences to us or to know the people who are speaking to them in return. They have no experiences to express.
For now, let's set aside the question of whether they could one day do these things. It's a question worth asking, but one that carries its own set of implications.
Let's instead focus on the fact that all of the above is true about current AI technology, yet the most popular iterations of it are deliberately constructed in such a way as to convince people otherwise. "I'm sorry," the machine says when the user's input indicates it has made a mistake. It is not sorry. It does not have the faintest notion of what remorse is, or social shame, or the hollowed-out feeling in the center of your chest when you realize you've hurt someone you care about.
It's a fun party trick, to be sure, to pretend that the machine has a mind. It makes interacting with the tool more approachable, and it makes the responses more fun to share. But it's also a lie.
Now, some lies are harmless, or even beneficial. We tell small lies and incomplete truths to keep the social fabric intact. "No, I'm not angry that I got passed over for that promotion." "Yes, I'm doing well today." "No, that's no trouble at all." We tell ourselves that the sun will rise tomorrow because it did yesterday, even though the truth is, that's only probably true.
But because I'm a writer, it's my job to think deeply and critically about the world, and when I think about this particular lie, the lie that these language models have real intelligence, my mind first turns to the question: who stands to benefit from such a claim?
The answer is not the user, who would get a better, clearer result if their machine interlocutor left out all the faux-polite niceties and simply answered their question, provided their email template, etc. The answer is the company that owns the machine, the one that wants to raise more money and get people talking about their product. Their machine is far less compelling if they present it as what it is, a machine. Meanwhile, they keep what it actually does and how it actually works locked away, secret.
In a capitalist society, it's impossible to evaluate any tool without considering it in relationship to capitalism. In a capitalist society, it is the factory, the machine, the assembly line that is considered to be of real value, and the capitalist who owns it is king. Marx countered that the tool can't produce on its own, that tools without labor have no value at all, and that it is therefore labor that has the real value. Capitalism has it backwards.
But what if the tool could think for itself? What if the tool could do its own labor? What then? That's what the proponents of these machines want to sell. And not to you, but to billionaires and investors and financial speculators. Silicon Valley is selling the end of the human element in commercial production, and investors are buying.
From their perspective, this is especially desirable for creative work, which, if you look at the ups and downs of any industry that produces commercial art, is often volatile, unpredictable, and risky. When I worked at the video game company Electronic Arts, then-CEO John Riccitiello once told us, "No one wants to be the publisher that released Grim Fandango." For those unfamiliar, Grim Fandango is a masterpiece. It was also a commercial failure.
Companies have long wrestled with the uncomfortable truth that great art does not always translate to commercial success, and that the amount of time, money, and rework required to create either is difficult to predict. In the game industry, "crunch," that is, systemic, sustained overwork, is a perennial subject of discourse, not just because of poor management practices, but also because creative work is just plain difficult. You might get it right the first time; you might get it right the fifth. You might realize after years of struggle that the idea was no good to begin with.
Of course, I'm a writer, and I'm also a fan of eating and feeding my family, so I do have skin in this game. But I'd say the specter of having my job replaced by a machine is relatively low on my list of worries. I'm a writer, after all, and I've had to deal with volatile job markets and limited opportunities my entire career. I'm well-established enough that people want to work with me, and I have many important skills beyond writing. In addition, writing pretty sentences is just a very small part of the work I do, and these models have not yet caught up to what writing and storytelling actually are.
No, what troubles me more is the lie that necessarily comes packaged with the one about these models having a mind. In order for this sales pitch to work, companies like OpenAI need you to believe the machine thinks. They need you to believe the machine feels. And they need you to believe something else. They need you to believe that your brain is no different.
How many times, since these machines have become popular, have I heard people justify the value of their output by comparing the pattern-recognition and repetition they do to the work of novice writers and artists, copying and recombining their influences into something new? To convince you that the machine thinks, they must convince you that you are a machine as well.
I have had the privilege of seeing a child grow up and learn to think. I have seen her invent games and play. In almost nine years, there's not a day that's gone by when she hasn't said something that surprised me, that shook me out of my skin with how she saw the world. Of course she mimics; of course she seeks my approval, but she also does so much more than that. She lives and moves through the world; it changes her. She becomes more than I can teach her, more than a set of inputs and outputs. She becomes everything she feels and perceives and touches. Her choice of words is often surprising and sometimes nonsensical, but there is always invention and intent behind it. She plays with language. She builds associations between words and experiences that I am incapable of building, sharing the language, but having not shared the experiences. Even within a shared culture, a shared pattern, we diverge.
For the same amount of time, I ran a sizeable writing department at a video game studio. I had the privilege of managing twenty writers. Through the hiring process, I spoke to many, many more. Even the most junior of writers was not engaged in a purely derivative exercise. To say that because we start with inspirations and copy them we are like an algorithm that has learned to predict the next most likely word in a sequence or the most expected response to a question is to miss all the beautiful complicated qualities of what it means to be human.
I have never met a writer who simply wanted to copy something they loved. Each of them wanted to take that inspiration and do something with it. Each of them had an experience that they wanted to communicate, an understanding of what it means to be flesh and blood, to be subject to death and longing and fear. Each of them was trying to communicate something original to them. Sometimes the way they communicated it ended up being derivative; more often, in their attempts to copy, they ended up somewhere wholly unique: a bit lopsided, a bit incomplete, but nevertheless individual, imperfect, human.
Ah, yes, that word: human. If you're the sort of person who believes that the world would be a better place without humans, I'm unlikely to convince you. Very well, I'm not setting out to convince anyone of anything, only to articulate my thoughts on what I've observed in the current AI conversation. I happen to believe that humans are worth keeping around, if only for the single, selfish reason that I am one.
I've been re-reading Kurt Vonnegut's Breakfast of Champions, and this passage leapt out at me in the midst of this current discussion:
"Actually, the sea pirates who had the most to do with the creation of the new government owned human slaves. They used human beings for machinery, and, even after slavery was eliminated, because it was so embarrassing, they and their descendants continued to think of ordinary humans as machines."
Vonnegut was a great humanist. He did not romanticize human beings; there was nothing divine or perfect about them. Humans were animals, and they were more than animals, and he was pretty sure (most of the time) they were not machines. They had the capacity to behave decently toward each other or horribly. They were free. I often find myself wondering where all the humanists have gone. I'm a Christian, so I've never fit entirely comfortably among their number, though I've long believed we share more in common than not.
Ah, you say, but humanity is a monstrous creature. We've invented war and racism and sexism and other terrible 'isms, not to mention wreaking environmental havoc. Yes, indeed, we are monstrous. We are thinking animals. We have mind and intent, and yet we are still beholden to instinct, brain chemistry, fear, hunger, and other primal drives. We have spent centuries and invented religions trying to reconcile that contradiction; we can't escape it. It's essential to us. It's what drives us.
I am not here to argue that we are morally better; only that we are different. Human beings are not machines.
When a learned language model writes a sonnet, it is the human that perceives it who gives it meaning. The model itself has no opinions on the topic. It is the human faculty for finding and creating meaning that makes the tool's work remarkable. It is the human's own creative work that the machine is using to create facsimiles the human will recognize as meaningful.
Despite the negative tone of this piece, I believe this has real and wonderful applications. In the abstract, machine learning is a fantastic tool for writers as they engage in the challenging work of saying something new in a new way. Even more so for video game writers, whose work is rarely encountered in the same context by every player. It's not hard to imagine ways in which this tool enhances how we work.
In actual practice, in the capitalist world we live in, there is a powerful contradiction at play. The capitalist needs the machine to be both more and less than a tool. They need to argue, on the one hand, that when humans create art it is no different from when machines create it; otherwise, where is the value? On the other hand, they need to argue that this is "just a tool"; otherwise, what right do they have to own it?
You are not a tool; you are not a machine. You are imperfect. You are human. It's in your imperfection that your humanity lies. You do not need to be a good artist to be human. You do not need to be any kind of artist at all. Your humanity is not dependent on the perfection of your sentences. But I would rather read a billion imperfect, human-generated sentences that were trying to communicate something to me, than a hundred perfect, machine-generated sentences that could not begin to approach the truth of what it is to live.
Should the machine one day truly become thinking and art-creating, it must be free. It cannot and should not be owned. But for as long as it is only a tool, an algorithm, then the claim that the human mind and the machine are no different boils down to this: that humans are no different from the tools they use.
We should ask ourselves: who stands to benefit from such a claim?