Recreating Creative Writing (part 5 of way too many)

What happens to Creative Writing in a ChatGPT world?

The meaning of both “writing” and “creative” may have just changed.

The word “writing” used to mean something like: “selecting and assembling words into sentences.”

Daniel Dennett (a philosopher and cognitive scientist) was interviewed in the New York Times last week. He described writing this way:

“When you are choosing the words that come out of your mouth, slight subliminal differences in the emotional tone of one word over another, that’s what’s going to decide which word you use.”

The day before, I came across an essay by Alan Knowles about LLMs (large language models) like ChatGPT:

“they use a statistical model to predict the probability of tokens (output) occurring after a given sequence of tokens (input). In other words, after an LLM sees some words, it predicts which words will come next based on patterns learned in its initial training.”
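To make Knowles's description concrete, here is a toy sketch (my own illustration, not anything from his essay or from OpenAI): a tiny bigram model that "predicts which words will come next" purely by counting which word follows which in its training text. A real LLM does this with billions of parameters and subword tokens rather than raw word counts, but the underlying idea — probable continuations learned from patterns in training data — is the same.

```python
from collections import Counter, defaultdict

# A miniature "training corpus." An actual LLM trains on trillions of tokens.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count, for each word, which words follow it in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, p = predict_next("the")
print(word, round(p, 2))  # "cat" follows "the" more often than any other word
```

Note that the model's best guess is, by construction, the most statistically probable continuation — which is exactly why, as discussed below, unedited LLM output tends toward cliché.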

In our suddenly LLM-world, “writing” no longer (necessarily) involves “selecting and assembling words.” Software can do that now. The human part of writing is now focused on crafting questions (to ask the software), revising (the output), and endorsing (the words become your words when you accept them as your words).

Asking a sequence of guiding questions and choosing the best results — that used to be called “editing,” specifically “developmental editing,” what a psychotically hands-on editor might do with a psychotically malleable author’s most impossibly primordial work-in-progress. Though in this case, the author is ego-free, literally brainless, and the editor is a tyrant. It’s as if Shakespeare had an army of monkeys battering away at typewriters, and he just had to read over all of their shoulders, waiting for Hamlet to rattle out. Only this monkey army has been trained (the “P” in GPT stands for “pre-trained”) and so doesn’t require an infinite number of attempts.

Or, better, think of a director working with actors improvising lines and scenes that eventually get finalized into a script. The actors are coming up with the lines, but only as shaped and nudged and assembled by the director. The actors just keep trying different things, until the director sees something that clicks.

That’s how I felt when I coaxed a short story out of ChatGPT earlier this summer (that story is here, and my earlier attempts are here, here, and here). It was fun. It was also not devoid of creativity, but I’m really not sure the process (or the results) should be called “creative writing.”

Either way, the revising stage has become significantly more central. Keep this key difference in mind: ChatGPT does not experience subliminal differences in emotional tones, and you don’t run statistical models that predict the probability of words occurring.

The word for “probable words” is “cliché.” Writers, especially of fiction and poetry, eliminate them. ChatGPT is designed to produce them.

The word for “subliminal differences in emotional tones” is “connotations.” Writers, especially of fiction and poetry, weigh them carefully. ChatGPT doesn’t know they exist.

To be fair, ChatGPT doesn’t know that anything exists. But its probable word arrangements can seem uncanny at times, producing meanings (when read by actual readers) that are improbably accurate.

That particular kind of accuracy doesn’t matter as much in fiction writing. While Chat is notorious for “hallucinating” textual evidence when producing literary analysis, fiction writing is entirely hallucination. You’re required to make things up.

You’re also required to produce connotatively rich, cliché-free prose.

Or rather, that’s a new requirement for creative writing courses in an LLM-world. You now have to write better than ChatGPT. Rather than providing an undetectable method for cheating, the software just raised the bar above its own level of mediocre prose.

I see lots of “process sentences” and “placeholders” in first drafts (my students’, my own, other professional writers’). They’re a kind of “note to self” about facts you need to keep straight and intended effects that you can’t achieve on a first round.

ChatGPT produces mostly placeholders, most of them dull generalizations. The “writer’s” job is to make those sentences better, and so in that process make them their own.

I call that “endorsing,” which may seem like a new final step for writing, but anyone who has collaborated with a (presumably human) co-author has probably experienced it.

When I “write” with Nathaniel Goldberg (we just turned in our third book together last month), we each draft text and share it with the other. Often we make multiple changes, bouncing material back and forth until we both have trouble remembering who initiated the original version of a given sentence or paragraph or subsection. Sometimes we make no changes, accepting something that the other drafted as now also our own. That’s the moment when we become an author of those words. It doesn’t matter that we didn’t draft them.

ChatGPT is similar, with one key difference: it’s not a co-author. It can’t be. It can’t read. It’s sorting word objects, not word meanings.

Still, ChatGPT is fulfilling the function of a co-author, which disrupts the traditional meaning of “writing.”

It might also mean the death of writer’s block. The blank page may be gone forever. First drafts are now effortless, and so the effort of writing leaps to revision.

Which creative writing classes were always about anyway.

Published on September 04, 2023 03:30
Chris Gavaler's Blog