An April 2023 warm take on AI art
Okay, so I think best when I write, and it helps me sort out the details and the knife-edge decisions about what I believe. That means this is the best medium I know for working through something that has been rocking the indie world (and, indeed, the rest of it, too) for the past few months.
How far back does AI go? I had a fight with author-friends last fall over whether AI was capable of making authentic-looking, 3-D human faces based on a finite dataset of pictures taken of human beings. Given what I was looking at, at the time, I still think I was right, but it was something of a heated fight, so let’s ignore the feelings that have gone into that one and move on.
AI is relatively new. In a lot of ways, it is shockingly new. And in some other ways, it is absolutely and mundanely old. I’ll get there. But any time something this powerful and this novel begins its existence, people react in a wider range of ways than normal. Some people think that it is the bright, shining light of the future, and they dive in as hard and fast as they can figure out how to do it, determined to beat everyone else to the sunrise. Some people think it is the death-knell of human existence, the harbinger of genuine species extinction. I exaggerate, but by a small enough amount that it actually makes me uncomfortable.
And when you have people who are chasing after the future that is going to leave all pasts behind mixing with people who are screaming ‘it’s going to kill us all’, communities get divided and they get divisive. And that’s a shame, but it’s also the thing that keeps us alive as a species. Some guy goes chasing off after a bright new idea, and if he fails… he dies. If he succeeds? Well, everybody else watches real close and figures out that he was right, all along, and they (gradually, and often with strong resentment) shift over to pick up the threads that led to his eventual success. There’s nothing wrong with this process or the outcomes that it drives, other than that it makes people angry at each other in the meantime.
I think that we’ll figure it out, and we’ll manage. I could be wrong.
But in the midst of all of this, I’ve been struggling to work out what I actually, personally, think of the technology that is disrupting my industry, and which has the potential to be *all the more* disruptive over the next 5 to 10 years.
And I think I’m there.
(Which means I’m finally ending the introductory section. Hang tight, y’all. This is going to be one of the longest pieces I’ve posted, I suspect. Nature of the beast.)
Let me start with this: I genuinely, passionately, and irrevocably believe that we have misnamed AI as a species. Okay? I’m going to use the generally-accepted definition from here forward, but I want to at least put my mark in the sand and say that what we are playing with is NOT AI. It is a set of sophisticated predictive algorithms based on larger and larger databases of information, with more and more sophisticated analysis of that information. This is not “intelligence”, artificial or otherwise. It is a tool built by a human mind to do computations that that human mind has programmed it to do.

In my work (to date) I have always had a requirement that AI must be able to modify not just the databases of information that it has access to, but *its own software*. It must be able to *learn* how to *think*, not just think about more stuff faster. There are a lot of existential questions in this definition about what makes a human different from a thinking machine, and there are a lot of gotcha questions (what is a human if not simply a synthesis of all of the information put into it???) that I don’t want to get derailed by, as that is not the central thesis of what I want to talk about here. But I think that asking ‘is it self-aware’ (which is coming up *a lot*) is a clear indication, in my mind, that people are missing the distinction between a sophisticated predictive algorithm and software that can self-modify. Self-modification is part of agency. It implies *choice*. Sophisticated data analysis just allows you to sound exactly like a human being in the generic (or the specific – write an essay in the style of…). I find this of *enormous* importance when we discuss the nature of AI, but it is entirely irrelevant to the rest of this thought-stream. Fin.
AI started to be a part of my awareness with image-generation. As I mentioned, human faces. And then art. And then came the apocalypse of cover designers who think that their industry is going to be eradicated by AI cover generation, which is a *fear*, but not an argument. So I started making the argument that if you are using a database of images as source input, and those images have legitimate IP holders (copyright), then all images that come *out* of that database are copyright infringements. I continue to hold this view. There are image databases that have made a genuine best-effort to only use public-domain art, and if that were knowable and true, I would have no reservation using this type of tool for for-profit image generation, so long as the tool does not attempt to claim that copyright.

The fact that a technology decimates a productive industry does *not* make it evil. Or even bad. In an economic mindset, all of those people who were occupied within that industry are now going to go find something more productive to do with their time, as the work that they *were* doing has been lifted off of them. In reality, there’s a very real, very disruptive, and very uncomfortable process to finding a new productive use of your mind and body, and for some people, their personal lives may never be as good as they were before the disruption. But in the aggregate, the *world* is better and more prosperous because the disruption happened.
And, yes, this may happen to authors, too. I see it, and I’ll get there.
The more moderate viewpoint is that what AI art is going to do is it is going to free up these productive minds to generate *enormous* quantities of high-quality art much faster, driving down their prices at the same time that it drives up their profits. And some artists may not make that corner. But the ones who do have the potential to be even more prosperous than they were before.
This is what I believe, within the specific domain of cover art.
Incredible covers are coming, as we master the technology and the methods of implementing it, and I am ridiculously excited about it.
And then AI came for the writers.
ChatGPT is not the first language tool trying to parse language and react to it. Spellcheck is very much the same technology, in the sense that charcoal was the predecessor to pens, which were the predecessor to word-processors. So anyone who looks you in the face and says they use NO AI AT ALL is giving you a piece of information that is relatively worthless, because they haven’t defined what AI is, and it is a broad, fluid category of tools that are available to varying subsets of the population. What they *mean* is that they chose every word in the piece that they have written, and that they did not use a tool that was intended to change the *quality* of that language so much as put it back to where it was intended to be. That is, they used tools that fix their mistakes but do not improve their choices.
It’s a fine line. I might go so far as to say I *like* that line. It’s clean. You know that the words you are writing are from the mind of an individual human being, not the slurry of human awareness that is represented by a more sophisticated and involved AI tool.
But it’s also a limiting line.
Because tools are *tools*, and they can improve what we do, both in quantity and quality, and saying that I drew this picture *fully by hand* rather than using a ruler for the straight lines is a rather absurd brag. There *is* extra value in having the original copy of a piece of art, but using a machine to create many additional copies of that piece of art means that I can have a masterpiece on the wall in my office. Yay for everybody. Artist makes more, because more of his art can go into the world, and the world gets more art.
And art has value.
So using AI beyond the effort to make things as you intended means potential increases in both quantity and quality (see cover art), which puts more art into the world, and better art, too.
This is not a BAD thing.
It may even be an all-the-way GOOD thing.
Depending on how it is used.
And here is where I have been struggling, and why I am writing a warm-take. This isn’t a hot-take. I’ve been chewing on this – a LOT – for weeks. It’s not a spicy take. I don’t have a lot of emotions wrapped up in this analysis. But it also isn’t a cold take. I may very well decide that I missed some key details and swerve away from this stand *entirely*, and it will not bother me at all. I have no vested interest in standing by this analysis and saying it is right, and every other analysis must agree with it or be flawed.
How do authors use a tool like ChatGPT or Sudowrite or any of the other blossoming writing tools out there and keep the faith of their audience? How do they keep faith with themselves? What is the *point* of art?
These are HARD questions.
And I’m hanging out with people who are out, exploring the edges of them. A number of them *may* be out beyond those edges. I’m not ready to cast judgments, but I do have concerns.
And… herewego.
My hot-take reaction was that I felt like my readers would be disappointed if they found out that a machine was creating substantial contributions (plot decisions, character voices, narrative voices) within my books. Even more disappointed to discover that a machine was creating *most* or *all* of those things. And I don’t want to disappoint my readers.
(Another aside – IKNOW – I do not use any tool beyond Microsoft spellcheck at this time. I am looking at ChatGPT and other tools to write back-cover copy (blurbs) and advertising copy, because I’m terrible at those, and because the purpose of those is to summarize and entice readers toward the creative content, not be creative content unto themselves. I am not opposed to using ChatGPT for copy editing, to highlight what it believes is *weak* or redundant language for me to see whether or not I agree. This is in the realm of ‘editor’, and having an editor who represents the generic language and usage of human beings is not inappropriate. With the technology as it exists today, using it for much more than that – the actual creative side – would slow me down, because I am a very fast and very decisive writer, and I don’t use outlines. In the future, this balance may change, and I will be evaluating tools in the context I’m going to discuss below – what is the right use of tools in art. As it sits today, it’s a very simple, tactical decision. Writing tools are a back-end process, not a front-end one.)
But these things exist on a continuum, and ‘letting a machine write the whole book’ is a universe away from ‘did I find every instance where I typed though instead of through’. And somewhere between here and there, there’s a limit on what we *should* be doing. Or there isn’t.
The first and most important question is: what is art for?
And we’ve been fighting over this for probably as long as we’ve been making it. The literary world looks at the genre world (me!) and says that what I’m doing is *not* art, because it’s only there to entertain. And I shrug that off with massive indifference, because I think that genre art often has a lot more freedom to look at what it means to be human because we aren’t constrained by the burdens and the details of the real world and its infinitely complex history. Look at how fraught it is to write a black character or a gay character or a disabled character in the serious-writing world, and the arguments over who is even *allowed* to write those characters, and compare that with a fantasy world where racism is about species and not races. No one can say what the lived experience of a leprechaun is, so we can actually talk about leprechaun-centaur relations in the general, without getting caught up on the details.
And art doesn’t have to be *topical* to matter. Art is just there to talk about what it is to be human. I have an instinct that ‘beauty’ and ‘human’ are basically different ways of trying to define the same thing.
There is a converse argument to this, though, and it says that art is there to *entertain*, and art that is not entertaining really doesn’t have much purpose to it, because it’s not reaching anyone. (Entertain does not mean ‘make happy’. You can be entertained by your grimdark or your romantic tragedy or your intense condemnation of the human existence, and it doesn’t mean you close the book and sigh contentedly. But you read it because it stimulated your mind in a way that you wanted or needed, and that is entertainment.) Art that is entertaining sells. Art that does not entertain does not sell. And to run that backwards, if you are selling, you are entertaining.
Which makes sales into virtue.
And from a cold, heartless economic perspective, I believe in this, to a 99% level. Not all entertainment is good for you, therefore not all voluntary economic transactions are of net benefit to human prosperity. Some things that you can buy *will* hurt you, and the fact that you volunteered doesn’t mean that it was virtuous. (Though the fact that it was not virtuous also does *not* mean that someone should have intervened to stop you. The world does not have to be divided into “virtuous” and “illegal”; it is okay for there to be space in the middle, because we *still* need those guys who are going to run off and try something stupid, and maybe die, because *they could still be right*.)
So when writers in my community say that they intend to generate 2000, 5000, 10000 titles a year using ChatGPT to spawn the content, the community *IMPLODES* violently, saying that they’re going to swamp *real* writers and destroy the industry and destroy lives and impoverish people and…
Nobody who is profoundly angry about this says: you’re not going to make any money at that. (Plenty of people *do* say that. But they aren’t angry. They are neither dawn-seekers nor doom-sayers.) The people who think that this is going to end humanity take it as a given that this method is going to generate revenue. Lots of it. (There’s a third argument that putting 100M titles a year up onto Amazon that are all incoherent Chat-spawns is going to implode the industry to $0, but Amazon likes money too much for that to work. I have faith.) And that revenue is going to come at the expense of people who only write one book a year. It’s going to STEAL all their readers.
Cold, heartless economist: SO WHAT?
If people are “entertained” by what they’re reading (and it isn’t hurting them), why would a good-faith human-prosperity-seeking analysis argue that it’s even a bad thing?
And that’s where I’ve been stumped.
HOW IS THIS A BAD THING?
Isn’t it good that people can find the pinnacle of their consumption value at the lowest possible cost? What if every book was 25c? So they could afford to read as many of them as their hearts desire, and they would all be exactly what they want? Isn’t that *perfection*?
And it is.
Until you consider the intersection of art and machine.
Art is culture. It is what it means to be human. And it is strikingly individual.
It takes a billion minds to make up a human culture, and a lot of them are off thinking things that other ones would never even consider. That some of them would even be repulsed by. But those thoughts, those free-jumping, off in the weeds, dawn-seeking thoughts are what drive forward what it means to be human, particularly in an age of humanity where our context is changing so radically, generation by generation.
Cellphones mean that we can communicate with anyone, anywhere, any time. Should we? Who? When? Why?
Computers do the work that it used to take entire companies to do, and they do it quietly and in the background.
What should humans put their effort into, when they aren’t required to labor for survival?
What purpose does growing flowers have?
Why do humans play sports?
The problem with using a machine to express culture is that that machine has a normal distribution of outcomes that ought to be relatively predictable. It’s based on the inputs it was trained on. And there is, AT THE VERY BEST, a benign bias to that. When we only consume machine-generated art, the culture that we consume is limited, narrowed, and made predictable by the algorithm and the data that fed it. As soon as people who CARE figure that out, though, there WILL (WILL, WILL, WILL) be a malignant bias introduced. Because men WILL NEVER REST from trying to perfect the men around them. It is often introduced as being ‘in your best interest’.
And it may be a good faith, best attempt at perfecting human beings through their culture.
But it’s the same argument that anything that isn’t virtuous must be illegal.
It ropes in and contains all of the dawn-seekers before they ever set off.
And there are amazing horizons out there that you and I will die before anyone ever discovers.
In MOST ways, we are living out on bright horizons our ancestors never dreamed of.
Dawn-seekers die a lot more often than those of us living in the comfy wisdom of conventional decisions, but they also discover all the best ideas.
So.
Should people be restricted from, or shamed for, creating 10,000 ChatGPT titles and putting them on Amazon to sell?
No.
I can’t make that argument, and I won’t.
But is it good art?
Absolutely not. Because that machine, creating CULTURE, is only ever hitting the mean of the things it was trained on. And I don’t trust that that cultural mean is benign. Even more, I don’t trust that it is going to *stay* benign.
So. It is tempting to make a promise, as a writer, about the things I WILL NEVER DO. But I’m not going to.
This is a warm take. I’m still learning and I’m still watching. I think that tools that make things happen faster and better are almost exclusively good, in the context of broad human prosperity, and I think that using those tools in a way that does not over-shape art is almost uniformly good. But I’m concerned that there are going to be a lot of times and places where those tools replace the human soul, and that is a net loss. And somewhere in the middle, there’s going to be a line that I solidify down onto, but I don’t know where it is yet.
If you’ve made it this far, my profound appreciation to you. These are exciting, troubling times, with all the upside potential and all the downside risk still in play. The solution is not to shut down everything and stay as we are, but to allow experiments and discussion that try to capture as much of that potential and ward off as much of that risk as we can. Thanks for going with me.
-Chloe