How Creativity Survives in an AI Monoculture

Photo by Johnny Briggs on Unsplash

Today’s post is excerpted from Quiver, Don’t Quake: How Creativity Can Embrace AI by Nadim Sadek, founder and CEO of Shimmr AI, an AI-powered advertising platform.

I met Sadek in person at NYU’s Publishing Institute in January and found him a warm, self-aware, and very human founder of an AI startup. His latest book tries to balance an optimistic take on how creative people can use AI with a lucid assessment of its risks. For my readership, I’ve chosen to excerpt a portion about the risks.

In 2023, Manhattan attorney Steven A. Schwartz filed a federal court brief citing six judicial decisions, each seemingly perfect for his case. But every one of them was fictitious, hallucinated by ChatGPT, complete with plausible names, dates, and legal reasoning. When the deception was uncovered, the court sanctioned Schwartz and fined him $5,000. The incident became a landmark cautionary tale about the dangers of relying on generative AI without verification, particularly in fields where factual integrity is of paramount importance.

The first and most immediate set of dangers we all face regarding AI are practical ones. They’re the gremlins that live inside the current generation of AI models, the bugs in the system that can have serious real-world consequences.

The most famous of these is what’s termed hallucination. It’s what we identify when an AI confidently and articulately invents facts. It spits out nonsense. Because an LLM’s primary function is to generate statistically probable text, not to verify truth, it has no internal concept of what’s real and what isn’t. If it doesn’t have the correct information in its training data, it won’t say “I don’t know.” Instead, it’ll often generate a plausible-sounding answer that’s entirely fictitious. This has moved from a humorous quirk to a serious problem.

When I wrote an earlier book, I remember asking AI to do some research—to find other authors with a similar thesis to mine. Crestfallen, I looked at a long list of titles which seemed to occupy very much the same space I was in. Then I looked at their publication dates—all in the future! For any creator using AI for research—a journalist, a historian, a non-fiction author—the danger is clear. The AI is a brilliant research assistant, but a terrible fact-checker. Every piece of information it provides must be treated with suspicion and verified independently.

The second gremlin is bias amplification. At my company Shimmr AI, where we produce autonomous advertising using AI, I remember asking our Chief Product Officer, rather reproachfully, why our nascent video forms were always much more convincing when the protagonist was a woman. She chided me. “Where do you think video generators have learned to produce credible renditions of women moving?” Well, the answer is that they browse the internet and capture all the videos they can find. Pornography accounts for much of that. And it’s mainly women who populate pornography.

AI has learned to render women in videos much more convincingly than men largely because that’s what it’s found to train on. AI reflects the totality of its training data. The problem is that our digital world isn’t an unbiased utopia; it’s a reflection of our flawed, unequal societies. An AI trained on the internet will inevitably learn and reproduce the biases it finds there. If historical data shows that most CEOs are men, an image generator prompted with “a picture of a CEO” will overwhelmingly produce images of men. If online texts more frequently associate certain ethnicities with crime, the AI will learn that toxic correlation.

The danger isn’t just that the AI reflects our biases, but that it amplifies them, laundering them through the seemingly objective voice of a machine and presenting them as neutral fact. This can entrench stereotypes, poison public discourse, and cause real harm. It’s one reason why I advocate for everything we’ve ever created and produced—properly recognized and remunerated—to be included in AI training. We have an active role to play in producing ethical AI.

The next set of dangers are more subtle, but perhaps more corrosive in the long run. They concern what might happen to us, the human creators, as we become more and more reliant on our sophisticated new partner.

A friend recounted to me that she watched her son, 20 years old and bright as a button, working on a university assignment. He’d typed a prompt into ChatGPT: “Write a short essay about how HR can fail a corporation.” The AI delivered five perfectly structured paragraphs. He tweaked a sentence here, added a date there, and submitted it. Time elapsed: twelve minutes. Understanding gained: zero.

Many of us fear the dereliction of committed learning that is an obvious risk in the era of easy-AI. Every tool that makes a task easier carries with it the risk that we forget how to do the task ourselves. We use calculators and our ability to do mental arithmetic fades. We use GPS and our innate sense of direction withers. The fear is that a generation of creators who grow up with AI as a constant companion won’t develop the foundational skills of their craft. Will a writer who’s always used an AI to structure their arguments ever learn how to build a narrative from the ground up? Will a musician who’s always used an AI to generate chord progressions ever learn the fundamentals of music theory?

This isn’t a Luddite argument against using new tools but a caution about the potential for our intuitive capabilities to atrophy. Creativity is a dance between our intuitive, associative spark and our analytical, structuring work. If we outsource all of the structuring, the editing, the refining to the AI, what happens to our own analytical capabilities? More importantly, what happens to the crucial interplay between the two? The process of wrestling with structure, of hitting a dead end and having to rethink your argument, is often what forces the most interesting intuitive insights to the surface. By taking away the friction, we risk taking away the fire.

This leads to a related fear: the homogenization of culture. What happens when millions of creators, from students writing essays to marketers creating ad campaigns to artists generating images, all start using the same handful of AI models? There’s a real danger that the output begins to converge on a bland, generic, AI-inflected mean. We may see the emergence of a new monoculture, where art, writing, and music all share the same statistically probable, algorithmically smoothed-out feel. The unique, the quirky, the truly original voice—the very things we value most in art—could be drowned out in a sea of competent but soulless content.


Would a neural net have produced Being John Malkovich, a film about a portal into an actor’s consciousness hidden behind an office filing cabinet? Or Eraserhead, David Lynch’s surreal debut about parenthood, dread, and an oozing mutant baby? Almost certainly not. These works are weird, jagged, and defiantly human. They were born of obsessions, neuroses, and vision that no probability model would prioritize.

My belief is that the antidote to this AI slop (as some have been calling it) is that we’re endlessly eccentric, each human being communicating and manifesting in unique fashions. AIs respond to inputs—prompts—and so long as we each allow our intuitive side full rein, then the interactions produced in collaborating with AI will always result in idiosyncratic, unique outputs. We must continue to be us.

Note from Jane: If you enjoyed this article, check out Nadim Sadek’s new book Quiver, Don’t Quake: How Creativity Can Embrace AI.
