I do not use AI for writing in any way, shape, or form. I use Grammarly for grammatical assistance while writing, but I have not used (and will not use) either its “rewrite AI” (called Grammarly Go) or ChatGPT because these systems are mindless.
If someone uses AI when writing, I think they’ve misunderstood the point of writing itself. Writing is an art. The construction of a sentence, a paragraph, a chapter, a character, a plot arc, etc., is about making seasoned judgment calls and thinking about pacing and progression toward a goal. This is waaaaaay beyond the capability of anything AI spews out.
The whole notion of Artificial Intelligence is a misnomer. There’s no intelligence there at all. In fact, I’d argue there’s a complete lack of intelligence in AI, and in anyone relying on it. The Cambridge Dictionary defines intelligence in this way…
Intelligence: the ability to learn, understand, and make judgments or have opinions that are based on reason:
Systems like ChatGPT have the ability to (machine) learn by consuming vast amounts of information, but they do not understand the information they have collected. They have no ability to make judgment calls and cannot form opinions based on reason; therefore, they are not intelligent. This is perhaps best illustrated by the concept of AI hallucinations. These are a case of “garbage in, garbage out.” When AI is on solid ground, meaning a subject that is well understood and well documented, it can produce meaningful content, like answering questions about physics. But as soon as AI gets into obscure areas where genuine insight and rational thought are needed, it will fabricate plausible-sounding answers that can be completely bogus and misleading. And the reason is simple. There’s no understanding. There’s no reasoning. And this, in my opinion, makes AI dangerous, as it can (and will) become a source and promulgator of misinformation. It will be misused by bad-faith actors to deceive people.
I have not and will not consent to any of my works of fiction being used to train an artificial intelligence, as even the notion of training is misleading. AI is not being trained. It’s simply undergoing complex pattern matching with a view to mimicking genuine content without ever actually producing genuine, original content.
AI is the modern-day equivalent of the Chinese Room thought experiment. Over three hundred years ago, back in 1714, German mathematician Gottfried Leibniz first challenged the notion that the human mind was just a complex machine that could be understood and replicated using mechanical processes. In 1980, John Searle expanded on Leibniz with what became known as the problem of the Chinese Room: if you passed Chinese questions into a closed room, could a person inside, following rules but without any knowledge of Chinese, pass back meaningful sentences in Chinese without ever actually understanding Chinese at all? AI has proven that’s possible, but it’s not the success most people think it is. The point of the Chinese Room is that this is done with NO understanding. And where there’s no understanding, there’s no value.
“Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious.” — John Searle
Wikipedia entry for the Chinese Room
The ability to mimic intelligence is meaningless without intent. In nature, mimicry serves a purpose. The gopher snake mimics a rattlesnake to avoid predators. The anglerfish mimics a clump of seaweed, luring prey close enough to strike without warning. Artificial Intelligence, though, serves no purpose beyond plagiarism. Why are companies investing so heavily in AI? To make money. But how? By mimicking actual creativity without adding any real value. To my mind, AI is parasitic.
I think there is a place for large language model AI in supporting scientific research. AI’s ability to connect the dots and find obscure, hidden relationships is invaluable when dealing with hundreds of thousands of peer-reviewed research papers where human reason and actual understanding have already been applied to complex topics. But note the difference in motivation. In this case, AI is being used to advance our scientific understanding. The problem is that’s not something that can be easily monetized, and money is the ONLY thing AI companies are interested in. They’re not actually interested in artificial intelligence; they’re interested in shortcuts to corner a market. To me, that’s shortsighted.
It’s said that “fire is a wonderful servant but a lousy master,” and I think the same is true of artificial intelligence. In the right context, AI can be a wonderful servant to humanity, but to put it in the driving seat is a mistake; to surrender our intellect to a machine-learning algorithm that neither understands nor cares about humanity is madness.