Bill Conrad's Blog - Posts Tagged "ai"

Banning AI

When a society does not like something, it takes steps to remove or curtail it. This effort might include a public information campaign, laws, and open discussions. This backlash is now occurring with Artificial Intelligence: people want it regulated, banned, or limited to specific applications. I think this will be a hot topic in the coming elections, and lawmakers are working into the late hours to develop new laws.
What is the core problem? There are three main issues. The first is the fear that AI will replace learning. This issue is nothing new. I remember teachers being upset that I used a computer to write reports when they wanted a hand-written document. Before this, they were upset that I used a calculator for arithmetic. My father probably got scolded when he used a slide rule, and my grandfather for getting facts from books. Teachers would say, “You are not learning the hard way.”
The second problem is that computers and robots have replaced many jobs, such as auto assemblers and cashiers. Thus, it is natural to fear AI replacing jobs such as editors, writers, or teachers.
I certainly do not want to compete with AI, but that argument has flaws. People built (as part of their job) robots and computers. Once installed, technicians had to service them. Later, technicians had to upgrade or replace them with newer models. So, computers moved jobs from one group to another, but there is no denying that jobs got replaced.
The third problem is the fear that AI will replace our souls. ChatGPT (an AI language program) can write just like a human, but there is a long way to go because AI can only do what we ask. “Write a story about a race car.” Only a human would ask the AI program to write about that topic. What about a random topic generator? Well, what use is that to you? Are you interested in the topic of industrial carrot processing?
We must also consider the output quality of a program like ChatGPT. Let’s ask a seasoned race car driver like Jackie Stewart a racing question. His answer would be based on learning how to drive, winning races, and making mistakes.
“Hey, ChatGPT. How do I drive my race car faster in corners?” AI can process every racing book, graph, chart, and piece of race data. But an expert driver has raced, can see the entire picture, and can look at the person as they answer. This thought process includes seeing the car, the track, the conditions, the driver’s body language, and the other drivers.
The answer might be a simple “use less brake before entering the turn” or a complex aerodynamics discussion. The human answer will be much more effective, appropriate, and valuable. However, the ChatGPT answer might have better writing technique.
AI has upped the game, but we must remember what it was like when computers like the Macintosh introduced the graphical work environment. This invention was more intuitive, easier to use, and more powerful. “Soon, the Macintosh will take over our minds.”
People mostly understand my three talking points but still wish to turn back the clock with legislation. I would argue that the bomb has already exploded, and we must deal with the aftermath. Trying to apply laws to AI is like trying to un-explode a bomb.
The real problem is how best to use AI in our everyday life. This situation is like when my father purchased a personal computer for the family. My father and I used it, while my sister and mother did not.
Teachers, bosses, and workers have realized the dangers and advantages of this new technology. Some embrace it because it makes life easy, and others dislike what it has done.
I would also like to remind you that we have tried to legislate away computers in the past and failed spectacularly. Look no further than Operation Sundevil in 1990:
https://en.wikipedia.org/wiki/Operati...
I recall one legislator saying at the time (I could not find a reference for the details), “A kid with a modem is more dangerous than a kid with a gun.” People were terrified of the coming computer revolution and the incredible damage that was about to occur.
In conclusion, I think it is a better use of our time to embrace this new technology, find jobs for those laid off, and figure out how to use AI in our daily lives. But perhaps we could use AI to solve the very problem I have brought up. “Hey, ChatGPT. Please write a law banning the banning of AI.”

You’re the best -Bill
July 12, 2023
Published on July 12, 2023 10:05 Tags: ai, fear, laws, society

Company Fires 60-Strong Writing Team

I came across this news article:
https://www.techspot.com/news/103535-...
The article stated that a company developed a seasoned writing team to promote its product by writing blog posts. All was good until they laid off the writers in favor of an AI generator (like ChatGPT). The only people left were editors who had to tidy up the blog entries so that they looked like a human had written them.
Should I be angry? Hey! They fired a bunch of fellow writers! Not cool! The truth is that this story did not surprise me at all, and I felt no emotion. Why?
A friend (who is not a computer expert) developed ChatGPT scripts to do his entire job. He takes the company’s latest reports and writes blog posts, emails, and tweets. Plus, he spots issues and recommends improvements. ChatGPT allows him to do a day’s work in minutes. And the result? His bosses are very pleased, but he would get fired in a heartbeat if they knew what was going on.
ChatGPT is the perfect program to do what those 60 people were doing. Take boring company junk and turn it into enticing blog posts. The people reading the blog will see the latest company news in an easy-to-read format.
What about those who got fired? I am sure they were creative and talented people who were proud of their words. They had families dependent on their income, and I know these writers were angry for being laid off. Yet, this is not the first time a new technology has led to job loss.
I recall a story from a former coworker of mine who passed away in the mid-90s. She was an incredibly talented database programmer hired to upgrade a large retail store chain’s inventory/ordering/payroll/accounting system. This contract job replaced giant mainframes with smaller but more powerful modern software and hardware. She developed a relational database and a Windows program that allowed quick interaction. Her system replaced a vastly outdated text file database and thousands of dumb terminals.
This effort took six months, and the results were fantastic for the employees, customers, and company profit. Yet, before the upgrade, the company had a four-story building with 120 employees, several mainframes, and one entire floor dedicated to nine-track tapes. (Remember those “high tech” computer scenes in old movies where the two tapes spun back and forth? They are nine-tracks.) Imagine the size of their electricity bill.
The entire building was replaced with a single programmer (to maintain the system and add features) and a single modern server. I am sure those 120 people were spitting nails upset at losing their jobs. This speech from the excellent movie “Other People’s Money” sums up their situation:
https://www.americanrhetoric.com/Movi...
Could these employees see layoffs coming? The 2015 documentary “All Things Must Pass” described the downfall of Tower Records. Nobody at Tower saw a future where people could download music, yet the millions of people downloading music certainly saw the future.
If the Tower Records employees or management had applied any effort, they could have predicted their job loss. “Hey, look at this. People can download music. Time to update my resume.”
Well, what about me? Programs like ChatGPT are getting more powerful every day. You know my book, Interviewing Immortality? (Please download a copy!) I bet if you gave ChatGPT the summary, it could write a story just as well. Grrr. I must admit, this is a true statement.
Want proof? I have read several books that were clearly written with ChatGPT, and here are two examples:
ChatGPT for Writers by Saif Hussaini
AI Mastery Trilogy by Andrew Hinton
What ticked me off was that in the book summary/blurb, the authors made no mention that ChatGPT wrote their creations. I have seen enough ChatGPT-generated content to recognize its writing style, and spoiler alert: you will soon have the same magical ability. Want proof?
Way back when books were not printed, scribes copied them. Then, the printing press was invented. The result was unflattering because printed letters were square and not created by humans. Boo! Try harder. Then, the typewriter was invented. If you received a letter, it was clear that it was not printed; a typewriter made it. Boo! I want the neatness of a printing press.
Then, the computer was invented, and people wrote letters using a word processor and printer. Boo! Look right here. The font changed. I want to read a letter created on a typewriter.
ChatGPT has already invaded our lives. Are you talking/emailing/chatting with a real person? ChatGPT or some other AI is taking your fast-food orders, calling you on the phone, answering your technical questions, providing limitless entertainment, or conning you out of your hard-earned money.
Yet, I remind you that this is just the beginning. Remember the introduction of the IBM PC in the early 80s? Yes, there were many issues, but with some developments, we now have today’s astounding smartphones, gaming PCs, internet service providers, and thousands of AI computers chugging away.
We figured out the IBM PC, and we will figure out ChatGPT. This means that, like in the 1920s, when people figured out they were reading a typed letter, people will learn to recognize when they are not interacting with a human.
Will those 60 people be hired back? Probably not, but the pendulum will swing the other way. The company that fired those 60 people will soon have upset customers. “This blog is pure AI. I’m not shopping here.”
What does it all mean? If you see a future where AI will take your job, it might be time to update your resume. Also, it is now essential to recognize AI-generated content. Fortunately, I will be here to provide you with AI-free content.
PS, I got a spam message today for a service that uses AI to generate blogs. “100% original content.” It made me laugh.

You’re the best -Bill
July 17, 2024
Published on July 17, 2024 10:17 Tags: ai, chatgpt, the-future, writing

To Whom Does the World Belong?

I randomly found the article “To Whom Does the World Belong?” It begins with a peculiar copyright lawsuit. In the 1920s, a person (allegedly) telepathically communicated with the dead. This sparked interest, and a book of the conversations was published: The Urantia Book. This popular topic led to more books in the series with additional telepathic conversations.
https://www.bostonreview.net/articles...
In the 1980s, a woman scanned one of the books and distributed it for free, causing legal trouble. The defense argued that since the dead were speaking, their words did not belong to the author. The plaintiff could not counter this argument without admitting the book was fiction. Brilliant! The jury agreed.
The main article focuses on the question: Who owns artificial intelligence-generated content? It covers two main issues. The first is that vast amounts of data, including copyrighted material, are required to train an AI model. The second is that the people who developed the software and paid for the computing power expect something for their efforts.
As you can tell by my chaotic writing style and deranged content, I do not use AI to write. Writing is supposed to be fun, but auto-generating a pile of hogwash does not fit that bill. Not everybody agrees with my altruistic attitude, and AI applications like ChatGPT are now firmly in the driver’s seat of many publications, websites, and business documents.
This invention opens new territory in legal, ethical, and storytelling areas, leading to a massive question of ownership. Even though I am not a popular author, I am sure my limited words have been used to train at least one AI model. Unfortunately, writers cannot prevent automated systems from scooping up every internet word.
I would be pretty upset if an AI user asked, “Develop a first-person psychological thriller story with a few intense scenes about a less-than-perfect author who is captured, forced to undergo a bizarre medical procedure, and made to interview his 500-year-old female captor,” and then the original text of my book, Interviewing Immortality, was “generated.” Alright, truth. It might be cool if my book provided 100% of the inspiration.
Passing along my exact words and concepts as somebody else’s is unethical. Therefore, I feel that legislation should be enacted to prevent this. The politicians agree; some are working on new copyright laws addressing AI. The problem is that AI technology muddies the water.
For example, anybody can copy one of my books into ChatGPT and ask it to “freshen up the story,” “change the characters,” “update the text,” or “improve the writing.” Legally, it would be difficult for me to argue with the results because, while the story would be nearly identical, the words would be different. How many romance books are out there? Boy meets girl or girl meets boy. Story bedrock is close to the surface; my book is no exception.
Conversely, those AI programmers and the companies paying for server time deserve something. Millions use ChatGPT, and the generated words have value. Thus, the people who worked hard on their creation indeed have the right to own the content, just like my books belong to me. This is the present ChatGPT content agreement:

As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the output. We hereby assign to you all our right, title, and interest, if any, in and to output.

For now, they offer free services and allow users to own the content. Yay? The problem is that this could change in a heartbeat, so users must check every time they use the service.
There is an obvious solution. The first page of my book and others contains an explicit copyright notice. Interviewing Immortality belongs to me because I wrote it, and it does not contain any AI-generated words. Websites like ChatGPT also have clear legal notices concerning the content they generate. Of course, people ignore this legal mumbo-jumbo. “Click if you agree.”
Thus, if an author publishes AI-generated text, they must acknowledge the generated words. Yeah, no. I have read several new publications that were clearly created with AI, but there was no warning. How do I know? ChatGPT has a distinctive writing style.
And am I guilty of not giving credit where credit is due? I recently wrote “Are Today’s Writers Spoiled?” I included a big chunk of ChatGPT content in that article, but I prepared readers with the following statement: Alright, I’m getting lazy. “Hey ChatGPT. List the problems facing modern authors.”
Thus, I correctly informed readers that ChatGPT generated some content. I felt the result was ethical, and no readers complained. Yet an open question remains. Who should take credit? I would argue that I was the creator, and the present ChatGPT content statement confirms this. I anticipate this will no longer be the case.
There is no doubt that AI-generated content will be everywhere. It is so prevalent that I predict a document without AI content will soon be a rarity. Is a sea of AI-generated works a bleak future? As a struggling author, I wish somebody would put this genie back in the bottle. As a person, I must accept an AI-generated future.

You’re the best -Bill
January 15, 2025
Published on January 15, 2025 16:53 Tags: ai, copyright, ownership, writing

Duplicated Books Are Now for Sale

I recently read this article:
https://www.cbc.ca/news/canada/edmont...
In it, an author warns everybody about a disturbing new trend. Unscrupulous individuals are downloading copyrighted works, feeding them to an AI chatbot, and publishing the results as their own. This trend is only going to worsen, and readers, authors, copyright holders, and publishers are already suffering the consequences.
This is not the first time technology has had a significant impact on the job market or society. A good example is computerized cash registers. Before this invention, cashiers were required to memorize hundreds of prices and know how to operate a daunting mechanical device. Suddenly, anybody could do this job with only minor training, resulting in reduced wages and limited job security.
How does a person use AI to rewrite a book? After obtaining the raw text, the “author” sends it to an AI chatbot with a simple prompt like, “Rewrite the following book to be livelier,” “Enhance the following document,” or “Alter the following novel to obscure the original writing style.”
Suddenly, BAM! Anybody can be an “author” of an astounding work. Harvy Pots and the Magician’s Rock. Catch 23. The Good Gibspy. The Tiger, The Sorceress, and the Closet. Tim Climpsy’s Search for the Green November. Dang, writing those titles upset me.
So, now what? Obviously, publishers like Amazon are tackling this issue with gusto! Yeah, typing that last sentence made me laugh. Well, at least if an author finds their books have an AI-generated version, Amazon has an easy way for the genuine authors to eliminate them. Wow, I am on a comedy roll.
Companies like Amazon aim to generate revenue. They do not care if a book came from a chatbot, a ghostwriter, a computer hacker who broke into an author’s account, a cut-and-paste party, an illegally recorded interview, fake news, a random number generator, misinformation, disinformation, propaganda, state news, alternative facts, or the real author. There is no incentive for Amazon to investigate all incoming books. According to this article, Amazon gets 900-1200 new eBook titles daily. Where would the money come from to pay for a detailed screening?
https://www.quora.com/How-many-eBooks...
But authors could sue Amazon! I am on the biggest comedy roll of my life. Well, authors could band together in a class-action lawsuit. I was part of a $95 million lawsuit regarding my smartphone and have been promised $20. So… I could spend two years writing a book, spend $$ editing, and get $20? Yay???
Well, there are copyright laws. The problem is that there are no copyright police. It is up to the author to locate the work, file a complaint, and hope some prosecutor has the time to take the case. And if the offending “author” is in a different country or has provided a false address? Then nothing. Thus, the best-case scenario in a major AI copyright case is that the offending “author” receives a fine.
Alas, readers must accept this new reality, but authors may have an ace up their sleeve. Readers are becoming increasingly adept at identifying AI-generated content. The sentences have a distinct flow, and chat boxes habitually use certain words. Most importantly, readers tend to dislike AI-generated works. Why?
There is nothing technically wrong with AI-generated material. It is grammatically sound, usually correct, and focused. The problem is that readers are people, and people want to read what other people have written. They feel cheated when they read something “genuine” only to find AI duped them. Even if a talented author uses AI in brief areas, readers still dislike the result. To me, the experience is like drinking diet soda when I wanted the sugar. “I did not ask for diet!”
Now hold on. Who is buying this AI junk? Lots of readers. I have found many recent books to be either partially or wholly written by AI. Why? Readers enjoy well-written material, and AI certainly delivers in that department.
These generated books feature fantastic cover art created by AI, along with a well-written description also generated by AI. So, let’s say that an unscrupulous author took The Great Gatsby and passed it through a chatbot. What would I see if I were unaware of the original book? It would be a great story set in the Roaring 20s about a complicated man. And the writing quality? Well, not the best, but certainly good. What about the guts? That is where AI falters.
F. Scott Fitzgerald is a celebrated author, and, more than any other story I have read, the symbolism in The Great Gatsby takes center stage. In short, a lot was going on, and the diligent reader would have grasped the nuances of the characters, scenes, and motives on multiple levels. Would all of that be present in an AI version?
I suspect those complex elements would be washed away. So no, the result would not read nearly as powerfully. I think this would be like the comic book version of a classic story.
There is more bad news. AI chatbots are dramatically improving. Soon, the generated sentences will be challenging for readers to identify, and they might prefer these polished gems. In fact, this is now possible with a prompt like: “Improve the following book and make the result read like a human wrote it.” Will AI be able to improve The Great Gatsby? Will readers prefer AI-updated books over the originals? Will readers soon demand AI-generated content? Reluctantly, I feel the answer will be yes.
Do I fear that somebody will feed my book into a chatbot and then publish it under their name? Even though I have been writing about this topic for the last twenty minutes, in the back of my mind, I had not confronted this reality. So, I took a moment to ponder the concept. Yeah, that would suck. My answer is that I do fear it.
What would happen if I discovered AI-altered versions of my books? After an unsuccessful attempt to get all the works taken down, I would stop writing. Why? What would be the point? Heck, the “author” could generate ten sequels in ten minutes and then publish them in twenty. I cannot compete with AI and do not want to.
Will this AI perversion make authors obsolete? Unless there is a significant backlash by readers, I cannot conceive of a way to avoid the AI tidal wave of books. This is a daunting prospect for me both as a reader and as an author.

You’re the best -Bill
June 25, 2025
Published on June 25, 2025 09:00 Tags: ai, publishing, writing

AI-Generated HOA Stories

Three weeks ago, I had a homeowners association (HOA) issue. They did not like my gate color and sent me a nasty letter. Oh, the humanity! Of course, being a good neighbor, I have kept the gate and its paint in good condition. And it was the same color as when I moved in. How do I know? The previous owner left cans of paint with writing on the sides, such as “inside wall” and “outside gate,” which I have used to maintain the gate. Additionally, I have digital photos of the house for insurance purposes, which confirm that I have not changed the color. Side note: My neighbor has a gate five feet away in the same color, and the HOA did not send them a nasty letter. Typical…
The problem is that the HOA changed the official fence color. (Yes, somehow, they classified my gate as a fence…) So, I rode around the neighborhood and saw that half the gates or fences were the new color and the rest were the old color. (Why on earth would anybody care about gate colors???) And one was bright green…
Rather than make a fuss, I painted my gate the approved color. The entire episode irritated me, and in the process of searching for “HOA gate color rules,” I found many other people’s HOA frustrations.
There are not one, but three YouTube channels dedicated to HOA nightmares. Sign me up! I began watching the many outlandish HOA horror stories. Wow, their audacity! Quite entertaining.
The creators of these channels all followed the same pattern: an AI-animated scene showing arguing people and a narrator explaining what the HOA did and how the homeowners responded. Here is one such YouTube channel:
https://www.youtube.com/@HoaStories-k8
Last Thursday, I had a few minutes and clicked on a video. The HOA began charging a local rancher to drive through their neighborhood even though he had an easement granting access. The nerve! Well, the video named names and locations, so I searched the internet to learn more about the dreadful incident.
And what did I find? Umm, nothing. I then broadened my search to “HOA charges a rancher fees.” There were a few hits, but nothing matched. So, I watched the video again to gather more details, and that’s when I noticed it.
All the AI patterns: long-winded descriptions, precise focus, re-emphasizing the same topic, repetitive language, heightened drama, subtle mistakes, unusual English, and a lack of authenticity. It was all there.
When I looked at the comments, many people stated that it was AI-generated. The incident angered me more than my original HOA issue, and I blocked all the YouTube HOA story channels.
Why was I so upset? It was not real. I clicked on those links to learn more about what HOAs were doing and how homeowners addressed their HOA problems. Instead, I viewed AI-generated nonsense. What is so wrong with this type of entertainment? It is dishonest, like a machine is tricking me.
Now hold on. I have allowed myself to be tricked by a machine—for example, the Pirates of the Caribbean ride at Disneyland. I am not a pirate, and the ride was not in the Caribbean. Still, I enjoyed the experience because I knew it was fake in advance. Is this not the same thing?
The difference is that the HOA video did not have a disclaimer. And there was an insidious aspect of this video; it was pulling at my heartstrings. I felt like I was being duped by an online scam or a con artist. Not cool! But I have more bad news. This is just the beginning. Hyper-focused AI-generated stories are on the rise, and they are getting better. Is there any good news?
In past articles, I have claimed that readers/viewers are becoming increasingly adept at identifying AI-generated content. Like me, they are not happy about it. I now see more reviews like, “This looks like AI wrote it.” I assert that a major AI-generated backlash is forming in our society, and here is some proof:
https://www.msn.com/en-us/news/techno...
Yet, there is an added insidious element. When I was in college, I had a good friend who was also an electrical engineer. We attended classes and had a lot of fun off campus. So, it was only natural that we took Basic Electronics 102 together. I learned a great deal about the topic, and we gained a lot from each other’s prior electronic experience.
Fast forward to my second job, and I was working on a circuit. I couldn’t figure out what was going wrong, and my coworker was confused by my approach. So, we went to a whiteboard, and I explained how transistors worked. “No, you have that all wrong,” he said. I was adamant, and he pulled out a textbook. Dang, I was woefully incorrect.
I reflected on where I obtained my incorrect information and realized that my knowledge originated from the 102 class, where my friend explained how transistors worked. The thing is, he applied great effort to explain the topic; it made perfect sense. He answered all my questions and was confident in his knowledge. How did he get so confused? I had no idea.
Fast forward five years. My roommate got married, and I attended the wedding. During the reception, I asked why my friend was not at the wedding. Well… There had been a major falling out between him and our group. It turns out that he is a pathological liar. The light bulb went off. In college, he intentionally misled me. Evil!
The problem is that transistors are a fundamental part of electronics, and my foundation had a huge void. It took a lot of effort to rewire my brain (an electrical engineering pun) to think with correct knowledge.
Now, I have the same bad HOA information locked in my bonkers mind. In other words, my mental foundation has flaws, and I could mistakenly use this information. Society refers to this as “fake news.” Why is this a problem? I do not want to mislead anybody or think incorrectly. Yet… Here I am, loaded with AI junk.
What could happen? Let’s say I am in an HOA discussion and I mention the time “they blocked that rancher from using the road.” This sounds like a good example of an out-of-control HOA. The problem is that it is pure fiction, which means that the premise for my discussion is incorrect.
What if I wrote an article all about HOA problems and cited the incidents in the video? That would further spread fake information, causing all kinds of issues with the people who used my “information.”
Would my readers be upset? You bet! And would they direct their anger toward YouTube? No, they would be upset with me, even though the misrepresentation would not be my fault. There would be no choice for me but to accept the blame and deal with the consequences and guilt of misleading my readers. Not cool!
And I am not alone. AI-generated junk is everywhere,* and here is an article discussing it:
https://www.thetimes.com/uk/technolog...
* “AI-generated junk is everywhere” sounds like a line from a song. Funny how life imitates art.
This HOA story incident was a wake-up call for me. From this point forward, I must apply great effort to identify this type of “entertainment” and avoid it at all costs.

You’re the best -Bill
July 16, 2025
Published on July 16, 2025 11:03 Tags: ai, hoa, life-experiences

AI Scraping

I recently came across this article, but I must warn you that it is a cryptic read.
https://www.axios.com/2025/06/19/ai-s...
This article examines the issue of AI scraping, which occurs when an automated system downloads all relevant information from a website. AI developers use this data to train their models and generate content. There are multiple issues with this practice, including copyright infringement and server slowdowns. But first, let’s rewind the clock.
Way back in 2022, machine learning, also known as AI, was not yet a familiar concept to everyday users or websites. What existed were search engines. Companies like Google sent automated crawlers all over the internet to locate data that their users might be interested in. Thus, if you searched for “spinach recipes,” Google would sort through its database of collected information to produce a list of sites that had spinach recipes.
Other entities also automatically gather data, including database companies, governments, criminals, hackers, and bulk data collectors.
This torrent of automated requests created problems for website owners who did not want their data copied or who had slow servers. So, they placed a small, invisible file (called robots.txt) on their websites that tells search engines: “Please do not automatically take my data.” Legitimate companies, such as Google, respected this, but unscrupulous entities did not.
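For the curious, here is roughly what that file looks like. This is only an illustrative sketch; the paths and the bot name ExampleAIBot are made up, and real sites list the actual crawlers they want to turn away:

```
# robots.txt - lives at the root of the website
# Ask every well-behaved crawler to skip the recipes section
User-agent: *
Disallow: /recipes/

# Ask one specific (hypothetical) AI scraper to stay out entirely
User-agent: ExampleAIBot
Disallow: /
```

Note that this is an honor system. The file is a polite request, not a lock, which is exactly why the unscrupulous entities mentioned above can simply ignore it.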
The next line of defense is the CAPTCHA. I am sure you have visited a website where a box appears asking if you are human. Sometimes these are a puzzle, such as reading text with lines through it or identifying which pictures feature motorcycles. Why always motorcycles?
Today, most automated data capturing is done by AI companies. Some go to great lengths to sidestep every possible attempt to prevent their systems from scooping up every scrap of data. There are even dedicated companies that collect data for sale to smaller AI companies.
The above article discusses CloudFlare’s efforts to prevent automated systems from stealing its content. Why is this important? Let’s say I am a big spinach fan. Love the stuff! So, I spent hours creating recipes, collecting them, comparing the results, and taking pictures of my delicious creations. Then, I post my hard-earned info to a popular recipe site.
After an AI scrape, all that knowledge is suddenly merged into an AI model, enabling it to become an expert in spinach cooking. That means there is no need for a human to look at popular cooking websites. This is something every spinach-cooking expert wishes to avoid.
What does this have to do with me? Well, I am a (very minor) content creator. Yes, the humble words coming out of my bonkers mind have enriched this world a minuscule amount. Yay? And I would prefer that AI not take credit.
Do I spend my evenings worrying about this? After all, the things I care about are selling and protecting my books. So no, I am confident that no AI company would spend $2.99 to download one of my books because it contains little value. Yet, I do have concerns about my articles because they contain content I cherish. Let me explain.
I recently wrote an article discussing micro paragraphs, a new trend in sentence/paragraph writing style. A few people read that article, and as a community, we would call it a pea-sized bump in the infinite knowledge highway.
What did my article contain? From a high level, I clearly explained an observation, cited examples, and made a solid conclusion. During my research to create the article, I discovered that I had gained new writing technique insights, which translates to new knowledge for our planet. This type of content is what AI companies desperately desire, and this humble article is far more valuable than all four of my published books.
Why? My article was well-stated, on point, and incredibly relevant to AI training, making it valuable to both readers and those using AI. I guess that makes sense, but what about my other posts that were far less relevant?
Let’s examine my first article, “Why I Write.” At its core, it is an opinion piece. Spoiler alert! Many people write for various reasons, and mine are no exception. So, an AI data scrape would find zero value in my words. Right? No, even that article has great AI value.
Let’s say somebody asks ChatGPT to “list reasons why an author would write books.” Then, a processor in some dark room would draw on everything the model had ingested (including my article) and compute an answer. Although there are thousands of sources on “why authors write,” my article still holds great value. This is because it was singularly on point, not too long, and readily available. As compared to, let’s say, an entire book by an author who spent 20 chapters explaining in detail why they chose to write.
Of course, I am powerless to prevent the thousands of robust systems from collecting every article I have written. My problem is that I wish this were not the case. Why? I put a lot of effort into these articles with a not-so-hidden attempt to promote my books. (And this is budget therapy, but that is another topic.)
I want to yell to the AI companies: “Do your own work! Stop stealing mine!” Yet, you might point out, I have not copyrighted the very words you are reading. Meaning that anyone is free to read them, print them, email them, or consider them their own. If a human did any of that, I probably would not care. But when AI uses my thoughts? I do care.
The linked article above is a call to arms to prevent AI from claiming the best of humanity as its own. But if you have read this far, you might have learned something. AI is doing the same thing: learning. That’s fair. Right?
It is, but no. It feels like somebody is cheating. I cannot instantly become an expert on spinach, yet AI can. Oh well, it seems like I cannot do anything about it. Well, that is not true because I have an ace up my sleeve. You.
Why did you read this far? It was your tenacity. You were curious and, with some luck, I satisfied your interest. Meaning you may have learned something and perhaps had a touch of enjoyment. The ace up my sleeve is that to AI, this hand-crafted article was one of billions of files. Meaning that the few people who read this will have gained something special, and no AI model will ever appreciate what that was.

You’re the best -Bill
Published on July 23, 2025 19:00 Tags: ai, authors-rights

AI Company To Pay Authors $1.5 Billion

A story recently broke about an AI company losing a court case and being forced to pay authors over a billion dollars.
https://arstechnica.com/tech-policy/2...
They were accused of stealing copyrighted works in a practice called AI scraping. This occurs when an AI company employs illegal or unethical methods to gather information, such as downloading books from a pirate site, bypassing website “are you really a human” tests, or hacking servers.
These companies then used the illegally or unethically obtained information to develop and improve their products. The result is an AI chatbot built on a broad foundation of millions of documents.
Three authors banded together to sue an AI company. The authors were victorious in court and have established a website that allows authors to report their own copyright infringement. Unfortunately, these authors are not eligible to receive any of the settlement money—bummer.
https://www.anthropiccopyrightsettlem...
This was a victory for authors, copyright holders, websites, and consumers. But there is a big elephant in the room. Hundreds of companies (and soon millions of individuals) are doing the same thing, and most are operating outside the bounds of justice. Shell companies, foreign countries, hacking groups, and organized crime. This is, therefore, a token victory, as the legal system cannot prevent AI scraping—double bummer.
The most remarkable aspect of this lawsuit is that usable AI technology was previously only found in science fiction. A good example is Rosey the robotic maid from the Jetsons animated television show. Then, one day, “humans” began calling us to discuss reverse mortgages. AI companies achieved this astounding feat by gathering a vast amount of information.
I guess this is the digital age, where something bad can happen to millions of people and nobody notices; when someone does take action, the response is minimal.
What does this mean for me? As a minor author and article writer, it is a minor victory, and I will savor it. Unless I magically get a million dollars to spend on my own class action lawsuit, I cannot do anything about the companies that are scraping these very words—triple bummer.

You’re the best -Bill
Published on October 01, 2025 09:28 Tags: ai, ai-scraping, lawsuit, writing

More YouTube AI Junk

I enjoy learning about many subjects, including current events, technology, electronics, and history. However, these topics are complex, and because there is so much misinformation, I prefer to learn by reading primary sources.
I also like to know what other people think, especially on topics that are open to opinion. Politics is one such topic, and for this, I enjoy YouTube commentary channels. This is when an expert analyzes a topic and presents their opinion.
One channel I follow is Zeihan on Geopolitics. He offers a global perspective on complex political/economic topics. I do not always agree with his conclusions, but I appreciate his balanced approach, thorough research, and insightful analysis.
https://www.youtube.com/@ZeihanonGeop...
And so went my life. Events happened, and I watched YouTube to get different viewpoints. Along the way, I learned more about history, what was happening around me, and technology. Two weeks ago, something changed.
A major political event occurred, and several channels shared their opinions. So, I watched a few to see the different takes. YouTube recognized my interest and recommended other channels that share views on the topic. I had not subscribed to these other channels, but I do occasionally click on them for additional insight. What I noticed was a massive uptick in recommendations. The channels all had on-point content and mirrored what my subscribed channels were presenting.
Their formats were identical—a focused title, careful analysis, and stock photos (or news photos of the event). The only difference from my subscribed channels was the use of a computer-generated voice. Now, I know that some presenters may not speak English well or be shy, but they are still bright individuals. Thus, I do watch a few channels with computer-generated voices, so this was not unusual.
Yet, my spider sense was telling me there was a problem. And then it hit me—the words. I have become skilled at identifying AI-generated content, which often features long-winded descriptions, flawless grammar, and formal speech.
“I did not do that,” he was reported to have said by the Guardian newspaper, based in London, England.
What is wrong with AI-generated content? One could say that they did me a favor. AI summarized a story with excellent visual aids. Thanks for the great quality! Umm, no.
The problem is that I wanted an intelligent opinion or genuine insight. “I think A did B because of C. Yes, X is a problem, but look at Y and Z.” Meaning, I wanted a genuine human analysis, i.e., something new, as opposed to a summary of other opinions and existing information. And there was another problem.
AI is a mindless tool. It does not know what a misrepresentation, omission, lie, or bias is. Plus, it makes fundamental errors: “One orange plus one apple equals three grapes.” My mind has enough misinformation without AI-generated junk.
Once I realized these were AI-created channels, I blocked them. I also sent a request to YouTube to label their content as AI. (I did not put comments on the videos, because it is possible that I was incorrect, and the world has enough negative opinions based on bonkers people like me.)
The recommendations went from a trickle to a flood, which inspired a new rule. I blocked all new channels with a computer-generated voice. This resulted in fewer suggested AI channels because I had exhausted all the ones relevant to my interests. Nice.
A week passed, and a new channel popped up. It was all about World War II radio and radar technology. Seemed interesting, so I began watching. The voice had an English accent, but there was something off. Long-winded descriptions… Yes, this was AI-generated, but the content was excellent.
Even though I was upset by being duped, I watched more, paused the video, and checked the facts. They were close to historical records, but there were glaring flaws. Again, I blocked the channel and requested that YouTube declare it to be AI-generated with errors.
The next day, there were over ten World War II-themed recommendations, all of which looked similar. So, I started a new trend. If the channel did not have a visible person presenting, I blocked it.
Well, you know what happened next. A suggestion popped up with a narrator. The voice was clearly computer-generated, but the person looked real, which leads to a big problem. Soon, I will no longer be able to distinguish what is AI-generated.
When will this occur? Look no further than this excellent AI-generated Star Trek parody video:
https://www.youtube.com/watch?v=1eqYs...
Videos like this have forced me to raise my threshold. If a channel I am not subscribed to appears in my feed and it looks even slightly suspicious, I will block it. If that means occasionally blocking new creators with beneficial information, I am willing to accept the mistake.
Well, that is messed up, but it is not the first time that technology has wronged us. They invented filters for cigarettes, and non-biodegradable cigarette butts litter my beaches. Computers and piles of e-waste. Single-serving food and trash all over my neighborhood. YouTube and AI junk.
I guess that is modern life. Now all I need is a filter to block AI content automatically. Perhaps I can use AI for this.
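For what it is worth, a crude version of that filter is easy to sketch. This is a toy heuristic of my own invention (the phrase list is made up, and real AI-content detection is far harder than matching a few tells):

```python
# Toy filter: flag a video description that contains phrases common
# in AI-generated narration. A sketch, not a real detector.
AI_TELLS = [
    "delve into",
    "in today's video we explore",
    "tapestry of",
]

def looks_ai_generated(description: str) -> bool:
    """Return True if the description matches any known AI 'tell'."""
    text = description.lower()
    return any(phrase in text for phrase in AI_TELLS)

print(looks_ai_generated("Today we delve into WW2 radar"))      # True
print(looks_ai_generated("My messy garage radio restoration"))  # False
```

Of course, the moment such a filter works, the AI channels would simply learn to avoid the tells, which is exactly the arms race described above.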

You’re the best -Bill
Published on November 05, 2025 20:52 Tags: ai, life-experiences