More YouTube AI Junk

I enjoy learning about many subjects, including current events, technology, electronics, and history. However, these topics are complex, and I prefer to learn by reading the direct source because there is so much misinformation.
I also like to know what other people think, especially on topics that are open to opinion. Politics is one such topic, and for this, I enjoy YouTube commentary channels, where an expert analyzes a topic and presents their opinion.
One channel I follow is Zeihan on Geopolitics. He offers a global perspective on complex political/economic topics. I do not always agree with his conclusions, but I appreciate his balanced approach, thorough research, and insightful analysis.
https://www.youtube.com/@ZeihanonGeop...
And so went my life. Events happened, and I watched YouTube to get different viewpoints. Along the way, I learned more about history, what was happening around me, and technology. Two weeks ago, something changed.
A major political event occurred, and several channels shared their opinions. So, I watched a few to see the different takes. YouTube recognized my interest and recommended other channels that share views on the topic. I had not subscribed to these other channels, but I do occasionally click on them for additional insight. What I noticed was a massive uptick in recommendations. The channels all had on-point content and mirrored what my subscribed channels were presenting.
Their formats were identical: a focused title, careful analysis, and stock photos (or news photos of the event). The only difference from my subscribed channels was the use of a computer-generated voice. Now, I know that some presenters may not speak English well, or may be shy, yet they are still bright individuals. I already watch a few channels with computer-generated voices, so that alone was not unusual.
Yet, my spider sense was telling me there was a problem. And then it hit me—the words. I have become skilled at identifying AI-generated content, which often features long-winded descriptions, flawless grammar, and formal speech.
For example: “He was reported by the Guardian newspaper, based in London, England, to have said, ‘I did not do that.’”
What is wrong with AI-generated content? One could say these channels did me a favor: AI summarized a story with excellent visual aids. Thanks for the great quality! Umm, no.
The problem is that I wanted an intelligent opinion or genuine insight. “I think A did B because of C. Yes, X is a problem, but look at Y and Z.” In other words, I wanted genuine human analysis, i.e., something new, as opposed to a summary of other opinions and existing information. And there was another problem.
AI is a mindless tool. It does not know what a misrepresentation, an omission, a lie, or a bias is. Plus, it has fundamental flaws: “One orange plus one apple equals three grapes.” My mind has enough misinformation without AI-generated junk.
Once I realized these were AI-created channels, I blocked them. I also sent a request to YouTube to label their content as AI. (I did not put comments on the videos, because it is possible that I was incorrect, and the world has enough negative opinions based on bonkers people like me.)
The recommendations went from a trickle to a flood, which inspired a new rule. I blocked all new channels with a computer-generated voice. This resulted in fewer suggested AI channels because I had exhausted all the ones relevant to my interests. Nice.
A week passed, and a new channel popped up. It was all about World War II radio and radar technology. Seemed interesting, so I began watching. The voice had an English accent, but there was something off. Long-winded descriptions… Yes, this was AI-generated, but the content was excellent.
Even though I was upset at being duped, I kept watching, paused the video, and checked the facts. They were close to the historical record, but there were glaring flaws. Again, I blocked the channel and asked YouTube to label it as AI-generated, with errors.
The next day, there were over ten World War II-themed recommendations, all of which looked similar. So, I adopted a new rule: if the channel did not have a visible person presenting, I blocked it.
Well, you know what happened next. A suggestion popped up with a narrator. The voice was clearly computer-generated, but the person looked real, which leads to a big problem. Soon, I will no longer be able to distinguish what is AI-generated.
When will this occur? Look no further than this excellent AI-generated Star Trek parody video:
https://www.youtube.com/watch?v=1eqYs...
Videos like this have forced me to raise my threshold. If a channel I am not subscribed to appears in my feed and looks even slightly suspicious, I block it. And if that means I occasionally make a mistake and block a new creator with useful information, I am willing to accept it.
Well, that is messed up, but it is not the first time that technology has wronged us. They invented filters for cigarettes, and non-biodegradable cigarette butts litter my beaches. Computers and piles of e-waste. Single-serving food and trash all over my neighborhood. YouTube and AI junk.
I guess that is modern life. Now all I need is a filter to block AI content automatically. Perhaps I can use AI for this.
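If I ever tried to build that filter, a first pass might be nothing more than a crude scorer over a video's transcript, looking for the tells I described above. Here is a minimal sketch in Python; the phrase list, thresholds, and weights are invented for illustration only, and this is certainly not how YouTube classifies anything.

import re

# Stock phrases that, to my ear, signal AI narration. Purely illustrative.
FORMAL_TELLS = [
    "it is important to note",
    "in conclusion",
    "was reported to have said",
    "furthermore",
    "delve into",
]

def ai_suspicion_score(transcript: str) -> float:
    """Return a rough 0-1 score; higher means the transcript sounds more AI-generated."""
    text = transcript.lower()
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0

    # Long-winded descriptions: average sentence length in words.
    avg_words = sum(len(s.split()) for s in sentences) / len(sentences)
    length_signal = min(avg_words / 40.0, 1.0)  # 40+ words per sentence maxes out

    # Formal speech: density of stock connector phrases.
    hits = sum(text.count(phrase) for phrase in FORMAL_TELLS)
    phrase_signal = min(hits / 5.0, 1.0)

    return 0.5 * length_signal + 0.5 * phrase_signal

if __name__ == "__main__":
    sample = ("It is important to note that the minister was reported to have said "
              "that he did not do that. Furthermore, the implications are profound.")
    print(f"Suspicion: {ai_suspicion_score(sample):.2f}")

Of course, the moment heuristics like these became common, the generators would be tuned to dodge them, which is exactly the arms race I am complaining about.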

You’re the best -Bill
November 05, 2025