Gina Harris's Blog, page 7

June 11, 2025

Rejecting AI as much as possible

Going back to that original hope -- rejecting Artificial Intelligence as much as possible -- what does that mean?

I am becoming more aware of the difficulty of opting out.

I had stopped using Google -- and told them about it -- because of their acquiescence on renaming the Gulf of Mexico. I'd also noticed that their search engine was no longer as helpful, but I was thinking of that as part of a general downhill trend. 

https://sporkful.blogspot.com/2025/02/corporate-communications.html 

I have been using Bing. 

Bing is not that different from Google; the different ways of grouping results and such are all kind of following the same pattern. That pattern includes getting AI results at the top.

I felt very good about scrolling right past those and going on to actual articles and entities and web pages. 

That is reasonable for seeking better sources to avoid the replication of errors. It is good for valuing the creators of content... valuing humans.

Those things are important to me, but so is not killing the planet. If Copilot (Microsoft's AI tool) is running whether I am using the results or not, there is still damage being done.

I thought of this because I saw some complaints from people about not being able to opt out. That was a reasonable concern, but I wasn't sure if it was true.

That is a more complicated question.

I was able to find directions for turning Copilot off in Bing and in Windows, except that the Bing instructions didn't work. For Windows, if you don't have the Professional edition it involves a registry edit, which I have not completely ruled out, but it's a little intimidating.
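For anyone curious, the directions I found for Windows Home editions involve adding a policy value to the registry. This is just a sketch of the commonly shared approach, not something I can promise works on every build, and you should export a backup of the key first:

```shell
:: Disable Windows Copilot for the current user (commonly cited policy value).
:: Run in an elevated Command Prompt; sign out or restart afterward.
reg add "HKCU\Software\Policies\Microsoft\Windows\WindowsCopilot" /v TurnOffWindowsCopilot /t REG_DWORD /d 1 /f
```

Professional editions can reportedly set the same policy through the Group Policy Editor instead, which is less intimidating than editing the registry by hand.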

I did submit feedback on it.

I am also going to check out some other search engines. It's very possible that they are all that way, but it's at least worth looking.

It's not perfect. 

Still, there is so much that can be done.

One thing that has become clearer to me is that I need to make the rejections more explicit.

It is not just that I am not going to use AI applications to see what I would look like as an elf or in four different decades, or that I won't click on those romance novels with AI covers (I wasn't going to anyway), but that I will delete them from my feed.

When someone puts up a picture of Muppets or Simpsons characters or anyone else in front of something culturally relevant, I am deleting that from my feed.

If they are using AI, I am going to keep voting "No" and hope that more posts by actual humans show up in my feed.

If I can't be as thorough and effective as I would like, I will make up for it by being exceedingly stubborn. 

 •  0 comments  •  flag
Share on Twitter
Published on June 11, 2025 16:23

June 10, 2025

Playing nice when you have to

Now that I have spent six posts on how terrible AI is, there is something else to note:

Among the people that I love, there is one going back to school to study AI so he does not become obsolete, and another being advised that if you don't learn AI you will be replaced by someone who does. One is in the tech sector, but the other is in banking.

Even in my course of study, when we were learning about technology there were lots of good things said about AI.

I will note here that I have seen discussions about the difference between machine learning and generative AI. My criticisms have been primarily focused on generative AI. 

When we are looking at environmental damage, I believe we need to consider both. However, I am not trying to get you fired.

We live in a vastly imperfect society that just keeps getting worse. There may be times when it's right to martyr yourself, but often the right thing is to survive; that might require some compromises.

How do we ethically work in this world?

First of all, knowledge makes sense. It makes sense to know how others in your specific job and in your field might use artificial intelligence and what the perceived benefits are. 

I hope, though, that one result of these posts can be a greater awareness of the downsides and potential flaws in using AI.

It replicates errors. It perpetuates bias. It is killing the planet.

Maybe you can be the one who asks -- in a company that has at least pretended a commitment to the environment -- how to compensate for the extra energy use.

Maybe you can be the one who encourages scrupulous proofing and checking of everything that gets generated via AI.

Maybe you can raise the questions.

And maybe that will get you in trouble. There will be people who are so high on the rush of technology and job elimination that some of this can be dangerous. You will have to use your best judgment.

One thing I believe, though, is that it is better to know and understand more. 

Take information and do good things with it, as best as you can.

Related posts:  

https://sporkful.blogspot.com/2025/05/the-scuffle.html 

https://sporkful.blogspot.com/2025/05/for-arts-sake.html 

https://sporkful.blogspot.com/2025/05/garbage-in.html 

https://sporkful.blogspot.com/2025/06/ai-lies.html

https://sporkful.blogspot.com/2025/06/ais-human-cost.html 

https://sporkful.blogspot.com/2025/06/reasonable-questions.html 

Published on June 10, 2025 17:12

June 6, 2025

In my garden: May's daily songs

I didn't have any clear ideas for songs for Asian-American Pacific Islander Heritage Month, though I did do daily articles for it.

I had been thinking about doing a month themed with songs about flowers and fruits and vegetables for a while, and that's what I decided on.

Does it reflect my gardening hopes well? Not particularly. There are things in there that I would not grow. A lot of them are wild or they tend to grow in different climates.

It was still fun looking, and I found new songs from familiar artists.

I will also add that you can easily do a month of just "rose" songs. My desire to not be too repetitive meant I only did three: "Monarchy of Roses", "Kiss From A Rose", and "Every Rose Has Its Thorn".

"Monarchy of Roses" was one of my favorite new ones, along with "(Nothing But) Flowers". 

There were still repeats. I have definitely used "Build Me Up Buttercup" and "Love Grows (Where My Rosemary Goes)" before. I think I have used "Green Onions" at least twice before, and I will surely use it again. I love those funky onions.

I hope to be planting soon. 

Think green thoughts!

Daily songs

5/1 “Waltz of the Flowers” by Tchaikovsky, performed by London Symphony Orchestra
5/2 “San Francisco (Be Sure to Wear Flowers in Your Hair)” by Scott McKenzie
5/3 “Where Have All the Flowers Gone” by The Kingston Trio
5/4 “Scarborough Fair” by Simon & Garfunkel
5/5 “Edelweiss” from The Sound of Music
5/6 “Forget Me Nots” by Patrice Rushen
5/7 “Wildflower” by Skylark
5/8 “Build Me Up Buttercup” by The Foundations 
5/9 “Love Grows (Where My Rosemary Goes)” by Edison Lighthouse
5/10 “Poison Ivy” by The Coasters
5/11 “Blueberry Hill” by Fats Domino
5/12 “Vegetables” by The Beach Boys
5/13 “Green Onions” by Booker T. and the MGs
5/14 “Listen to the Flower People” by Spinal Tap
5/15 “Tangerine” by Led Zeppelin
5/16 “Every Rose Has Its Thorn” by Poison
5/17 “Fading Like a Flower” by Roxette
5/18 “Kiss From A Rose” by Seal
5/19 “Peaches” by The Presidents of the United States of America
5/20 “Monarchy of Roses” by Red Hot Chili Peppers
5/21 “Lotus Flower” by Radiohead
5/22 “Pineapple Head” by Crowded House
5/23 “Oranges on Appletrees” by A-ha
5/24 “Sunflower” by Vampire Weekend
5/25 “Watermelon Man” by Herbie Hancock
5/26 “Bleeding the Orchid” by Smashing Pumpkins
5/27 “Amaryllis” by Shinedown
5/28 “Wildflowers” by Tom Petty and the Heartbreakers
5/29 “Tulips” by Bloc Party
5/30 “(Nothing But) Flowers” by Talking Heads
5/31 “The Garden Song” by John Denver

Published on June 06, 2025 12:31

June 5, 2025

Reasonable questions

I remember a time when the business world was looking for English majors. I also remember reading an argument once that there should only be essay tests for English majors, because writing ability would not necessarily be important for other applications.

I'm not saying that these mindsets were close together.

As it is, it is not uncommon that regardless of how much you know about math and science, some of it may not be very useful without the ability to communicate it to others. 

I have been thinking about those things because of artificial intelligence, of course, where Grammarly and ChatGPT and automatic suggestions in word processing programs are all trying to guess and shape what you say.

However, I have also been thinking of it because of my schoolwork. One of the things I have studied has been Universal Design for Learning:

https://udlguidelines.cast.org/ 

One of its recommendations is to have multiple means of expression. If students have the option of reporting their research in not just a written essay, but perhaps in a slideshow or a video presentation, that may help more students to convey their learning. If what you want to know is that they understand the human digestive system -- not their ability to follow the standard five paragraph format -- then the essay may hold back some students who understand the digestive system really well.

That doesn't mean that things like vocabulary and expository ability aren't important, but maybe they don't need to come up every single time in every single class. There has to be some kind of balance.

Personally, I find the word suggestions annoying. If I don't know what I want to say, the program is unlikely to guess correctly for me. I don't mind the automatic spelling check. Typos happen.  

Expressing my thoughts and spelling are also both things that come easily to me, which I know affects my thinking on the issue.

I have found some of my school assignments very difficult to get started. Help might be more desirable there, except that in the struggle I do learn more about it. 

I am in school for the purpose of learning. 

There are people who don't feel that way. Schools put measures in place to try and prevent cheating and encourage original work, but sometimes it is hard to feel confident.

One concern I have is whether we are getting a populace that won't value or desire expertise. There are some signs.

Educators can and are working on better defining what the learning goals are, how to effectively accomplish them, and assessments to know whether they have been successful. That will help, but if too many people don't care, then what?

We have to decide on values and then stick to them. There is room for disagreement.

There is one area where I kind of feel ridiculous but am adhering to it anyway.

Since getting on Facebook, I have been very conscientious about wishing people a happy birthday; the reminder is right there, and if I am seeing it we have agreed to friendship, at least in the social media sense.

Some time ago, Facebook started automatically populating the birthday wish, giving a few additional options in case you didn't like the main one. There are always little emojis too.

I am erasing that every time and doing my own birthday wish. 

Mine is a less grammatically correct one, because Facebook always puts the comma before the name. I know that's correct, but it doesn't feel natural to me, so I had not been doing it. (That's assuming I use the name, because if you are the age of my parents, or I used to call you by a nickname and now you are going by your full name... there are some neuroses at play, I know.)

Spending that extra time so that your birthday wish is less fancy is part of me being me. I will continue to do so. Even if I accidentally hit "Enter" I will go to your page and edit it. That's the kind of weirdo I am.

That is one way I stay human. 

Published on June 05, 2025 15:10

June 4, 2025

AI's human cost

I have referred multiple times to this trend where we don't value people, but without talking about what valuing people means.

I obviously mean valuing individuals and their welfare, but some of these stories have been making me think of the value of humanity collectively, even with (or especially with) all of our flaws.

I had to search a bit for two articles because they irritated me so much that I didn't save a link.

https://www.vox.com/future-perfect/384517/shannon-vallor-data-ai-philosophy-ethics-technology-edinburgh-future-perfect-50 

This one was less frustrating than the other. Shannon Vallor discusses transhumanism and the tendency to elevate technology, like maybe AI can come up with something more moral than us, while disputing those hopes.

I sympathize with frustration with human choices. I also know that the human flaws get replicated by artificial intelligence. That replication may not bring along sympathy and sentiment, areas in which humans still frequently come through (though perhaps less so with the humans having the largest influence on technology).

I couldn't find the article I absolutely hated, but another writer's reaction is here:

https://siobhanbrier.com/932/review-of-confessions-of-a-viral-ai-writer/ 

The original piece was about Vauhini Vara using ChatGPT to write about her sister's death. This included ChatGPT telling her a memory of something that never happened, but that Vara wished had happened.

I have not read the original piece, but in the linked article Siobhan Brier has; she found herself skipping the ChatGPT parts, though Vara expressed her preference for those.

I see some sense in that. Brier was looking for the human and did not find it in ChatGPT. Vara felt like she was finding something better than human, perhaps, but I think there were two important factors with that.

Obviously Vara was already more aware of her own words and feelings and was looking for something new. In addition, it was very clear that she had not worked out her feelings about her sister's death; the reason she used ChatGPT was that she could not write about it. In that way, perhaps it functioned as a type of therapy, helping her to get unstuck.  

It is not unheard of for therapy to go badly because the therapist has an idea in their head -- whether from their training or their own experience -- where they are not helping you in the way you need.

Their training could still help them realize when that is happening.

I know we are in an imperfect world, but I can't help but think that Vara might have done better talking to a friend or someone in a support group or a family member, or just writing on her own, taking it down the paths that she needed to follow. It might not have produced something ready for publication, but is something where readers keep wanting to skip the ChatGPT parts really "ready" for publication?

There can be struggles in getting through writing on your own, I know, but there is strength to be found in the struggling that I don't know that AI can provide.

Then, for those who are struggling with human relationships (possibly needing some maturity and development), is customizing a companion the best option there? Will they do better in a world where they can -- instead of learning about respect and mutual regard with living beings -- go for the "ultimate personalized girlfriend experience"?

I will not link to that, but here's the story of a guy who created his own AI board members, immediately hit on one of them, then had her tell him it was okay:

https://futurism.com/investor-ai-employee-sexually-harasses-it 

With kindness and grace for each other, we can be beautiful in our imperfections, and create beauty.

That's what I hope to see.

That is going to require something more genuine than AI can provide. 

But for more signs of bad ideas and opportunities for abuse:

https://www.nbcnews.com/tech/tech-news/ai-candidate-running-parliament-uk-says-ai-can-humanize-politics-rcna156991 

AI Steve did lose the election.

https://theconversation.com/ai-scam-calls-imitating-familiar-voices-are-a-growing-problem-heres-how-they-work-208221 

Published on June 04, 2025 14:14

June 3, 2025

AI lies

“The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world—and the category of truth versus falsehood is among the mental means to this end—is being destroyed.” -- Hannah Arendt

Sadly, I am not sure that the constant falsehoods regurgitated by artificial intelligence are even deliberate. I think a lot of them are just the normal failures of technology, exacerbated by the landscape in which it came to be. 

The damage is the same, though, and it doesn't have to be this way.

Let's look at some examples.

https://arstechnica.com/tech-policy/2025/05/judge-initially-fooled-by-fake-ai-citations-nearly-put-them-in-a-ruling/

A lawyer used AI to generate a legal brief for a case. Nine of the twenty-seven citations had errors, including two that simply didn't exist.

The judge found it pretty convincing, but still did his own research and discovered the... well, fraud implies a level of intent that I don't think was there. I suspect the reason for the use of AI was simply laziness, but that's still not a good justification.

Estimates are that chatbots have AI hallucinations as often as 27% of the time, and errors up to 46% of the time.

https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) 

It does make sense that laziness would result in shoddy work.

Of course, looking up enough legal cases to come up with 27 citations for a single brief does sound tedious, but might this happen in other areas too?

Why, yes.

https://www.sciencebase.com/science-blog/vegetative-electron-microscopy.html 

This article may give us the source for one of those hallucinations. An old paper had the words "vegetative" (in reference to cells) and "electron microscopy" in parallel columns and they were put together. 

"Vegetative electron microscopy" is not a thing, but now it is getting cited a lot. 

I really liked this quote (in the current article about AI, not the old one about vegetative cells):

... in a world where scientific endeavour is being derailed by moronic politicians and their henchmen, we need a stronger science base, not one polluted with such nonsense as vegetative electron microscopy. It leads to distrust in scientists and in science, it gives those who peddle pseudoscience, disinformation, misinformation, and fake, greater leverage to shake off the facts and replace them with ill-informed, politically-driven opinion. 

We need human understanding and diligent minds. 

This technology is not going to solve climate change. Even if you can use AI to run simulations and save time that way, you need a coherent mind with innovative thoughts setting it up.

If the past few years have shown us anything, it's that some people will bite at any false information that supports what they want to believe. This is a trend that doesn't need any help, but it's getting it.

https://www.axios.com/2025/05/23/google-ai-videos-veo-3 

"Google's new AI video tool floods internet with real-looking clips"

All for fun, right? 

One more thing:

https://www.cbsnews.com/news/sextortion-generative-ai-scam-elijah-heacock-take-it-down-act 

They didn't even have real images of 16-year-old Elijah Heacock. He still took the threats seriously, and he still killed himself.

One interesting thing in this article is that it mentions legislation supported by Trump to reduce sextortion. What about that provision in the "big beautiful bill" prohibiting regulation on AI for ten years? 

Again, that is a vector from which I do not expect coherent thought. But here, among us, we can think about this and we need to think about it. 

Published on June 03, 2025 12:40

May 30, 2025

Must Read: Copaganda by Alec Karakatsanis

The next post on AI is going to be about how it can embed false information in our collective consciousness. AI is not the only source of that problem.

In Copaganda: How the Police and Media Manipulate Our News by Alec Karakatsanis, one aspect of that issue is explored in illuminating ways.

There are many issues that can be worth exploring relating to modern policing and punishment. Karakatsanis touches on some of those, but the book has a focused mission, specific to reporting and how it shapes our conception.

I started following Karakatsanis about the time of the train thefts that are featured in the book, so there were things that were familiar.

There were also things that were unexpected; maybe you know that police have public relations people on staff, but could still be astonished by how many people and at what cost. That expense alone may be a reason why somehow every problem -- whether with crime or with police corruption and brutality -- somehow requires more money spent on police. 

After Derek Chauvin's murder of George Floyd, "Defund the Police" became a slogan, though one that was not engaged with fairly.

Part of the reason for that is a general belief that even if there are problems with the police, there is no other way of dealing with crime. Much of that comes through the efforts of those PR specialists and their work with media.

In addition, (and going along with my obsession with dominator culture) I will say that the idea that the only thing we can do in response to perceived danger is to crack down and control is something that appeals to that mindset. 

Yes, the majority of the people who voted for this current administration embrace dominator culture pretty passionately already, but it is easy for even people who want something better to still not quite believe that it is possible.  

They take away imagination and hope. 

We have not arrived at that belief organically; we have been taught it with persistent reinforcement.

Those lessons are full of lies. 

Even when gross misrepresentations draw enough criticism to warrant a retraction, somehow the retraction isn't quite honest either.

That is demonstrated over and over again.

One of the most personal moments came when the book was covering two reporters who frequently misrepresent stories in favor of copaganda. It started to sound very familiar. I was going to look up one of them with a gut feeling that he might have been the one who kept arguing how badly Oregon's decriminalization of drugs had gone.

I didn't need to look it up; it was there on the next page. It was him and he did lie.

I know smart people who believed it.

(Also, while the New York Times being themselves came up a lot, the "best" was a citation of a WWII internee praising the experience.) 

A common tactic is stating things as fact without backup, like calling something popular unpopular.

For every oppressive idea there will be people loudly defending it, perhaps trying to compensate for how many people disagree.

It is not always easy to know that you are right or how many people are with you. That is a problem reinforced by the people in the best position to solve it.

I don't have a solution for that, but this book can help. It goes over one problem clearly, helps you know what to look for, and then provides resources.

It's a start. 

Published on May 30, 2025 13:41

May 29, 2025

Garbage in...

There are things to say about the environmental impact of artificial intelligence and things to say about how it contributes to bad information, but first I want to point something out that relates to both.

First of all, while I do not personally use Grok, I do often see other people asking it for more information. One of the delightful things about that is that Grok often refutes right wing talking points.

There is one area where it is disappointing, in that when asked about environmental damage from AI -- and this is not just Grok, because it has to pull that information from somewhere -- the standard response is that while there is a toll, AI can help find solutions to combat climate change.

You can find projects where it says AI is helping, though there doesn't seem to be a big return on investment yet.

"Yet" is a key word.

So let's go to another recent exchange with a human, though not a great one.

https://www.space.com/astronomy/mars/eventually-all-life-on-earth-will-be-destroyed-by-the-sun-elon-musk-explains-his-drive-to-colonize-mars  

Elon Musk has explained his interest in settling Mars because eventually the sun is going to destroy all life on Earth.

Now, there were people mocking the quote out of context, thinking that he was talking about the sun eventually burning out, which will be a problem for life on Earth but would present the same problem for life on Mars.

What he was apparently referencing was the sun getting bigger until it burns the Earth, something he projects for 450 million years away, as opposed to the estimated point 5 billion years from now when the sun runs out of hydrogen and then expands, possibly swallowing only Mercury, Venus, and Earth, but potentially also pulling in Mars. That can make it sound like he is talking about a real thing, but I don't think he has it right.

My point is that with that time scale, I don't think it matters. Given the damage that is being done and the rate at which it is being done, there will be too much destruction along the path to whatever hypothetical answers could be found.

We know enough to make things better now, and it will be done through people, not the pipe dreams of people who think they are smart because they were born rich and raised spoiled. 

So I leave you with Musk's AI company belching methane, without regulation, into an area that already leads in asthma hospitalizations, because it is "temporary":

https://www.politico.com/news/2025/05/06/elon-musk-xai-memphis-gas-turbines-air-pollution-permits-00317582 

Well, given the environmental damage in this case, perhaps I should not really leave before also mentioning the proposed ten-year moratorium on state regulation of AI:

https://thehill.com/policy/technology/5314757-house-republicans-propose-ai-regulation-ban/ 

Published on May 29, 2025 14:56

May 28, 2025

For art's sake

I want to go back and unpack the first irritated response:

I’m not stealing from artists. I’m having fun in a way that nobody’s losing a job. People using technology to eliminate jobs is a different story. Is photoshopping bad, too? You shouldn’t question my ‘goodness’, for simply using AI in a victimless manner.

AI "art" absolutely does steal from other artists, though not always obviously. 

The first AI comic I saw was using rendered images of Zendaya. A romance writer aggressively promoting her books on Facebook recently had a merman on the cover that was totally Jason Momoa. 

If you felt like the pictures of you in different decades or cowboy you didn't really look like you, that may just mean that there was not a close match in the images that were being harvested. 

Okay, photos of famous people get used; so what?

There's a ton of ethical issues in that question that I am not getting into today, focusing on the production itself. 

There is a good chance that these are not photos of the actual actors, but artwork that other people did of them. Those images are then harvested from somewhere like DeviantArt. 

Were they going to earn money for turning Jason Momoa into a merman? Maybe not, and maybe not enough to live on, but the artwork can be monetized. Some professional artists do use art sites as one point of sale, and then find their images stolen. That used to mainly happen with T-shirts, but there are many other options for stealing now.

As it is, there are repositories of images out there that can be used with the artists' (including photographers') permission, including https://creativecommons.org/.

Part of that arrangement is that the license specifies what uses are permitted and under what conditions. Maybe you can use an image without fee as long as your use is non-commercial but it requires attribution. Therefore, if someone else sees it and wants to use it commercially, they know whom to ask for permission. 

Something you are doing for fun might affect someone else's ability to be compensated for work. Should that matter for art, which we do for love?

There are some problems if we don't.

One overarching principle that I will keep coming back to is this big circle of devaluing people. 

Yes, you do have studio heads (who make lots of money) questioning the value of writers and artists and even actors because they no longer believe those people are necessary. It is easy to draw the connection to job elimination there, as well as to the decline in innovation and quality of the entertainment.

It may be harder to connect when whipping up an AI image for the cover of your romance series where lords of the sea find love and passion with human women who have had a hard time with land-dwelling men. However, it's not that no one provided work for the covers; it's that you are using that work without any recognition or caring for that contribution.

Now, I have seen some pretty cheesy romance novel covers done with Photoshop, but yes, from an ethical standpoint I would have to say that was better.

I have self-published novels. None of the covers are great, but I didn't steal. That's worth something to me.

Besides, there are artists out there who might help you for a small amount, thus helping both of you. That's something that can happen but is becoming progressively easier not to think about. 

When AI keeps stealing (they sometimes dismiss it as "theft from the commons" which opens up a whole new can of worms about how we got to the problems that we have today), there is someone making a profit. 

Maybe it's not you, but are you still abetting it?

Published on May 28, 2025 13:29

May 27, 2025

The scuffle

Last July I saw a tweet from an improv comic I liked, encouraging his followers to use AI to make fake movie posters.

I thought I was being gentle -- maybe even a little flattering -- in my response:

AI generation used vast amounts of energy, contributing to climate change, and steals from actual artists. I think you are too good a person to encourage that. 

That was not appreciated.

I’m not stealing from artists. I’m having fun in a way that nobody’s losing a job. People using technology to eliminate jobs is a different story. Is photoshopping bad, too? You shouldn’t question my ‘goodness’, for simply using AI in a victimless manner.

Now, I was not able to find all of the exchange. I can see that his original post was deleted, and I could not find the one where he suggested that I unfollow him instead of ruining everyone's fun. 

If he deleted that, it would still not have deleted my replies, so I am not sure what happened, except that it has been several months.

Anyway, I am going to share the posts that I could find. If it were not clear, I am going to spend a few posts preaching against AI.

I do remember replying with an article on the Nizhoni issue and one on the climate destruction. I don't see the reply saying that obviously what the Dutch company was doing was wrong, but that what he was asking for wasn't anything like that, because it wasn't commercial. 

Then, I guess to address the climate change issue...

You might also want to stop eating almonds, avocados, and toss out anything you own that was manufactured under untenable and underpaid overseas working conditions. Then, I will stop making silly pictures on my iPad.   (Then there were emojis for peace and love, but I don't believe they were sincere.) 
I did reply to that.

I don't eat almonds, not because I think completely ethical consumption is possible under capitalism, but because I can still try and do better. Almonds overtax water and pollinators, so I don't use them. 
Not included in my reply is that I completely avoid avocados because I think they are disgusting. 
Of course, that could have looked like I was avoiding that issue because I was a hypocrite on something I really liked. In fact, I am. I tried giving up ramen because it generally involves palm oil, which is not harvested responsibly and damages orangutan habitat. I backslid on that one.

Regardless, I do try and be more conscientious, believing that my actions might make a difference and not wanting to support destruction. I don't do it perfectly, which is probably not possible at this time, but I try.

He was using it as a "gotcha" but it didn't really work; that's when he suggested I quit following him, which I did. I did not see that the original post was deleted until I was trying to find the thread for this post, so I don't know when it happened. Maybe he did change his mind, despite being irritated.

AI is terrible for the environment, artists, law... people in general, really. More on that later.

With the post on cryptocurrency, there were indicators that people might get involved with it for reasons of greed; maybe hoping to get rich quickly, maybe hoping to defraud lots of people. There was probably still an element where for some people it was new and different, therefore exciting. There is greed involved in the pushing of AI, but the big problem is the people for whom it just seems fun, interesting, and cool.

My goal over the next few posts is to make you hate AI as much as I do. 
Without knowing who will read this, it is highly possible that you have used AI to see how you would look in different decades, or to get questions answered, or even to get work done. I am not saying that makes you a bad person. I could be saying that, as you become more aware of its harmful effects, your desire for good will make you feel like you should stop. At least consider it.

That only leaves the obvious question of "who", and I feel a little bad about that, especially seeing that the post was deleted. "Improv comic" could easily direct people to Whose Line is it Anyway?, and then it seems even more important to identify him, because many of them have been so consistently fair and good-hearted in their interactions and the causes they support that I would not want anyone to think it was them.

But hating Trump doesn't automatically mean that you will hate AI; people aren't even framing the issue in the right terms. What else can I do but multiple blog posts?
Published on May 27, 2025 10:24