What SHOULD Happen When You Become a Supplier Rather Than a Consumer of AI?

A year ago I wrote our seminary's policy for Generative Artificial Intelligence, which I will simply call AI from here on out because I am lazy. The policy tries to strike a balance: how can one utilize the potential of AI to enhance the learning experience, and NOT allow it to become a substitute for learning?

So obviously the concern there is with being a consumer of AI. It is actually a bit weird that I was asked to come up with a policy for AI, since I really don't use it. I have tried out some of the things one can do in terms of answers, articles, and images, but only as part of the research for the policy. I haven't used it for any other purpose. I am not claiming to take the moral high road on this; it is just my personal choice. And I am not saying here that there aren't moral issues relating to the use of AI. Rather, I am saying that one CAN use AI in a way that is both moral/ethical and beneficial. I just don't use it. Someday that may change. Who knows?

This website has been gradually getting more views. Last year's traffic grew markedly over previous years, and this year is already trending higher. I find it a bit mystifying since (1) I don't advertise my posts at all, (2) I don't use images or SEO strategies that attract attention, and (3) my topics are interesting to me and few others. But… as I was looking at the REFERRERS to my website, I began noticing something. (Referrers are outside websites that guide people to my website.) There has been a growth in AI sites like ChatGPT guiding people to my website.

Is guiding the correct word? Probably not. Rather, material I am posting is being used by the AI program to guide its output to someone asking a query. I am assuming (correctly?) that my website is actually being used far more than the numbers I am seeing. That is because, assuming again, the numbers I am seeing reflect only the people who clicked on the reference in the AI response. That could be wrong. Perhaps the referrer is actually the AI bot visiting my site. Not sure… it doesn't really matter.

What does matter is that AI is using my website to produce answers, and in some cases actual papers. How should I respond to this? Here are a few things.

Ambivalence. I created this website to express my ideas as my own public diary of sorts. The fact that more people are reading it (at least those who have similar interests) is nice, even if some of those readers are doing so indirectly. I could be offended because Generative AI is something of a "plagiarism machine," but I have always been more interested in having an impact in the world of ideas than in being recognized… or being paid (I don't monetize this site). Where it DOES annoy me a bit is that if it is pulling from my website, essentially a personal weblog, it is probably drawing from a lot of other similar ones as well. Many are sketchy. That is part of the reason that, in academics, referencing blogs is discouraged, or even forbidden.

Responsibility in Research. If a person is reading one of my posts, I feel I can be a bit more opinionated, since it is fair to assume that the reader is not going into the read in full gullibility mode. If they are reading what I write with the "I Believe" button fully pressed, they probably should not be on the Internet. But AI doesn't necessarily (yet) have that critical sense. Therefore, I have a responsibility to research better and write more carefully.

Responsibility in Originality. As I said, I don't use AI. However, in recent months there has been growing concern about model collapse: what happens when AI queries AI that draws from sources that came from AI researched by AI from AI, continuing that cycle back further and further? I cannot prevent that from happening. However, I can control my little bit of things. I will do my own writing. It may not be better than what AI can produce (although I hope it is), but at least I am a non-AI supplier to AI programs.

If I remember right, the world of DUNE (Frank Herbert) involved a society that had rejected computer AI out of fear of what it could do. The Terminator movie franchise expressed that same fear in a different way. Fredric Brown's short story "Answer" expresses this fear succinctly. I have known some people who have interpreted "The Beast" in the book of Revelation as having a supercomputer that could ultimately be used to control the lives of everyone on earth.

I cannot control the future, and I think there are enough messed-up things clearly going on right now (February 28, 2025) caused by human beings. I can influence people in tiny ways, but I can't control them or change them. AI is here and is not likely to go away. I cannot control it or change it. I can simply try to influence it in tiny ways… with the hope of having some tiny influence on the consumers of AI.

Published on February 28, 2025