Bing Chat AI: Potentially Useful, Potentially Dangerous Apophenia
I was able to get the new Bing Chat AI, so I gave it a try.
Overall, my opinion of all the new AI tools is very low, much like my opinion of cryptocurrency and NFTs. I think that while the AI tools have some utility, it isn’t nearly as much as their proponents believe. I also think they’re massively overhyped.
There’s a bit of “Narcissus Effect”, I suspect – the tools just vomit the portions of their dataset indicated by the generative prompt to their users, who gaze in admiration at the reflection of their psyches. Or it’s a bit like the Israelites and the Golden Calf in the book of Exodus. Look at this beautiful thing we made! Surely it must be a god!
In the modern version of the Golden Calf, it’s look at this mathematical formula we made that brute forces probabilities! Surely it must be an artificial intelligence made in imitation of our own minds, and not an Infinite Crap Generator!
It’s not a golden calf and no one thinks generative AI is a god, but I think the same psychological mechanism is at play.
(If I were feeling really snarky, I’d say that Web1.0 was static content, Web2.0 was user-generated content, and Web3.0 is the Scammers’ Paradise.)
In particular, my opinion of AI image generation remains highly negative – AI image generation is essentially copying the images it’s been “trained” on. I don’t even like the word “trained” to describe it, since it boils down to a million photocopiers taking a million pieces from a million different images. The artificial “intelligence” involved is basically brute-force copying the patterns of the images in its dataset; when a user enters a prompt, it uses those copied patterns to spit out an allegedly new image. For example, if you go to any one of the AI image generators and use “Magic The Gathering Plains Card” as the prompt, the AI will dutifully produce a mishmash of every Magic The Gathering plains card its dataset scraped from the Internet, complete with weird symbols where the text would be, because it’s essentially trying to copy every single card and produce an average of them. (You can see an example as the picture for this post.)
While I am dubious about AI text generation, I am slightly less dubious about it than image generation, because it tends to be really fancy autocomplete. Granted, I don’t think highly of it, but I don’t think it’s as ethically sleazy as AI image generation. And it’s possible these tools have a use I don’t see yet. I mean, I don’t like voice assistants at all, but I recognize they’ve been hugely helpful to people, especially during the pandemic – particularly elderly people and people with mobility/health concerns. Or while I am dubious about cryptocurrency, it can be very helpful to people who live in countries with weak financial regulation or authoritarian governments. That said, there are enormous problems with AI text generation, which have already been thoroughly explored by people smarter than I am.
With that long-winded and somewhat cranky introduction out of the way, let’s get to Bing Chat!
When Microsoft started opening up its OpenAI-fueled Bing Chat, I decided to give it a try.
It’s possible that they might be on to the beginnings of a good idea. Possibly.
One of the big problems with the present form of the Internet is that search is dominated by Google, and Google derives most of its revenue from online ads. This has a distorting effect, which means that the top results for many Google searches are now just a bunch of SEO-optimized ad farms. I’m sure we’ve all Googled for a recipe and ended up on a page with a billion ads and the recipe way at the bottom. Finding accurate information with a Google search has become harder and harder because a lot of very smart people have optimized Google search for maximum ad revenue, so the top results for any particular search are often equally optimized for maximum search ad revenue.
Bing Chat, by contrast, is designed for questions. The way it works is you ask the chatbot a question, it searches for relevant results, and then summarizes them in a few tidy paragraphs. Then you can ask more refining questions to get better results. Every answer also contains hyperlinks indicating where the chatbot got its information for the answer.
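The loop described above – ask, search, summarize, cite, then refine – can be sketched in a few lines. To be clear, this is a hypothetical illustration of the general pattern, not Bing’s actual architecture; `search()` and `summarize()` here are made-up stand-ins for a real search backend and a real language model.

```python
def search(query):
    """Stand-in for a web search backend: returns (url, snippet) pairs."""
    return [
        ("https://example.com/a", "Snippet about the topic from source A."),
        ("https://example.com/b", "Snippet about the topic from source B."),
    ]

def summarize(snippets):
    """Stand-in for the language model: condenses snippets into a short answer."""
    return " ".join(text for _, text in snippets)

def answer(question, history=None):
    """One turn of the chat: search, summarize, and attach source links."""
    history = history or []
    # Earlier questions refine the query, which is how follow-up questions
    # "get better results" in the chat interface.
    query = " ".join(history + [question])
    results = search(query)
    return {
        "answer": summarize(results),
        "sources": [url for url, _ in results],
    }

reply = answer("Is this recipe any good?")
print(reply["answer"])
print(reply["sources"])
```

The important structural point is that the chatbot’s answer is only a condensation of whatever `search()` returned – which is exactly the weakness discussed next.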
This is more efficient than scrolling through page after page of search results, though you can see the weakness – the answers are only as good as the information the chatbot draws on, so it’s possible the AI could give you a neat and definitive answer full of absolute nonsense.
For example, I was talking with someone familiar with horse research, and she suggested a very specific question related to a very specific equine medical problem. Bing Chat ground out an answer that was authoritative-sounding but both very vague and entirely incorrect, which we suspected would happen because there simply isn’t very much medical research on this particular equine health problem and therefore nothing upon which Bing Chat could draw.
It’s also very bad at value judgments. I was talking about this with some education people, and they suggested that I ask a very specific question – whether Montessori preschools or Waldorf preschools have better outcomes for child development. (I have no idea what that means, either.) When posed the question, the chatbot ended up providing a summary of both types of preschools. At a casual read, it seemed like it answered the question, but it totally didn’t. Which remains the biggest problem with generative text AI – it sounds authoritative and knowledgeable, but it isn’t at all – it’s just fancy autocomplete stringing together the words that are most statistically probable to appear together in a sentence.
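The “fancy autocomplete” point can be made concrete with a toy model: count which word most often follows each word in a corpus, then generate text by repeatedly picking the most probable next word. Real language models condition on far more context and use billions of parameters, but the basic move – emitting a statistically likely continuation, with no understanding behind it – is the same. The tiny corpus here is made up purely for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus; "." is treated as just another token.
corpus = ("the horse is healthy . the horse is fast . "
          "the rider is happy .").split()

# Count bigram frequencies, e.g. follow_counts["the"] == Counter({"horse": 2, "rider": 1})
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def autocomplete(word, length=4):
    """Greedily extend `word` by always choosing the most probable next word."""
    out = [word]
    for _ in range(length):
        if word not in follow_counts:
            break
        word = follow_counts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))
```

The output is grammatical-sounding text assembled purely from co-occurrence counts – which is why such a system can sound authoritative while answering nothing.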
So it’s possible that, in this particular instance, AI could improve search. Though I retain my overall negative opinion of AI.
I think the biggest danger with this kind of chatbot is apophenia – the human tendency to see patterns where none exist. (You can see this on Twitter all the time, where many people assume every news event is part of a Sinister Plot perpetrated by a cabal of all-powerful yet highly incompetent conspirators.) While typing, it feels like you’re chatting with an actual, well-organized person on the other end of the connection. It isn’t, of course; it’s just an illusion created by the way the human mind works. We tend to anthropomorphize everything – our pets, our tools, the weather – and we often see patterns where none exist. For many people, this can be an intoxicating illusion. I can easily see people developing unhealthy relationships with this kind of chatbot and uncritically accepting anything it tells them.
I’m susceptible to this anthropomorphizing as well – I know Bing Chat is just a mathematical formula, but I still use “please” and “thank you” while typing to it. (Or maybe I was just raised well.)
Despite those dangers, Bing Chat might actually be a useful implementation of AI.
Overall, however, my opinion of generative AI as a technology remains negative for three reasons – 1.) it doesn’t solve a serious problem, 2.) the trivial problems it does solve are outweighed by the massive new problems it creates, and 3.) on balance, it makes the world slightly worse. If one creates an Infinite Crap Generator, it’s reasonable to expect an increase in the overall level of crap.
Hopefully, all the corporations investing massive resources into AI generation will take a massive loss, and then the technology can be marginalized.
-JM