Anybody else happen across this? It’s a post by a guy named Ed Zitron, whom I hadn’t heard of previously, as far as I can recall.
The post starts this way:
A week and a half ago, Goldman Sachs put out a 31-page report (titled “Gen AI: Too Much Spend, Too Little Benefit?”) that includes some of the most damning literature on generative AI I’ve ever seen. And yes, that sound you hear is the slow deflation of the bubble I’ve been warning you about since March.
The report covers AI’s productivity benefits (which Goldman remarks are likely limited), AI’s returns (which are likely to be significantly more limited than anticipated), and AI’s power demands (which are likely so significant that utility companies will have to spend nearly 40% more in the next three years to keep up with the demand from hyperscalers like Google and Microsoft).
This report is so significant because Goldman Sachs, like any investment bank, does not care about anyone’s feelings unless doing so is profitable. It will gladly hype anything if it thinks it’ll make a buck. Back in May, it claimed that AI (not just generative AI) was “showing very positive signs of eventually boosting GDP and productivity,” even though that report buried within it constant reminders that AI had yet to impact productivity growth, and stated that only about 5% of companies report using generative AI in regular production.
For Goldman to suddenly turn on the AI movement suggests that it’s extremely anxious about the future of generative AI, with almost everybody agreeing on one core point: that the longer this tech takes to make people money, the more money it’s going to need to make.
Zitron goes on to take apart the whole idea that AI is the wave of the future, at least the near future. He’s certainly vehement. I don’t know enough about this subject to have an opinion, but vehemence and good writing will get you a long way with me, so I found this post persuasive.
What makes this interview – and really, this paper – so remarkable is how thoroughly and aggressively it attacks every bit of marketing collateral the AI movement has. [Economist Daron Acemoglu of MIT] specifically questions the belief that AI models will simply get more powerful as we throw more data and GPU capacity at them, and specifically asks a question: what does it mean to “double AI’s capabilities”? How does that actually make something like, say, a customer service rep better?
And this is a specific problem with the AI fantasists’ spiel. They heavily rely on the idea that not only will these large language models (LLMs) get more powerful, but that getting more powerful will somehow grant them the power to do…something. As Acemoglu says, “what does it mean to double AI’s capabilities?”
It’s a long post. Here’s the conclusion:
The reason I so agonizingly picked apart this report is that if Goldman Sachs is saying this, things are very, very bad. It also directly attacks the specific hype-tactics of AI fanatics — the sense that generative AI will create new jobs (it hasn’t in 18 months), the sense that costs will come down (they haven’t, and there doesn’t seem to be a path to them doing so in a way that matters), and that there’s incredible demand for these products (there isn’t, and there’s no path to it existing).
Even Goldman Sachs, when describing the efficiency benefits of AI, added that while it was able to create an AI that updated historical data in its company models more quickly than doing so manually, it cost six times as much to do so.
The remaining defense is also one of the most annoying — that OpenAI has something we don’t know about. A big, sexy, secret technology that will eternally break the bones of every hater.
Yet, I have a counterpoint: no it doesn’t.
… That’s my answer to all of this. There is no magic trick. There is no secret thing that Sam Altman is going to reveal to us in a few months that makes me eat crow, or some magical tool that Microsoft or Google pops out that makes all of this worth it.
There isn’t. I’m telling you there isn’t.
This is the first time I’ve seen a post like this, though for all I know people have been pushing back for months, or all year.
It sounds to me like there’s a real chance that this time next year, we’re going to be basically in the same place we are right now: a whole lot of ridiculously bad pablum “content” will have replaced “content” written by people, a whole lot of students will be trying to cheat on their English papers, a whole lot of so-called professionals will be trying to cheat when writing up their so-called junk research, and AI will be useful for crunching huge data sets and probably not a lot else.
That’s an interesting prospect. Getting rid of fake content on the internet may be a real problem, but a lot of what is called “content” is already useless and awful, so … is that going to make a big difference? If people in general are less enamored with using generative AI to diagnose patients (where it may very well hallucinate) and in general are better at ignoring fake content, then that sounds like probably a good thing?
Beats me, but the linked article is perhaps worth reading.