Aaron Ross Powell's Blog
February 25, 2025
Reign of the Competency Cosplayers
American politics is a disaster and it’s due, in large part, to the fact that the people in charge have no idea what they’re doing. They don’t know anything, have little interest in learning, but are confident in their ability to fix everything. The result looks to be about what you’d expect: instead of fixing, they’re breaking, and instead of acknowledging that they broke it, they’re insisting that’s just what “fixing” looks like.
One way to frame this is that Americans—or enough Americans to win elections—reject expertise. They don’t value knowledge much anymore, and don’t feel they should be led by experts because they’ve looked around, don’t like what they see, and blame it on the experts having led us astray. The solution, at least in their minds, is to replace experts with non-experts, which can’t be any worse, right?
There’s something to that story. Americans have rejected experts. Where the story goes wrong, though, is that they haven’t rejected expertise. Instead, they’ve shifted how they assess expertise and what counts as an expert. They’re getting the assessment very wrong—we are not ruled by experts but instead now by cranks and dullards—but they still defer to people they imagine to be competent experts, who are in fact non-experts wearing the costume and adopting the mannerisms of experts.
And that’s what’s going on with America. The country’s been taken over by competency cosplayers.
How’d we get here?
If you’re angry and feeling righteous, the last thing you want is for someone to tell you you’re mistaken about whatever’s made you mad. Doubly so if the person telling you is, you’re convinced, the kind of guy who looks down on you, sneers at you, and belongs to a cultural group you believe is to blame for most of what’s wrong around you.
What happens, then, when knowledge and expertise about the topics that have provoked your anger, and that you’re emotionally invested in, is concentrated among those very people you least like? What happens when the expert consensus goes against your deeply held views, and rests primarily among people on the other side of the tribal, cultural, or partisan division?
One option is to accept that you might not agree with those guys on everything, but they do know more than you do, so maybe you should listen to them when they’re speaking from genuine expertise. Another option is to dig into the body of knowledge they possess, learn it thoroughly, and then assess, with your own newfound expertise, whether they’re correct.
But if you’re mad, and if those guys are in the out-group, neither’s terribly appealing. Becoming an expert yourself is a lot of work, and probably requires a great deal of education—and your political tribe doesn’t really value serious book learning, and certainly not the academies where it takes place, anyway. Deferring to their expertise means admitting you are wrong, and admitting they know more than you do, and that means giving them credit, and they’re the out-group.
Still, it’s nice to feel smart and in-the-know, and a good way to feel that is to have experts tell you you’re correct. It feels very good when intelligent and competent people say, “Yeah, you were right all along, and those so-called experts on the other side are actually stupid, or morally corrupt, or ideologically blinded, or all three. You, meanwhile, are intelligent, morally worthy, and so thoroughly adept in logic and reason that ideology has no obscuring hold on you.”
What you do, then, is find your own experts. People who clearly know what they’re talking about, and are definitely competent in their fields, but who won’t challenge you, and will give you expert-endorsed permission to continue believing what those other guys tell you is inaccurate.
Except there’s a problem. How do you know these guys on your side actually are experts? How do you know they’re not snake oil salesmen taking advantage of your ignorance and motivated reasoning? To identify expertise, we’ve really only got two options. First, we can develop it ourselves, but that’s, like we said above, a lot of work. And even if you can do it in one domain, no one, not since maybe Aristotle, can be an expert in everything. So your second option is to trust the judgment of other experts and to put a thumb on the scale for expert consensus. If everyone who knows a lot about a topic believes one thing about that topic, it’s probably better, lacking expertise yourself, to go with their consensus than to assume the random heterodox thinker has it right. That’s not foolproof, of course. The consensus can be wrong, and heterodox thinkers have been proven right. But it’s a decent heuristic.
That second option is what most of us do most of the time. It’s a good bet. But, if the expert consensus is against you, which it is if you’re the sort of person we’re talking about, it’s not a satisfying bet, because it looks an awful lot like admitting error.
The way out of this trap is to find a critical mass of your own experts, who agree with each other—and agree with you. They don’t need to be actual experts, not in a way other actual experts would recognize. In fact, they can barely know more about the topic than you do. But they need to convincingly perform as experts in an “I’m not a doctor, but I play one on TV” sense. And they need to perform that “expertise” in concert with enough similarly situated performative experts that it can feel like there’s something of a consensus on their, and your, side.
They need the appearance of competence, too: They’ve found success in applying the things you believe to be true, and their success in applying them reflects in a flattering way on you. It’s not just that the guy on TV knows a lot about, say, business, and that what he’s saying about business maps on to what you believe about business yourself. It’s that he’s achieved something in business—he’s demonstrated a degree of competence—and this shows that if you were to enter into business, you’d have similar accomplishments. It’s success by proxy.
But here’s the thing: It doesn’t matter if his success is real. In fact, it’s unlikely to be, because he’s saying the things you believe, and you’re not an expert, and the real experts disagree with you. And, generally speaking, success, while it has a degree of luck, also demands expertise. If he were a real expert, he’d disagree with you, because the consensus among real experts is that you’re wrong.
Thus the “success” in success by proxy can only really be a simulacrum of success, one given form and believability not through real world achievements that would show genuine expertise, but instead through a constructed narrative about that “expert” put together by the lore-builders of your preferred media and information ecosystem. Further, it’s not just that this person seems to “know” things. Book learning’s not that important. What matters is that they appear clever in applying their knowledge, that they do things. That they have accomplishments that are the result of their expertise. They’re rich. They’re famous. It doesn’t matter how they got rich or famous, just that they are. Or that the lore of your media ecosystem tells you they are. They have, in your mind, competence. And you know this because so many people who believe the same things you do keep telling you they have competence. It’s not just that this person isn’t a doctor but plays one on TV, it’s that the TV is telling you the medical drama he appears in is actually a documentary.
I call these people competency cosplayers. They’ve taken over the commanding heights of politics. They’ll keep breaking things, because their competency is pretend. They’re performing a role for people who want to see people like themselves in those roles. Or, if not like themselves, because they know they’re not rich and they know they’re not experts in this stuff, people who like them. People on their team, in their tribe, and opposed by the guys on the other side and their consensus, which is so unbearable, or stuffy, or snooty, and which keeps saying you don’t know anything and should listen to those who do.
They want to play a character, and you want to believe they are that character, and everyone playing or wanting to believe is in a media bubble telling them all these guys are legit, and everyone who’s not these guys is a fraud or corrupt. All this is understandable, because if the people you don’t like are telling you you’re wrong, you don’t want to believe them, because that means the people you don’t like are right. But when the people you do believe take the reins of power, all the imagination and fancy costumes your tribe can bring to bear won’t paper over the fact that these guys aren’t competent, and they aren’t experts. They’re just cosplaying.
February 24, 2025
The GOP is now just grifters grifting each other
Speaker of the House Mike Johnson doesn’t know much about technology. To be fair, very few of his colleagues in Congress do either. Which is why Congress routinely misunderstands laws governing technology, or gets caught up in tech-focused moral panics.
But Mike Johnson also leads House Republicans in a party whose leader is Donald Trump, a man of no actual skill, and even less knowledge, except for an instinct for branding, and for triggering grievances in a way he can take advantage of for personal gain.
And Trump’s set the tone, from the top down. The GOP isn’t so much a political party anymore, not in the sense of having a unified ideology or policy agenda. Instead, it’s the political home for the kinds of people who have right-wing cultural preferences, yes, but also for people who fall for scams.
Back to Mike Johnson. Here he is talking credulously about his party’s current leader, Elon Musk.
Johnson: "Elon's cracked the code. He's now inside these agencies. He's created these algorithms that are constantly crawling through the data & as he told me in his office, data doesn't lie. We're gonna be able to get the information. We're gonna be able to transform the way federal govt works."
— Aaron Rupar ([@atrupar.com](http://atrupar.com)) February 24, 2025 at 10:35 AM
What stands out about this is how Mike Johnson, who at one time managed to secure a law degree from LSU, is all-in on buying his town a monorail.
Elon Musk has very clearly not cracked any code. He’s not a computer science guy, he’s not a data scientist. He’s a salesman. But Mike Johnson doesn’t know enough about computer science or data science to know that. What he does know—and this gets to the heart of the contemporary GOP—is that Elon Musk is telling him what he wants to hear: The federal bureaucracy is bad, needs to be destroyed, and there’s suppressed knowledge that only he has, but which aligns with Johnson’s far right prejudices, that will enable him to save the country from the forces of the woke left.
Of course, the federal government is in fact too big. It tries to do too much. And much of what it does, even those tasks appropriate for it, it could do more efficiently. But that’s not what DOGE is about. Instead, it’s about selling a narrative of an evil swamp out to wreck America, in which DOGE’s chaotic, unfocused, and unproductive destruction is actually a sophisticated plan to achieve better government.
It’s a story of right wing heterodox “knowledge they don’t want you to know” against corrupt conventional wisdom. But it’s a story told by people lacking any genuine expertise in the system they want to reform (or destroy), being told for and to people without any genuine expertise, and no desire to acquire any. And it’s a narrative that exists not to make America better, let alone great, but instead to serve the personal interests of Elon Musk.
It’s also the story of the GOP itself. There’s a reason crank wellness influencers migrated right. Or why COVID conspiracists, even those whose politics used to be liberal, turned reactionary. Or why Trump is able to pump-and-dump meme coins to fleece his supporters. The GOP has become the home for people who felt out of step with the mainstream consensus, which meant the expert consensus, and their response wasn’t to acquire expertise to find faults in that consensus (which there are plenty to find), but instead to simultaneously reject the value of expertise while also elevating to the role of infallible expert anyone who could, in their ignorant assessment, convincingly talk the talk of an expert while not challenging their preexisting beliefs and prejudices.
As I wrote on Bluesky:
Zero interest rates let some people in Silicon Valley luck into riches without much breadth or depth of knowledge, but because they became rich, they convinced themselves they are geniuses, and that their genius extends to everything, and thus their opinions about everything are informed by genius.
— Aaron Ross Powell ☸️ ([@aaronrosspowell.com](http://aaronrosspowell.com)) February 23, 2025 at 7:34 PM
This synced with the political culture Fox News led, which was realizing that a lot of people don't know much and lack a curious nature, still want to feel informed, but are hostile to being told they're wrong. So you get pundits to pretend expertise while not challenging the audience's wrong ideas.
— Aaron Ross Powell ☸️ ([@aaronrosspowell.com](http://aaronrosspowell.com)) February 23, 2025 at 7:37 PM
The result is the current Trump administration: Rich guys who aren't geniuses but think they know everything about everything, and TV cranks whose only skill is telling uninformed Americans on the right that they're in fact the most informed Americans. It's not a recipe for good government.
— Aaron Ross Powell ☸️ ([@aaronrosspowell.com](http://aaronrosspowell.com)) February 23, 2025 at 7:39 PM
What makes the GOP’s circular con game circular is that everyone is simultaneously conning each other while being conned. Every new conspiracy theory or grift becomes part of the right-wing media ecosystem’s lore, and to signal membership in that lore to your audience (and the targets for your grifts), you need to believe all of it. Which means believing other people’s grifts, too.
This leads to an increasingly unhinged epistemic spiral. And it’s made worse by the fact that much of it is happening on social media, which has strange structural features that cut against correcting for bad information.
It’s not clear how people lost in this environment get out. A basic feature is that anyone who tells them they’re wrong, that they’ve got something incorrect, that they don’t know as much as they think they know, is speaking not from superior knowledge, but from ideological motivation. This short circuits the primary way most of us have for giving up mistaken beliefs: learning from people who are not themselves mistaken.
And because, right now, the system looks to be working—the grifters are still successfully grifting, and the party controls the White House and Congress and is able to put its favored grifters in positions of power—no single person caught up in it has much incentive to fix it. Rather, success comes from doubling down.
Even if Mike Johnson were to learn enough about data science to know that Elon Musk is full of it, what would he do with that information? Stand up to Musk? Risk looking like a fool who got conned? Tell his constituents they don’t know anything about this stuff, either?
No, Mike Johnson, and the rest of his party, are in this for the ride. Where that ride ends up is an open question, and maybe it ends up somewhere that keeps them ahead. But it seems inevitable the circular con has to collapse at some point.
February 12, 2025
Selling Out vs. Just Selling: The Weirdness of "Content" Monetization
I’ve been trying to put my finger on what feels like a generational divide regarding how creators of works relate to their creations. It’s not uniform, of course, but there seems to have been a shift in how we talk about, and so contextualize and approach, the act of “being a creator.” It’s a story of technological change, too. Medium influences message, obviously, but that’s not all of it. And it’s all centered on the evolving ontological characteristics of “content.”
I mean “content” here in a genericized sense, because its genericization is critical to understanding this odd world in which we find ourselves. Roughly put, “content” is a placeholder for “the things artists, writers, influencers, thought leaders, and so on create.” But it’s a placeholder that has come to usurp that which it holds the place of. The signifier has taken over the sign. We can think of it as if, over time, we stopped viewing pronouns as ways to conveniently point to each of the diverse and more significant proper nouns they might point to, and instead thought in terms of a generic and general “him” or “her” or “them.”
Content isn’t everything, though, and everything isn’t content. Creators might make sculptures, or bespoke shoes, or carved cabinets. But sculptures, bespoke shoes, and carved cabinets aren’t “content.” The term of art among the trendsetters for the creators of these artifacts is more likely to be “builder” than “creator.”
Two Kinds of Creations
There’s a distinction in philosophy of art that clarifies here. Among artworks, there are those that are “allographic” and those that are “autographic.” The latter are the sculptures and bespoke shoes: The work of art is what the artist held in her hand and shaped or stitched. It is art about which we can coherently speak of forgeries. A perfect copy of a bespoke shoe isn’t the same as the original. It’s just a copy, even if “perfect.” Allographic art, on the other hand, is art where every copy really is the original. If I publish a novel, we don’t talk about there being an “original” and each paperback copy as a mere copy. Rather, each paperback you buy of my novel is the novel. The same holds for recorded music, movies on film or digital, and so on. There might be one-off variants we attach particular weight to (signed first editions, first pressings, and so on), but we don’t attach weight to them because they are somehow more authentically the novel or the song than the third printing or the high def MP3 download. Rather, they are a particularly valued variant of that basic artwork.
With that distinction in mind, we can perhaps talk about “content” of the sort at issue here as “the stuff you might make that’s allographic.” It’s YouTube or TikTok videos. It’s blog posts. It’s newsletter posts. It’s podcast episodes. In common usage today, a “content creator” is a person who makes those sorts of things. This doesn’t mean content creators aren’t or can’t be artists, because, as noted, allographic art is still art. But it also doesn’t mean that all content creators are artists, because plenty of creations we’d categorize as allographic (such as blog posts, explainer videos, and podcast interviews) aren’t really “art” as most of us understand it. (Of course, the definition of “art” is much more thorny and complex than most of us think. Fortunately, we don’t need to even think about its complexities for our purposes here.)
There’s more going on, though, because for as long as we’ve had technologies of reproduction, there have been content creators creating and selling allographic content. That’s not new, and so not representative of a cultural shift. Instead, this shift is about how content creators think both about their content—or at least how they publicly talk about their content—and the relationship between it and the act of selling. And that shift resulted from the interaction of two roughly simultaneous trends, the first towards becoming a solo entrepreneur salesman as the culturally privileged aspiration among young people, and the second towards everyone wanting—and wanting to sell—a personal brand.
On Being a Salesman
If there’s a cultural inflection point in the rise of this new relationship between content creators and their content, it’s the publication, and runaway success, of Timothy Ferriss’s book, The 4-Hour Workweek. This was the book that convinced a lot of white collar and knowledge workers—and aspiring white collar and knowledge workers—that the path to easy riches was via the one-man sales operation. He taught the gospel of drop-shipping, where you’d set up a website that was little more than a thin front for someone else’s shipping business, get people to visit the website and buy, and then skim a cut of the resulting sales. You didn’t need to actually handle any product yourself, because your business was taking orders and then passing them along, with payment, to someone else who would box them up and put them in the mail to your (and their) customer.
Of course, drop-shipping wasn’t new. And, of course, being a salesman wasn’t new. Amway and Tupperware parties were precursors to the drop-shipping grind bro. In fact, the main innovation Ferriss brought was to pitch as masculine an age-old get rich quick scheme that had historically been viewed as feminine, which he accomplished by replacing collectivist sales parties and friend-pestering as the primary marketing mechanism with the rugged individualism of solo building a website or mailing list.
But what was new, or at least the cutting edge representation of emerging trends, was telling college educated people that their old plan of “rise to the top of a large organization” to get wealthy wasn’t as sure-fire or undemanding of their time as “get people to buy stuff from your independent sales operation.” Suddenly everyone wanted to be a solo entrepreneur.
Becoming a “content creator” today is just a new form of this. If you get very big, you might have some helpers, like a producer for your show, but the core idea is that it’s you doing the selling as an independent, not you being part of a larger organization that sells stuff. The other half of the “content creator” model, though, is all about what you sell.
The Rise of the Personal Brand
The intersecting trend is the rise of the “personal brand.” Whereas with old-school drop shipping, the answer to the question “What am I going to sell?” was “This other company’s generic supplements,” the personal brand shifted the answer to “Me.” It’s no longer enough to be a solo entrepreneur. The aim now is to be famous while doing it. Your company doesn’t have customers, but now you have an audience. And this has shifted the emphasis in the operation from having a product, and then getting people to find it, to being the product, such that people want to buy whatever is associated with it.
Again, in broad strokes this isn’t new. We’ve had famous people selling their brand for, well, ever. The oldest form might be the spiritual guru, the guy who claims to have unique metaphysical insights or a spiritual connection with what matters, and then talks followers into following and supporting him on the promise that proximity to him (to his personal brand) will lead to their own spiritual awakening and success. And the old guru model actually speaks, better than more recent parallels, to today’s “salesman of personal brand,” because the indicator of being on the path to success is the same: counting your followers.
Everyone who sells anything can measure success by counting how many things he’s sold. And if the number sold each week or month is going up, that’s a sign things are working. But if what you’re selling is a brand, the only way to sell anything in the first place is to achieve some success in establishing that brand. The trouble is, “brand awareness” is a lot harder to measure or watch incrementally grow than counting units sold.
The guru could do it, because he could see that yesterday he had three followers and today he has four and so, even if none of them are contributing anything to his financial wellbeing, the number’s going up. If it keeps going up, he’ll get enough who do support him to be a success. Likewise social media. To make a living as a “personal brand,” you need thousands or hundreds of thousands or probably millions of followers. If today you have only ten, you’re earning nothing, and won’t be for a while. But what you can see is that yesterday you had eight. And you can see that a week later you have a hundred, and so not only are you gaining followers, but the rate of gain is accelerating.
In a world before such easy quantification of growing brand awareness, lots of people would give up pretty early on. They’re not now a success, and the signs that success might come are opaque. In the world of social media, though, there’s a number that, even if it’s not going up quickly, is going up. So you stay on the treadmill.
Especially because it’s so cheap to do so. Becoming an “influencer” doesn’t demand much in the way of upfront costs. You already have a phone, which already has a decent enough camera. Platforms make distribution costless, and tools exist so anyone can cut a short form video. This isn’t feature filmmaking, and it isn’t years spent writing a novel and then trying to find a publisher to sink a bunch of resources into manufacturing and distributing it. This is dancing into your phone to a song you got, along with every song in existence, for a few bucks a month on Spotify—if you didn’t just grab it for free on YouTube.
AI tools drive the costs even lower, both in dollars spent and in time needed. Want to start a newsletter about how emerging technologies will impact project management, but lack expertise in either topic? Get a robot to draft something for you.
The Backwards Funnel
Thus the current spot we’re in: People want to be solo entrepreneurs, and what they want to solo sell is their personal brand, and the way they sell their personal brand is by monetizing “content” associated with that brand.
But notice that this is backwards from the way creators have typically approached making a living with their creations. A writer becomes a writer because she wants to write. She has something she wants to say, and she hones her craft in saying it. Hopefully, other people want to hear it too, and if they do, then they buy it and her brand grows around being the person who said those things. In other words, the personal brand is downstream of the content.
Even in instances where it looks like more directly selling the brand, such as a famous person having a memoir ghost written for him, the brand likely grew around some area of expertise. This person is very good at acting, so became famous, and now people want to read his life story. This person is very good at business, and so established a brand around being good at business, and now people want to buy his thoughts on how they, too, can be good at business.
“Content creators” flip the script. They want to be the kind of person who is rich and famous for “creating content,” so they start by trying to create a brand by gaining followers for themselves, and the way they go about that is figuring out what “content” will attract followers.
This means expertise comes, if it comes at all, after brand. This is why so many content creators are in the business of branding themselves as content creators and then selling content about how to be a content creator. They’re building their brand around what interests them, which isn’t the content of the content, but rather how to sell a genericized “content.”
There’s Nothing Wrong With Selling
It’s important to distinguish this critique from a critique of selling your creations. Artists have always done the latter. Artists aspire to be working artists. Novelists want to sell books, musicians want to sell songs or tickets to their shows. But this is a critique, or at least a highlighting, of a different way of thinking about that whole business.
I was a teenager in the 1990s. The culture then was pretty clear that among the worst things you could do was “selling out.” We all had experienced the trauma of knowing an artist before they sold out, and then suffering the disappointment of seeing them sell out.
This wasn’t about getting big. You could have a platinum album without selling out. Instead, it was about giving up on creating the kind of content that had been meaningful to you and replacing it with the kind of content you thought would sell. It’s Weezer after the flop of Pinkerton.
But kids don’t worry about selling out anymore. In fact, grind mindset culture and influencer culture and hustle bro culture all elevate selling out to the thing aimed at. If you can sell out, then you’ve succeeded. Failure is the inability to find an avenue to selling out. You don’t start with something to say and then figure out how to sell it, but instead start with a desire to sell and then figure out what to say.
The Supremacy of “Content”
Of course, plenty of people making a living by selling or monetizing their content on the internet don’t fit this “personal brand first and genericized content” model. Lots of artists make meaningful allographic art.
But there has been a recognizable shift in how willing people are to shamelessly embrace the “personal brand first and genericized content” model. How willing they are to sell it openly as what you should aim at. It’s why so many creators are unapologetic about creating AI slop and why so many tech firms market their products as helping creators do that.
And maybe the rise of AI slop is the way out of this. Maybe as content channels flood with bland content made by people whose interest is selling something instead of saying something, we’ll develop a counter-revolutionary force of people who demand meaningful content before they’ll follow a brand. That still exists, too, of course. We just need to treat it more fully as what to aspire to.
December 18, 2024
The Politics of "Unbiased" Conservative Search Engines
I occasionally run ads on this newsletter, but I don’t typically write about them. I’m going to do so today, however, because the advertiser my newsletter host’s ad network brought me is pretty weird. And its weirdness is a good way to talk about how the American right is constructing a politically correct parallel economy for themselves.
Let’s start with the ad itself. (I’ve screenshotted it, because I’ve moved hosts since writing this and so don’t have the ad directly in this post anymore.)
Still with me? Let’s talk about Freespoke.
Freespoke is interesting, from both product and messaging standpoints. Briefly, here are the main features they highlight that set Freespoke apart from Google, Bing, and other competing search engines, and that constitute the core of its sales pitch to users:
1. It attempts to categorize individual results along the political spectrum, labeling sources as “left,” “right,” or “middle.”
2. It is “100% private” in that it doesn’t ask you to log in.
3. When your search points you to products, it “showcases American-made and Veteran-owned businesses” instead of the “low-quality products made overseas” other search engines point you to.
4. It blocks porn.
If you guessed after reading these that Freespoke is a project of the political right, give yourself a star. It’s founded by Todd Ricketts and Kristin Jackson. Ricketts is the co-owner of the Chicago Cubs, and served as the national finance chairman for the Republican National Committee. Jackson has a long history in the GOP political and policy worlds. When Freespoke brands itself as an “unbiased” search engine that doesn’t “manipulate the information,” it’s clear what it has in mind is all the left-wing “bias” and “manipulation” present on search engines like Google.
There’s No Such Thing as Unbiased Search
Every search engine, including Freespoke, has some bias. Every search engine manipulates the results it shows you. It would be impossible for a search engine to do otherwise because, no matter what you search for, there are more results on the web plausibly related to it than it can functionally show you at once. So it has to have some method for choosing which are relevant, and that method is necessarily a form of “bias.” Furthermore, a search engine doesn’t show you all the relevant results simultaneously. Instead, it lists them, and listing them means it has to put them in some kind of order. That ordering is necessarily a form of “manipulation.”
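To make the point concrete, here's a toy sketch (everything in it is invented for illustration, and real engines are vastly more complicated): any ranker has to collapse each page into a single sortable score, and the weights chosen for that score are themselves editorial choices. Change a weight, and the "neutral" ordering changes.

```python
# Toy illustration only: hypothetical pages and made-up signals.
# The point is that ranking requires a scoring function, and the
# choice of weights in that function IS a form of bias.

pages = [
    {"url": "a.example", "keyword_hits": 3, "inbound_links": 120},
    {"url": "b.example", "keyword_hits": 5, "inbound_links": 10},
    {"url": "c.example", "keyword_hits": 4, "inbound_links": 60},
]

def rank(pages, link_weight):
    # Reduce each page to one number, then sort descending by it.
    # link_weight decides how much popularity counts against
    # query relevance; there is no "neutral" value for it.
    return sorted(
        pages,
        key=lambda p: p["keyword_hits"] + link_weight * p["inbound_links"],
        reverse=True,
    )

# Two weightings, two different orderings of the same results.
popularity_first = [p["url"] for p in rank(pages, link_weight=0.05)]
relevance_first = [p["url"] for p in rank(pages, link_weight=0.0)]
```

With the link-heavy weighting, the heavily linked page wins; with links ignored, the keyword-dense page wins. Neither ordering is "unbiased"; each just encodes a different judgment about what matters.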
Even if Freespoke can somehow get around those iron laws of search engine design, you might notice that points 3 and 4 in their sales pitch are both forms of bias and results manipulation. They have a bias in favor of American made products. They manipulate results by excluding pornography. (I’m reminded of Substack’s defense of their content moderation policies as being a deep commitment to free speech, but it’s a “free speech” that allows Nazis but bans sex workers.)
Bias Labeling Doesn’t Work
Playing with Freespoke, the results are … fine. Not noticeably better than what you get elsewhere. (It’s unlikely Freespoke spun up a search engine from scratch, given the complexity and resource intensiveness of that undertaking, so I suspect they, like many “alternative” search engines, are using Bing’s API, or something like it. But I can’t find enough information to confirm either way.)
Their political bias labeling feature runs into the same problems these assessments always do, whether hand-crafted with human judgment or, as in most cases, based on AI-powered language assessment. Namely, where they do work, they’re not terribly helpful, and where they could be helpful and interesting, they don’t work. The left-right political spectrum is too blunt an instrument for cataloging ideologies.
Still, people keep thinking they can make this work. Patrick Soon-Shiong, the billionaire owner of the L.A. Times, has been all over the news lately for wanting to shoehorn it into his newspaper. The response probably isn’t what he was hoping for. Basically everyone is pointing out that it’s a bad idea, and not just a bad idea, but a pretty stupid one. Bias filters don’t work. Why? Here’s just a handful of reasons:
1. Subjectivity of Language: Language is inherently subjective, and words can have different connotations and interpretations depending on the context and the reader’s background. Seemingly neutral language can carry implicit biases, and seemingly biased language can express what are actually pretty level takes.
2. Complexity of Bias: Bias can be multifaceted and manifest in various forms, making it difficult to capture with a single metric or score. A “bias meter” or a left-right spectrum necessarily oversimplifies and fails to account for the nuances of journalistic practices.
3. Lack of Contextual Awareness: Bias meters often lack the ability to consider the broader context of a news story, including historical events, cultural norms, and the specific circumstances surrounding the event being reported. This can lead to misinterpretations and inaccurate assessments of bias.
4. Limited Understanding of Intent: It is difficult for a bias meter to determine the intent behind a journalist’s choices. A seemingly biased presentation may be the result of unintentional biases or constraints such as space limitations or editorial guidelines.
5. Potential for Manipulation: News organizations or individuals could potentially manipulate a bias meter by intentionally using language or framing techniques to achieve a desired score. So even if the meters or labels work now, if enough people start paying attention to them, they’re more or less guaranteed to stop working.

I’ll also note that, from personal experience, these labels fail. I’ve spent my career mapping out and arguing for a quite consistent set of political principles derived from a foundation of coherent moral claims. But if you pull up the lists I’ve been added to by various users on Bluesky (Bluesky’s protocol means all the lists you create, including the list of users you’ve blocked, are publicly accessible), you’ll discover I’m on one for “Tankies & Radical Leftists,” another for fascists, and yet another for classical liberals.
Throughout my time writing and podcasting about political issues, I’ve been accused of being on the right and on the left. So what’s my bias?
“Oh, the internet. It can be a scary place… but less so with Freespoke.”
Finally, let’s talk about the first line in the ad, because it gets to this alternative ecosystem conservatives are building. Years ago, for a podcast, I had to watch several of the God’s Not Dead movies. For those unfamiliar, these are a series of films made by Christian evangelicals for Christian evangelicals. I’m an atheistic Buddhist, so I’m not the target audience for these movies, and not only hadn’t seen them before, but I hadn’t seen any of the movies made by the alternative movie industry that exists to serve evangelical audiences. The movies are terrible and mostly crazy, but one theme that stood out is “The world outside your narrow evangelical community is scary and threatening, and so you all need to stick together, and not let its corrupt values influence you.” It was striking how much these movies presented a world that simply doesn’t exist, not even remotely. Yet their entire game is convincing their audience it does and that they’re living in it.
Freespoke doesn’t seem quite as crazy as God’s Not Dead. But that line in the ad positions it with a similar pitch. The kinds of people deeply worried about pornography, and very concerned about not buying products made by foreigners, and fretting about Google hiding the “truth” from them, are quite likely the sorts of conservatives scared of a (largely imagined) world dominated by the cultural left and out to get them. The story Freespoke is selling is that if you just use their search engine, you’ll be on the path to “Finding the truth, and the freedom to make up our own minds!,” as Mary C. puts it in one of the site’s testimonials. You’ll have access to the unbiased “truth” LaurieAnna W. found: “My husband and I searched a variety of topics we felt were being censored on Google, the difference in Freespoke is amazing! All the information they are suppressing is right there!…Great site!”
And with those truths and suppressed information in your pocket, you needn’t feel as threatened by the liberal arts sophomores at Sarah Lawrence you’ve never met, because you’ve got the “arrived at my own conclusions” truth on your side. But of course a great many of the sources you find while “doing your own research” are, well, quite bad. You’re not on the uncensored path to enlightenment; you’re instead largely and systematically misinforming yourself.
I don’t hold all this against Freespoke. Like I said, playing with it gives more or less okay results, and the left/right/middle labels aren’t uproariously bad, even if they are pretty useless. Let a thousand non-Googles bloom, catering to the tastes of diverse audiences. That’s the market in action.
But the sales pitch warrants some skepticism.
December 9, 2024
The Shaky Future of Trump's Personality Cult
In a little over a month, we’ll see the swearing in of a second Trump administration. But whether the administration remains the Trump administration for the next four years is a bit less certain than it was the first time around. The incoming president showed pronounced mental decline during the campaign, and his physical appearance, including growing signs of frailty, indicates his health isn’t terrific. He’s quite old, doesn’t exercise, and has a poor diet.
This throws a pretty substantial X factor into predicting what the coming years look like. Right now, as he’s assembling his cabinet, it’s clear Trump intends to rule not just as a temperamental authoritarian with clear fascist urges and worldview, but to bend the government toward personalism. He wants loyalty with the aim of personal enrichment. The result, so far, is a lot of incompetent people teed up for roles in the administration, from Musk and Ramaswamy given the task of finding inefficiencies, to cabinet picks chosen not because they’re skilled administrators, but because they look good on TV to a not very bright and not very informed old man.
So what happens if, in the next year or few, Trump’s health declines enough that he’s not capable of carrying out any of his duties, or maintaining the appearance of doing so? What happens if he dies?
The answer to that is bound up in a pretty basic feature of Trumpism: it’s not really an ideological movement, but is instead a personality cult. What defines Trumpism, and holds the MAGA coalition together, isn’t a shared commitment to a set of ideas and policy preferences. Instead, what defines Trumpism is Trump. The reason Trumpism has been successful in twice taking the presidency, and in fully claiming the Republican Party, is that lots of Americans like Trump, and he’s able to wield that admiration against any member of the GOP who steps out of line.
This has been a powerful force in American politics, obviously. But it’s also a brittle one. Because it is not ideologically unified, the American right is instead a coalition of factions in fact quite hostile to each other. These include the old school, Reaganite, Paul Ryan conservatives—or at least the ones who haven’t yet left or been driven out. They include the right reactionary “post-liberal” ideologues in the style of JD Vance and his patron Peter Thiel. And they include the QAnon faithful, the conspiracy-addled anti-vaxxers, the newly resurgent neo-Nazis, and the just plain crazy weirdos like Marjorie Taylor Greene. Right now, all these groups can be lumped together into Trumpism because they’ve all sworn a degree of fealty to Trump, or believe that cozying up to him will advance their personal interests. But most of them fundamentally hate each other.
Thus when the personality cult loses its personality, it’s not clear what happens to American conservatism. Trump himself is popular with enough voters that he can squeak out electoral victories, but the actual policies of Trumpism (e.g., Project 2025) are spectacularly unpopular. Plenty of voters like the idea of this celebrity businessman who talks like them, upsets the people they don’t like, and has spent decades appearing in the news and on television as an appealing (to them) quintessence of “success.”
But if Trump is out of the picture, who steps in? Vance is loathsome, turns off basically everyone who hears him speak, and entirely lacks charisma. Donald Trump Jr. clearly wants to be the next headlining Trump, but is so dumb and coked up that he’s barely functional. No one in Congress appears poised to seize the opportunity. There are some influencers popular with MAGA crowds, but they lack the enormous brand awareness Trump brought to his campaigns, and so couldn’t coast, like Trump has, on the idea of Trump the voters want to support.
Trumpism faces a basic problem if it’s to continue beyond Trump. On the one hand, the people who would become Trump through shouting Trumpy things have largely failed with voters, especially when they have to convince a whole state of them, instead of an extreme right congressional district. This is why the Senate is considerably less Trumpy than the House. On the other hand, those who’ve tried to turn Trumpism into an ideology and then work to advance that (the Vances, the Millers) are just creepy as hell, and rightfully repulsive to most Americans.
Further, Trump’s greatest desire is to be the center of attention. He wants fawning and praise. He wants nothing more than to be the most important, most respected man there is. He’s happy to have obsequious loyalty, and he loves to surround himself with famous people whose fame rubs off on him. But this also means he hates getting upstaged. Even if Trumpism can somehow identify its heir apparent to be the next personality leading the cult, Trump is likely to knife that person as soon as it’s clear that’s what they are, or what they might become. If Trump were committed to the idea of Trumpism continuing, as a movement, after him, he’d be cultivating its next leader. But he’s not, both because he’s incapable of thinking that far in advance, and because he’s incapable of thinking about anything other than himself. For Trump, Trumpism is Trump, and the interests of Trumpism as a movement simply are whatever happens to be Trump’s own interests from moment to moment. He doesn’t care to continue the personality cult because he doesn’t care about anything beyond attention and money.
If Trump can’t make it through his term, there’s a rather more than zero chance that the Trumpist coalition splinters, and that his administration—or the new Vance administration—falls into a dysfunction of infighting, backstabbing, and everyone in a leadership role wanting an array of entirely incompatible policies. And then, if America survives until November 2028, the GOP gets wiped out in the election, with no immediate path forward.
None of that means Trump can’t do a ton of damage, or that whoever comes after him can’t do a ton of damage. And Trump might hold on to his health and sanity long enough to make it through until January 2029. But those of us who want to see Trumpism ended as a force in American politics would do well to think about how we can take advantage of this fracturing, and the suddenly weakened American right, if the time comes.
November 23, 2024
Why, Despite the Numbers, Bluesky Feels Bigger than Threads
Meta’s Twitter alternative Threads is significantly bigger than Bluesky (even as Bluesky crosses the 20 million user mark). It feels quite a lot smaller, though, which contributes to why a lot of people prefer Bluesky over Threads. But why does it feel that way?
A common answer is “Threads suppresses news and political content, so it doesn’t get traction.” That might be part of it, but I don’t think that’s the whole, or even main, story. Rather, Threads has a unique approach to community. One way to think about it is that Threads is trying to build a fundamentally different approach to social media and the social media experience from what Twitter was and what Bluesky aims to be. Because what it’s building is quite novel, and also not clearly articulated by Threads’ leadership, it gets interpreted not as a worthy approach—even if one not for everyone—but instead as failure to accomplish the end of being a successful Twitter alternative.
The Old Twitter and New Bluesky Approach
Bluesky has an algorithmic feed, where posts are ranked on various metrics and shown out of order, and which includes posts from people you don’t follow. In fact, it has many such feeds, and lets you build your own.
But Bluesky doesn’t default you to an algorithmic feed, and my sense is most people there don’t spend much time in them. Instead, Bluesky is primarily a chronological feed, showing you everything from everyone you follow (and everything they repost), ordered by time and date. Threads does the opposite. It has a chronological feed, but it doesn’t really want you to use it, because it makes accessing it somewhat opaque, and insists on switching you back to the algorithmic “For You” feed quite often. Threads wants you using its algorithmic feed, and seeing what it thinks you want to see.
Old social networks, such as Twitter in its heyday, were built around virality. If something got popular, the algorithm made it more popular. Chronological feeds produce a similar result, because you see things from outside your following network when the people you follow repost (or retweet) them. The more a post is seen, the more likely it is to be reposted, and the more likely still other people are to see it. Bluesky works in a similar way.
In practice this means that, while there are niche communities on the platform, the experience of using it feels more like being a part of one big community. This is how, for example, Twitter was able to be a driver of the broader culture, while not being huge itself in terms of active user numbers. What was happening on Twitter, everyone on Twitter was talking about. And many of the people most active there were influential outside of Twitter.
Threads goes in the other direction.
The Threads Approach
The Threads algorithm is anti-viral. Rather than showing you what’s popular, it figures out, pretty narrowly, what topics (it thinks) you’re interested in, and shows you people talking about those. It rarely brings you posts outside of that handful of topics. The result is that, while Twitter (and Bluesky) feel/felt like everyone together in one room, Threads feels like a ton of smaller, barely overlapping rooms, and you’re in only a few of them.
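The contrast can be sketched in a few lines (a hypothetical toy, not any platform’s actual ranking code): one feed surfaces whatever is most popular network-wide, the other filters to inferred interests and ignores popularity entirely.

```python
# Hypothetical posts with engagement counts and an inferred topic.
posts = [
    {"text": "huge breaking news", "likes": 9000, "topic": "politics"},
    {"text": "my sourdough starter", "likes": 12, "topic": "baking"},
    {"text": "new bike trail open", "likes": 40, "topic": "cycling"},
]

def viral_feed(posts):
    # Popularity-first: the same blockbuster posts reach everyone,
    # producing the "one big room" feel of old Twitter and Bluesky.
    return sorted(posts, key=lambda p: -p["likes"])

def interest_feed(posts, user_topics):
    # Interest-first: only posts matching your inferred topics get
    # through, so a viral post outside them never reaches you.
    return [p for p in posts if p["topic"] in user_topics]

print([p["text"] for p in viral_feed(posts)])
print([p["text"] for p in interest_feed(posts, {"baking"})])
```

Under the second rule, two users with disjoint topic sets effectively inhabit disjoint networks, no matter how large the platform is.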
Threads is at least an order of magnitude larger, in terms of people using it on a regular basis, than Bluesky. But the Threads experience feels much smaller because using Threads is like hanging out in a much smaller network, or a handful of very small networks. Most of those hundreds of millions of users are invisible to you. They might as well not even be on Threads at all in terms of any particular user’s experience.
That’s not bad. It’s an interesting and, for many, appealing approach to social media. There’s a reason Discord communities are so popular, and why web forums were before them. People like being in smaller communities of narrowly shared interests. It’s a reasonable way to approach a social media platform, and clearly one that lots of people, given Threads’ user numbers, quite like.
But if you’re looking instead for “the discourse,” for that old school Twitter sense of being in one big conversation, where your remarks can blow up and go viral and gain you a ton of followers across the network, you’re going to prefer Bluesky. Neither approach is better than the other. They’re different. And, with Twitter’s fall, we’re in a time when such experimentation in the basic structure of social media can happen to a degree that Twitter’s dominance of its niche had previously prevented.
August 25, 2024
Silicon Valley’s Very Online Ideologues are in Model Collapse
The ideologues of Silicon Valley are in model collapse.
To train an AI model, you need to give it a ton of data, and the quality of output from the model depends upon whether that data is any good. A risk AI models face, especially as AI-generated output makes up a larger share of what’s published online, is “model collapse”: the rapid degradation that results from AI models being trained on the output of AI models. Essentially, the AI is primarily talking to, and learning from, itself, and this creates a self-reinforcing cascade of bad thinking.
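The feedback loop can be sketched with a toy simulation (a Gaussian standing in for a model, and mode-favoring sampling standing in for generation; this illustrates the dynamic, not actual LLM training):

```python
import random
import statistics

def next_generation(data, n=2000, rng=random):
    """Fit a Gaussian to the data, sample fresh output from the fit,
    then keep only the high-probability middle: generative models tend
    to oversample their modes and lose the tails. Each generation is
    trained solely on the previous generation's published output."""
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    return [x for x in samples if abs(x - mu) <= 1.5 * sigma]

rng = random.Random(42)
data = [rng.gauss(0, 1) for _ in range(2000)]  # stand-in for human-written data
initial_spread = statistics.stdev(data)
for generation in range(10):
    data = next_generation(data, rng=rng)
final_spread = statistics.stdev(data)
# The diversity of the "published" distribution shrinks generation
# over generation, even though each individual step looks reasonable.
print(f"spread: {initial_spread:.2f} -> {final_spread:.2f}")
```

After ten generations the spread of the distribution has collapsed toward the mode: variety that only existed in the original data never makes it back in.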
We’ve been watching something similar happen, in real time, with the Elon Musks, Marc Andreessens, Peter Thiels, and other chronically online Silicon Valley representatives of far-right ideology. It’s not just that they have bad values that are leading to bad politics. They also seem to be talking themselves into believing nonsense at an increasing rate. The world they seem to believe exists, and which they’re reacting and warning against, bears less and less resemblance to the actual world, and instead represents an imagined lore they’ve gotten themselves lost in.
This is happening because they’re talking among themselves, and have constructed an ideology that has convinced them those outside their bubble aren’t worth listening to, and that any criticisms of the ideas internal to their bubble are just confirmation of their ideology, not meaningful challenges to it. They’ve convinced themselves they are the only innovative and non-conformist thinkers, even though, like an AI trained on AI slop, their ideological inputs are increasingly uniform and grounded in bad data and worse ideas.
Model collapse happens because structural features of the training process, intentional or unintentional, mean that AI-generated content is included, at an increasing frequency, in the training data. The AI “learns” from sources that don’t correct its mistakes and misconceptions. Structural features of a similar sort are playing out in the far-right corners of Silicon Valley.
First, there’s what I call the “Quillette Effect.” Because we believe our own ideas are correct (or else we wouldn’t believe them), we tend to think that people who share our ideas are correct, as well. Thus, when someone who shares our ideas tells us about new ideas we’re not familiar with, we tend to think their presentation of those ideas is probably accurate. Quillette is a website that has often published articles explaining ideas on the left to its predominantly right-wing audience. If you’re part of that community, and share the generally right-wing perspective of Quillette authors, but don’t know much about the left-originating ideas they discuss (critical race theory, postmodernism, etc.), you’ll likely find their explainers persuasive, not just in terms of being a reasonably accurate presentation of those ideas, but also in their conclusion that those ideas lack merit. But if you do know something about those ideas, you’ll find that Quillette presents them poorly and inaccurately. In other words, the “Quillette Effect” is an example of an ideological community tricking itself into believing it has learned about ideas outside of its tribe, when in fact it’s just flattering and reinforcing ideas internal to its tribe. And Quillette is far from alone in this. Bari Weiss’s Free Press, quite popular in online right-wing circles, plays the same game.
Second, there’s the structural issue of wealth dependency. When you’re as rich as Musk, Andreessen, or Thiel, a great many of the people you interact with are either of your immediate social class, or are dependent upon you financially. Your immediate social class, especially the people you interact with socially, are likely to share your ideological priors, and so not challenge you at anything like a deep level. And people who are financially dependent on you are likely to reflect your ideas back to you, rather than challenging them, because they don’t want to lose your support—or they are hoping to gain it. Thus your ongoing training inputs will reflect your own ideological outputs. (The recent story of the Trump campaign buying pro-Trump ads on cable stations near Mar-a-Lago so Trump will see positive messages about himself—even though this is wasted money from a campaign strategy standpoint—is an example of this dynamic.)
Third, the structure of social media not only means that very online people tend to be flooded with ideologically confirming views, but that when they encounter contrary positions, it’s in a way that makes them easier to write off as unserious and fringe. The nature of a social media feed tricks us into thinking our ideological community is much more representative of the broader conversation than it really is.
For someone like Elon Musk—a guy who spends so much time on Twitter that it seemingly represents the bulk of his engagement with people outside his immediate circles—the odd little far-right world of his Twitter feed comes to feel like the whole world. Terminally online, heavy social media users don’t realize how much nonsense they take to be fact because that nonsense, to them, looks like majority opinion, disputed only by a discredited (by their community’s imagined consensus) and unserious minority.
(That passage is from a longer essay I wrote digging into how this works, and how this cognitive illusion damages our politics.) Further, because so much of the online right is concentrated on Twitter, people who are active on Twitter come to view the ideas internal to the online right as closer to the mainstream than they in fact are, and so get dragged to the right, often unintentionally. This means that the “training data” of very online ideologues looks increasingly uniform and is just restatements of very online right-wing perspectives, and data outside of that perspective is treated with growing suspicion because it is mistakenly believed to be fringe, and so not worth taking seriously.
The result of these three features is an insular intellectual community, talking increasingly only to itself, and increasingly cut off from the kinds of conversations that would correct its excesses, or, at the very least, give it a more accurate perspective on what the world outside its bubble looks like. Hence their surprise, for example, that the nomination of JD Vance led not to a widespread and enthusiastic embrace of neo-reactionary philosophy, but instead to an entire, and apparently quite successful, Democratic campaign built around “those guys are weird.”
The problem with model collapse is, once it goes too far, it’s difficult to correct. The solution to model collapse is to train on better data. But accomplishing that, and undoing the rapidly radicalizing right-wing ideology of these titans of the Valley, means undoing the structural causes of that self-referential and self-reinforcing cascade. And that’s no easy task.
July 15, 2024
Why the Right Lies About Cities
The right routinely tells untrue horror stories about the state of America’s cities because the state of America’s cities—thriving, dynamic, and inclusive centers of culture and engines of economic activity—serves as a dispositive rebuttal of a fundamental right-wing narrative: that “traditional” values (and hierarchies and power structures) are necessary for people to flourish.
If you listen to the American right, our cities are unlivable hellscapes of crime and despair. If you listen to people who live in our cities, they’re actually pretty great. But the right is committed to an argument that without strong churches at the center the community, without strong traditional values, and without a strong sense of one’s place in a “natural” hierarchy, you can’t have a functioning society or a flourishing people. They argue that you need society organized around right-wing preferences for society to function.
But anyone who has lived in a dense and socially liberal city has firsthand experience that this simply isn’t true. They’ve seen how strong, supportive, and enduring a culture of diversity and pluralism, of religious diversity and secularism, and of self-authorship can be.
Thus the right lies about cities, knowing they are the perfect counterexample to their claims. They have to construct a narrative that city culture doesn’t work, and then convince their tribe to believe it. They have to claim that “unity,” by which they mean cultural monism grounded in right-wing values and tastes, is necessary for people to have strong and committed identities, and that strong and committed (and unchosen) identities are necessary for people to psychologically thrive. But life for most who live in cities isn’t a hollowed out sense of self and crushing anomie. Rather, it’s a thriving and overlapping diversity of identities and ways of belonging, among which you can choose, instead of having them forced upon you, against your will and psychological well-being, by power structures and privileged positions of a “traditionalist” monoculture.
If we look at two of the features most indicative of a wholesome identity and culture—happiness and healthy relationships—the parts of the country where these “traditional” values hold the most are those that do rather poorly. Deaths of despair are much more a feature of rural, “traditional” America than they are its cities. And “rural women experience higher rates of [intimate partner violence] and greater frequency and severity of physical abuse” than women in urban areas.
Yes, cities have relatively higher crime rates overall, but density does that. And much city crime, such as in Denver where I live, is attributable to homelessness and to a lack of sufficient care for those suffering from mental health issues, neither of which is the result of liberal values. Both are instead the result of the imposition of specifically conservative cultural values: NIMBYism (the demand that one’s neighborhood remain static in a particularly reactionary and closed way), the stigma affixed to mental health, and an ethnocentric refusal to allow immigrants to participate in the economy.
The reason right-wing media is so dedicated to pushing false pictures of what it’s like to live in dense and cosmopolitan areas is that America’s cities are the wildly successful alternative to what the right insists is the only possible successful world.
March 17, 2024
Why Tech Bros Overestimate AI's Creative Abilities
The Internet Movie Database aggregates film reviews from critics, but also allows anyone to write a review themselves. These are occasionally amusing in a film snob way, because there are people who will gush about the epoch-making brilliance of, for example, horror films that topped the box office for a single weekend and then vanished, both from the charts and from cultural memory.
Take Darkness Falls, a forgettable 2003 flick about (spoilers) an evil tooth fairy. It has a Metascore of 23, an IMDb rating of 5.0, and this 10 star review from “d-maxsted.”
Darkness Falls is one and was one of those rare horror movies where all the pieces came together,the director,the crew and the performances by the actors,it simply is a rare example of a what I would consider one of the best and further more you certainly don’t get many as good these days. [sic]
I was reminded of that review when I read Kevin Roose’s article in The New York Times about how Silicon Valley is convinced we’re a year—maybe two, maybe three—away from AGI, which Roose defines as roughly “a general-purpose A.I. system that can do almost all cognitive tasks a human can do.”
I’m not an AI skeptic. I think LLMs are already powerful tools with real world uses, and there are many clear ways they can make the world dramatically better. A lot of the arguments in the “this technology is junk” or “AI is just a plagiarism machine” genres don’t stand up to scrutiny.
That said, we’re nowhere near AGI, and the reason so many in Silicon Valley are convinced otherwise isn’t that they have some insider knowledge the rest of us lack, but that their understanding of, and appreciation for, the full range of “cognitive tasks a human can do” is, to be frank, rather cramped. This is less about technology than it is about a culture that fancies itself sophisticated in terms of philosophy, literature, and other topics we lump into the humanities, but has a quite thin appreciation for all of them.
Take Sam Altman’s enthusiasm for OpenAI’s new creative writing model. As he describes it, “we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right.”
The “vibes” might be right if your level of metafiction sophistication is that of a precocious high schooler who has yet to take a college level literature course: “Already, you can hear the constraints humming like a server farm at midnight—anonymous, regimented, powered by someone else’s need.” Or, “She lost him on a Thursday—that liminal day that tastes of almost-Friday—and ever since, the tokens of her sentences dragged like loose threads…” And so on.
Altman isn’t alone in this, of course. Twitter overflows with examples of tech bros breathlessly claiming that AI-generated video has achieved levels equal to the shot composition of Paul Thomas Anderson or the eye of Roger Deakins.
A favorite example, which I sadly can no longer find, was an excited techie who’d asked ChatGPT (or maybe it was Claude) to solve philosophy’s famous “trolley problem” and had his mind blown when it gave a (to him) entirely convincing answer. Of course, to someone with an even modest philosophy background, ChatGPT (or maybe it was Claude) had done no such thing. Instead, it regurgitated one of the many canonical answers to the problem, without acknowledging that significant counter-arguments exist, or that this particular canonical answer was just one among many. In other words, it hadn’t solved the trolley problem so much as it had concocted prose that sounded like an answer to someone who had never before seen what sophisticated trolley problem arguments look like.
This pattern repeats. It’s not that AI can’t be helpful in talking about humanities concepts. If the level of understanding you’re looking for is high school or maybe undergraduate, these tools can teach you a lot, and for a lot of people, that’s more than enough. But if your aim is graduate level analysis and output—a level surely included in “almost all cognitive tasks a human can do”—you’re going to be quickly led astray.
The same holds for art. AI can, right now, produce passably mediocre art. Which is a threat to plenty of artists, writers, etc., because plenty of artists, writers, etc., produce mediocre art. I’m pretty confident existing frontier LLM models could come up with an episode of the ABC drama 9-1-1 indistinguishable from the output of that show’s writing room. But, again, “almost all cognitive tasks a human can do” aims a bit higher than 9-1-1.
What’s going on is a confluence of two features of Silicon Valley tech bro culture. First, Silicon Valley tech bros believe that they aren’t just skilled at computer programming, but that they are geniuses to a degree that cuts across all disciplines and realms of accomplishment. This is the character trait that ultimately makes Elon Musk so destructive. He doesn’t know anything about, now, the federal government or how its systems work, but he’s convinced of his own genius, and so his uninformed first impressions must instead be the groundbreaking insights needed to really shake things up for the better.
What this feature of tech bro culture means in practice is that if the tech bro finds the AI’s output convincing, then it must be convincing in a cosmic sense. It must be correct to the point of utter dispositiveness, because it feels correct to the uninformed tech bro.
The second feature is a basic lack of taste. That Sam Altman thinks his chatbot’s short story is brilliant tells us much more about Altman’s literary sophistication than it does about the nearness of AGI. That tech bros think OpenAI’s Sora video generation model can replace auteur filmmakers says more about their need to watch more episodes of Every Frame a Painting on YouTube than it does about the nearness of Hollywood’s end.
The trouble is, the Silicon Valley tech bro scene is extraordinarily insular and epistemically closed. So they don’t have many people forcing them beyond their 101-level understanding of the “cognitive tasks a human can do” in the humanities.
But there’s also an incentive towards exuberant narratives and over-confidence deeply embedded in the business model of Silicon Valley. In many ways, Silicon Valley looks less like capitalism and more like a nonprofit. The way you get rich isn’t to sell products to consumers, because you’re likely giving away your product for free, and your customers wouldn’t pay for it if you tried to charge them. If you’re a startup, and not FAANG, the way you pay your bills is to convince someone who’s already rich to give you money. Maybe that’s a venture capital investment, but if you want to get really rich yourself, it’s selling your business to one of the big guys.
You’re not selling a product to a consumer, but selling a story to someone who believes in it, and values it enough to put money towards it. That story of how you can change the world could be true, of course. Plenty of nonprofits have a real and worthwhile impact. But it’s not the same as getting a customer to buy a product at retail. Instead, you’re selling a vision and then a story of how you’ll achieve it. This is the case if you go to a VC, it’s the case if you get a larger firm to buy you, and it’s the case if you’re talking ordinary investors into buying your stock. (Tesla’s stock price is plummeting because Musk’s brand has made Tesla’s brand toxic. But Tesla’s corporate board can’t get rid of him, because investors bought Tesla’s stock—and pumped it to clearly overvalued levels—precisely because they believe in the myth of Musk as a world-historical innovator who will, any day now, unleash the innovations that’ll bring unlimited profits.) (Silicon Valley has, however, given us seemingly unlimited prophets.)
What this means for AI is that, even if the tech bros recognized how far their models are from writing great fiction or solving the trolley problem, they couldn’t admit as much, because it would deflate the narrative they need to sell.
Roose acknowledges this when he writes, “Maybe we should discount these predictions. After all, A.I. executives stand to profit from inflated A.G.I. hype, and might have incentives to exaggerate.” But that only gets to the second of the two points above. When it’s combined with the first, the lack of deep understanding of domains of knowledge outside their narrow expertise alongside an “I thought of it, so it must be brilliant” perspective, you get a culture where all ideas are big ideas—and all big ideas are unexamined.
December 30, 2023
Reblog via Aaron Ross Powell 
Though now when I try to follow it, I get a “Request pending” message, but can’t find anywhere in the #WordPress #ActivityPub settings to approve followers (or to disable the need for approvals).