Against modesty, and for the Fischer set

Over at Slate Star Codex, I learned that Eliezer Yudkowsky is writing a book on, as Scott puts it, “low-hanging fruit vs. the argument from humility”. He’s examining the question of when we are, or can be, justified in believing we have spotted something important that the experts have missed.


I read Eliezer’s first chapter, and I read two responses to it, and I was gobsmacked. Not so much by Eliezer’s take; I think his microeconomic analysis looks pretty promising, though incomplete. But the first response, by one Thrasymachus, felt to me like dangerous nonsense: “This piece defends a strong form of epistemic modesty: that, in most cases, one should pay scarcely any attention to what you find the most persuasive view on an issue, hewing instead to an idealized consensus of experts.”


Motherfucker. If that’s what we think is right conduct, how in hell are we (in the most general sense, our civilization and species) going to unlearn our most sophisticated and dangerous mistakes, the ones that damage us all the more because they carry the weight of expert consensus?


Somebody has to be “immodest”, and to believe they’re justified in immodesty. It’s necessary. But Eliezer only provides very weak guidance towards that justification; he says, in effect, that you’d better be modest when there are large rewards for someone else to have spotted the obvious before you. He implies that immodesty might be a better stance when incentives are weak.


I believe I have something more positive to contribute. I’m going to tell some stories about when I have spotted the obvious that the experts have missed. Then I’m going to point out a commonality in these occurrences that suggests an exploitable pattern – in effect, a method for successful immodesty.



Our first exhibit is Eric and the Quantum Experts: A Cautionary Tale wherein I explain how at one point in the 1970s I spotted something simple and obviously wrong about the premises of the Schrodinger’s Box thought experiment. For years I tried to get physicists to explain to me why the hole I thought I was seeing wasn’t there. None of them could or would. I gave up in frustration, only to learn a quarter-century later of “decoherence theory”, which essentially said my skepticism had been right all along.


Our second exhibit is Eminent Domains: The First Time I Changed History, in which I said “What happens when people move?”, blew up the Network Working Group’s original static-geographical DNS naming plan, and inadvertently created today’s domain-name anarchy/gold-rush conditions.


Our third exhibit is the big insight that I’m best known for: while generation of software does not parallelize well, auditing it for bugs does. Thus, while we can’t hope to swarm-attack design, we can swarm-attack debugging, and that works pretty well. Given a sufficiently large number of eyeballs, all bugs are shallow.


I’m going to stop here, because these are sufficient to illustrate the common pattern I want to talk about and the exploitation strategy for that pattern. But I could give other examples. This kind of thing happens to me a lot. And, damn you, Thrasymachus, where would we be if I’d been “modest”? If those second and third times I had bowed to the “idealized consensus of experts” – failed in courage the way I did that first time…would the world be better for it, or worse? I think the answer is pretty clear.


Now to the common pattern. In all three cases, I saw into a blind spot in conventional thinking. The experts around me had an incorrect premise that was limiting them; what I did was simply notice that the premise was there, and that it could be negated. Once I’d done that, the consequences – even rather large ones – were easy for me to reason out and use generatively.


This is perhaps not obvious in the third case. The incorrect premise – the blind spot – in that case was that software projects necessarily have to pay the full O(n**2) Brooks’s Law complexity cost for n programmers, because that is the number of links in their communications graph (and thus of points of potential process friction). What I noticed was that this was just a never-questioned assumption that did not correspond to the observed behavior of open-source projects – the graph could be starlike, with a much lower cost function!
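
To make the arithmetic concrete, here is a minimal illustrative sketch in Python (the helper names are just for exposition, not anything from the projects involved): a full mesh of n programmers has n*(n-1)/2 communication links, while a starlike graph around a single maintainer has only n-1.

    def mesh_links(n):
        # Full mesh: every programmer talks to every other one.
        # n*(n-1)/2 links -- the O(n**2) cost Brooks's Law charges for n programmers.
        return n * (n - 1) // 2

    def star_links(n):
        # Starlike graph: everyone talks to one hub (the maintainer).
        # Only n-1 links, so friction grows linearly rather than quadratically.
        return n - 1

    for n in (10, 100, 1000):
        print(n, "programmers:", "mesh =", mesh_links(n), "star =", star_links(n))

At a thousand programmers that is 499,500 links versus 999 – which is the kind of gap that lets a swarm do what a cathedral crew cannot.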


Seeing into a blind spot is interesting because it is a different and much simpler task than what people think you have to do to compete with expert theorists. You don’t have to build a generative theory as complex as theirs. You don’t have to know as much as they do. All you have to do is notice a thing they “know” that ain’t necessarily so, like “all computers will be stationary throughout their lifetimes”.


Then, you have to have the mental and moral courage to follow negating that premise to conclusion – which you will not do if you take Thrasymachus’s transcendently shitty advice about deferring to an idealized consensus of experts. No; if that’s your attitude, you’ll self-censor and strangle your creativity in its cradle.


Seeing into blind spots has less to do with reasoning in the normal sense than it does with a certain mental stance, a kind of flexibility, an openness to the way things actually are that resembles what you’re supposed to do when you sit zazen.


I do know some tactics and strategies that I think are helpful for this. An important one is contrarian mental habits. You have to reflexively question premises as often as possible – that’s like panning for blind-spot gold. The more widely held and ingrained and expert-approved the premises are, the more important it is that you negate them and see what happens.


This is a major reason that one of my early blog entries described Kill the Buddha as a constant exercise.


There is something more specific you can do, as well. I call it “looking for the Fischer set”, after an idea in chess theory due to the grandmaster Bobby Fischer. It’s a neat way of turning the expertise of others to your advantage.


Fischer reported that he had a meta-strategy for beating grandmaster opponents. He would study them, mentally model the lines of play they favored. Then he would accept making technically suboptimal moves in order to take the game far out of those lines of play.


In any given chess position, the “Fischer set” is the moves that are short-term pessimal but long-term optimal because you take the opponent outside the game he knows, partly or wholly neutralizing his expertise.


If you have a tough problem, and it’s just you against the world’s experts, find their Fischer set. Model the kinds of analytical moves that will be natural to them, and then stay the hell away from those lines of play. Because if they worked, your problem would be solved already.


I did this. When, in 1994-1996, I needed to form a generative theory of how the Linux development swarm was getting away with breaking the negative scaling laws of large-scale software engineering as they were then understood, the first filter I applied was to discard any guess that I judged would occur naturally to the experts of the day.


That meant: away with any guesses directly based on changes in technology, or the falling cost of computing, or particular languages or tools or operating systems. I was looking for the Fischer set, for the sheaf of possible theories that a computer scientist thinking about computer-sciencey things would overlook. And I found it.


Notice that this kind of move requires anti-modesty. Far from believing in your own inadequacy, you have to believe in the inadequacy of experts. You have to seek it out and exploit it by modeling it.


Returning to the original question: when can you feel confident that you’re ahead of the experts? There may be other answers, but mine is this: when you have identified a false premise that they don’t know they rely on.


Notice that both parts of this are important. If they know they rely on a particular premise, and an argument for the premise is part of the standard discourse, then it is much more likely that you are wrong, the premise is correct, and there’s no there there.


But when you have both pieces – an unexamined premise that you can show is wrong? Well…then, “modesty” is the mind-killer. It’s a crime against the future.
