Eric S. Raymond's Blog, page 9
September 27, 2018
Solving shtoopid problems
There is a kind of programming trap I occasionally fall into that is so damn irritating that it needs a name.
The task is easy to specify and apparently easy to write tests for. The code can be instrumented so that you can see exactly what is going on during every run. You think you have a complete grasp on the theory. It’s the kind of thing you think you’re normally good at, and ought to be able to polish off in 20 LOC and 45 minutes.
And yet, success eludes you for an insanely long time. Edge cases spring up out of nowhere to mug you. Every fix you try drags you further off into the weeds. You stare at dumps from the instrumentation until you’re dizzy and numb, and no enlightenment occurs. Even as you are bashing your head against a wall of incomprehension, consciousness grows that when you find the solution, it will be damningly simple and you will feel utterly moronic, like you should have gotten there days ago.
Welcome to programmer hell. This is your shtoopid problem.
Yes, I have a real example. I just spent the better part of three days debugging code to close block scopes in Python, for a little tool that translates Python into a different language with explicit end brackets. Three days! Of feeling shtoooopid.
It left me wondering what it is about some apparently simple conundrums like this that repels solution. I can say that there are certain recurring patterns in shtoopid problems – when you’re experienced enough, you will notice them partway in and get that sinking oh-no-not-again feeling.
A big one is any algorithm in which off-by-one and fencepost errors are easy to make and hard to spot. The defects that go with this are a reversed sign on an increment, or trying to step through an array or list or code loop by bumping the wrong counter.
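To make the fencepost flavor concrete, here’s a toy Python sketch of my own (not code from the actual tool): n fence posts have n - 1 gaps between them, and the natural-looking loop walks one past the end.

```python
def gap_total_buggy(posts):
    # Off by one: the loop visits every post, so the last pass
    # reads posts[i + 1] one element past the end of the list.
    total = 0
    for i in range(len(posts)):
        total += posts[i + 1] - posts[i]   # IndexError on the final iteration
    return total

def gap_total(posts):
    # Correct: n posts have n - 1 gaps, so stop one short.
    return sum(posts[i + 1] - posts[i] for i in range(len(posts) - 1))
```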
Another recurring shtoopidness trap lives near data sentinels – end-of-data markers that mysteriously fail to mark. And a third is any situation where you are mutating (especially deleting) parts of a serial data structure while iterating through it. (Some languages have mutability restrictions in loops to try to prevent this one. The Dread God Finagle can easily contrive a way for you to screw yourself despite them.)
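The mutate-while-iterating variant is just as easy to demonstrate. Another toy sketch of my own, not the actual scope-closing code: deleting from a Python list while looping over it silently skips the element after each one removed.

```python
items = [1, 2, 2, 3, 2, 4]

# Buggy: each deletion shifts the remaining elements left, so the
# element that slides into the deleted slot is never examined.
for i, x in enumerate(items):
    if x == 2:
        del items[i]
print(items)   # [1, 2, 3, 4] -- one 2 survives

# Safer: build a new list instead of mutating the one you are walking.
items = [1, 2, 2, 3, 2, 4]
items = [x for x in items if x != 2]
print(items)   # [1, 3, 4]
```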
Copy-paste code that is just slightly wrong in a way that is difficult to spot is a common source of shtoopidity. So are undetected namespace collisions. And variable shadowing. And reversed tests in conditional guards.
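Shadowing in particular hides in plain sight. A hypothetical Python specimen, invented purely for illustration:

```python
def scale(values, factor):
    # The comprehension's loop variable reuses the name `factor`,
    # shadowing the parameter: every value is multiplied by itself
    # and the caller's factor is never consulted.
    return [factor * factor for factor in values]

print(scale([1, 2, 3], 10))   # expected [10, 20, 30], got [1, 4, 9]
```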
If you ever find yourself staring at your instrumentation results and thinking “It…can’t…possibly…be…doing…that”, welcome to shtoopidland. Here’s your mallet, have fun pounding your own head. (Cue cartoon sound effects.)
Solutions to shtoopidity traps are not conceptually hard, but they’re slippery and evasive. They’re difficult to climb out of even when you know you’re in one. You’re not defeated by what you don’t know so much as by what you think you do know. It follows that the most effective way to get out of your trap is…
…instrument everything. I mean EVERYTHING, especially the places where you think you are sure what is going on. Your assumptions are your enemy; printf-equivalents are your friend. If you track every state change in your code down to a sufficient level of detail, you will eventually have that forehead-slapping why-didn’t-I-see-this-sooner moment that is the terminal characteristic of a shtoopid problem.
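In practice that can be as blunt an instrument as a decorator that logs every call and return to stderr. A minimal sketch, in Python since that’s what I was debugging; the names here (traced, close_scope) are placeholders of mine, not the actual tool’s functions.

```python
import functools
import sys

def traced(fn):
    """Log every call to fn, with its arguments and return value, to stderr."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"CALL {fn.__name__} args={args!r} kwargs={kwargs!r}", file=sys.stderr)
        result = fn(*args, **kwargs)
        print(f"RET  {fn.__name__} -> {result!r}", file=sys.stderr)
        return result
    return wrapper

@traced
def close_scope(depth):
    # Stand-in for the code you were *sure* was correct.
    return depth - 1
```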
September 23, 2018
On holy wars, and a plea for peace
I just posted the following to the Linux kernel mailing list.
Most of you know that I have spent more than a quarter century analyzing the folkways of the hacker culture as a historian, ethnographer, and game theorist. That analysis has had large consequences, including a degree of business and mainstream acceptance of the open source way that was difficult to even imagine when I first presented “The Cathedral and the Bazaar” back in 1997.
I’m writing now, from all of that experience and with all that perspective, about the recent flap over the new CoC and the attempt to organize a mass withdrawal of creator permissions from the kernel.
I’m going to try to keep my personal feelings about this dispute off the table, not because I don’t have any but because I think I serve us all better by speaking as neutrally as I can.
First, let me confirm that this threat has teeth. I researched the relevant law when I was founding the Open Source Initiative. In the U.S. there is case law confirming that reputational losses relating to conversion of the rights of a contributor to a GPLed project are judicable in law. I do not know the case law outside the U.S., but in countries observing the Berne Convention without the U.S.’s opt-out of the “moral rights” clause, that clause probably gives the objectors an even stronger case.
I urge that we all step back from the edge of this cliff, and I want to suggest a basis of principle on which a settlement can be negotiated.
Before I go further, let me say that I unequivocally support Linus’s decision to step aside and work on cleaning up his part of the process. If for no other reason than that the man has earned a rest.
But this leaves us with a governance crisis on top of a conflict of principles. That is a difficult combination. Fortunately, there is lots of precedent about how to solve such problems in human history. We can look back on both tragic failures and epic successes and take lessons from them that apply here.
To explain those lessons, I’m going to invite everybody to think like a game theorist for a bit.
Every group of humans trying to sustain cooperation develops an ethos, a set of norms. It may be written down. More usually it is a web of agreements that one has to learn by observing the behavior of others. The norms may not even be conscious; there’s a famous result from experimental psychology that young children can play cooperative games without being able to articulate what their rules are…
Every group of cooperating humans has a telos, a mutually understood purpose towards which they are working (or playing). Again, this purpose may be unwritten and is not necessarily even conscious. But one thing is always true: the ethos derives from the telos, not the other way around. The goal precedes the instrument.
It is normal for the group ethos to evolve. It will get pulled in one direction or another as the goals of individuals and coalitions inside the group shift. In a well-functioning group the ethos tends to evolve to reward behaviors that achieve the telos more efficiently, and punish behaviors that retard progress towards it.
It is not normal for the group’s telos – which holds the whole cooperation together and underpins the ethos – to change in a significant way. Attempts to change the telos tend to be profoundly disruptive to the group, often terminally so.
Now I want you to imagine that the group can adopt any of a set of ethoi ranked by normativeness – how much behavior they require and prohibit. If the normativeness slider is set low, the group as a whole will tolerate behavior that some people in it will consider negative and offensive. If the normativeness slider is set high, the effects are less visible; contributors who chafe under restriction will defect (usually quietly) and potential contributors will be deterred from joining.
If the normativeness slider starts low and is pushed high, the consequences are much more visible; you can get internal revolt against the change from people who consider the ethos to no longer serve their interests. This is especially likely if, bundled with a change in rules of procedure, there seems to be an attempt to change the telos of the group.
What can we say about where to set the slider? In general, the most successful – most inclusive – cooperations have a minimal ethos. That is, they are just as normative as they must be to achieve the telos, *and no more so*. It’s easy to see why this is. Pushing the slider too high risks internal factional strife over value conflicts. This is worse than having it set too low, where consensus is easier to maintain but you get too little control of conflict between *individuals*.
None of this is breaking news. We cooperate best when we live and let live, respecting that others may make different choices and invoking the group against bad behavior only when it disrupts cooperative success. Inclusiveness demands tolerance.
Strict ethoi are typically functional glue only for small groups at the margins of society; minority religious groups are the best-studied case. The larger and more varied your group is, the more penalty there is for trying to be too normative.
What we have now is a situation in which a subgroup within the Linux kernel’s subculture threatens destructive revolt, not only because they think the slider has been pushed too high in a normative direction but because they think the CoC is an attempt to change the group’s telos.
The first important thing to get is that this revolt is not really about any of the surface issues the CoC was written to address. It would be maximally unhelpful to accuse the anti-CoC people of being pro-sexism, or anti-minority, or whatever. Doing that can only inflame their sense that the group telos is being hijacked. They have made it clear: they signed on to participate in a meritocracy with reputation rewards, and they think that is being taken away from them.
One way to process this complaint is to assert that the CoC’s new concerns are so important that the anti-CoC faction can be and should be fought to the point where they withdraw or surrender. The trouble with this way of responding is that it *is* in fact a hijacking of the group’s telos – an assertion that we ought to have new terminal values replacing old ones that the objectors think they’re defending.
So a really major question here is: what is the telos of this subculture? Does the new CoC express it? Have the objectors expressed it?
The question *not* to get hung up on is what any individual’s choice in this matter says about their attitude towards, say, historically underrepresented minorities. It is perfectly consistent to be pro-tolerance and pro-inclusion while believing *this* subculture ought to be all about producing good code without regard to who is offended by the process. Not every kind of good work has to be done everywhere. Nobody demands that social-justice causes demonstrate their ability to write C.
That last paragraph may sound like I have strayed from neutrality into making a value claim, but not really. It’s just another way of saying that different groups have different teloi, and different ethoi proceeding from them. Generally speaking (that is, unless it commits actual crimes) you can only judge a group by how it fulfills its own telos, not those of others.
So we come back to two questions:
1. What is our telos?
2. Given our telos, do we have the most inclusive (least normative) ethos possible to achieve it?
When you have answers to those questions, you will know what we need to do about the CoC and the “killswitch” revolt.
Email archive thread at: https://lkml.org/lkml/2018/9/23/212
September 13, 2018
Hacker culture and the politics of process defense
In my last two blog posts, on the attempted hijack of the Lerna license and speech suppression in the Python documentation, I have both urged the hacker culture to stay out of political issues and urged what some people will interpret as a “political” stance with regard to political correctness and “diversity”-driven speech demands.
The expected “gotcha!” comments that “ESR is saying hacker projects should stay clear of politics while arguing politics” have duly followed. While the way this sort of objection is usually posed barely rises above the level of a stupid rhetorical trick, there is an actual issue of principle here that deserves exploring.
The wrong way to do it would be to argue over the scope of the term “politics”. I’m going to take a different tack, starting with the observation that the hacker culture is a social machine for producing outcomes that its participants desire. Good code; working infrastructure; successful and rewarding collaborations. Artistic expression of the special kind that is master-level engineering.
Thus, hacker culture has a telos, a purpose – more precisely, a collection of linked and mingled purposes that can be considered as a unit. Achievement of those purposes depends on a rich array of processes and customs. If those processes are disrupted, the culture will cease to be able to achieve its purposes.
That would be a bad thing, and not just for hackers. Our civilization has become dependent on the infrastructure that the hacker culture invented and maintains. Damage to our culture, failure to fulfill our telos – these are no longer parochial issues. We hold up the sky and have a corresponding duty to our civilization, which is to defend our processes so we can keep doing our job.
Therefore, I propose to replace the question “What kind of politics should the hacker culture be engaged in?” with two that are sharper and more responsive to our duty. These questions are:
1. What exertions of power and influence do we need to resist in order to protect our processes, prevent our social machine from breaking down, and achieve our telos?
2. Should we, as a culture or as projects within that culture, engage in “politics” (however that is defined) beyond the issues selected by the previous question?
To arrive at a generative answer to that question, I’m going to start with two hot-button political issues that I think are at opposite ends of the threat metric implied by the first question. Those are: internet censorship, and the nature and scope of immigration controls.
Hacker culture has no more critical dependency than the free flow of information over the Internet. Only this allows us to sustain large-scale cooperation among geographically scattered individuals. We have a correspondingly strong duty to protect and extend that liberty – and not just a duty to ourselves, but to the civilization that increasingly relies on us.
On the other hand, no possible choice about immigration policy threatens our processes. It matters very little whether a hacker is sitting in New Jersey or Nigeria, and as the Internet build-out continues that geography becomes ever less important. Thus we have no duty here as members of the hacker culture; any individual position we might have is irrelevant to the hacker telos.
Not all issues are so clear cut. But before pursuing that problem further, let me address the second question. Why should we avoid political entanglements that are not clearly connected to our process and our telos?
I think I answered that one pretty clearly in my post on the Lerna flap. Political fights we don’t need to be in are internally divisive – they risk fractionating us into warring tribes, fatally damaging our ability to cooperate. That is directly against the telos. I have also written previously “You shall judge by the code alone.” That is the individual compact of mutual tolerance that is a precondition for not fractionating over politics.
I should add now that even when we face outwards rather than inwards towards each other, every fight we don’t need to be in burns up energy and social capital we need to preserve for the fights that are important.
There’s a third issue here: we benefit, in facing outwards and performing our function in civilization, from not being seen to have axes to grind. To keep the infrastructure running we benefit from having every political faction in society see us as friendly neutrals. I’d go so far as to say being perceived as friendly neutrals is tied to our telos.
OK, somebody’s going to ask how that is compatible with dealing with…say…a political tendency that is avowedly pro-censorship, like Communism? Do we have to exert ourselves to be “friendly neutrals” to Communists?
No. Because we’re even more critically dependent on free information flow than we are on being seen by outsiders as politically impartial. That kind of liberty is closer to our telos than having good PR with every totalitarian in the world.
The question that flows from the “Communist” example is not trivial, but it’s at least one that can be used as a guide to right action. The philosophical version is this: given the hacker telos as a set of related terminal values, what is the smallest set of non-terminal values we must defend? Correspondingly, in the public sphere, what political positions must we have?
Here are some obvious ones:
The right of individuals to speak as they wish to any who wish to hear them without censorship or fear of reprisals.
The right of individuals to associate and cooperate on shared projects.
The right of individuals to own and use the tools required for their creative work.
The right of individuals to form cooperative groups that themselves have speech and ownership/use rights required for their work.
Opposition to coercive control of our communications channels by anyone, whether that ‘anyone’ is a government or something else.
It actually takes a pretty stupid person to not see that hackers must defend these positions, which is why I’m not worried that they hurt our friendly-neutrals position much.
Now I’m going to pick on something much more contentious, because I think it illustrates how we should reason as hackers when the connection to our cultural telos is much less clear, and provides an example of what kind of individual political self-restraint our duty as hackers requires.
I am personally widely known to be a strong advocate of the individual right to own and bear arms; in U.S. terms, a Second Amendment absolutist. Yet, I have never argued that other hackers have a duty to embrace this cause.
It’s not because I couldn’t do so. Firearms rights are not like immigration controls, with no connection to the hacker telos. They are an individual-autonomy issue somewhat removed from the hacker telos but connected to it. To see this, ask: if society can ban civilian firearms on a consequential-harm argument, can it ban other things for the same reason? Like, say, cryptography? Or 3D printers? Or general-purpose computers?
But no. I’ve never pushed this argument in my role as hacker thought leader because I judge doing so would likely inflict net harm. The division it would sow within our community would likely do more damage than winning that argument could contribute to the defense of our telos. Thus, I refrain. I see my duty and I hold my fire.
On the other hand, consider Cody Wilson and Defense Distributed. Must hackers defend his right to distribute 3-D-printer CAD files for personal firearms? Here I think the answer is unambiguously “yes”. DD is squarely within the hacker telos in maintaining that individuals should be able to share, use, and modify these files without hindrance. The jump from censoring them to censoring putatively “dangerous” software is no jump at all. I’ll put my “ESR” mojo behind that position all day long and have no doubt whatsoever I am doing my duty to the hacker telos.
And yet, as to firearms rights in general, I continue in self-restraint for the good of our community – that is, I speak a strong position as an individual but do not claim that hacker terminal values entail it. I’m going to finish this essay by arguing for a similar sort of restraint.
Most of the political arguments roiling our waters these days have something to do with “diversity”. I’m going to stay honest here by admitting that I think far too many of the people waving this banner are totalitarian wannabes for whom it is merely pretext, with the actual goal of imposing a degree of speech and thought control that would do George Orwell’s Inner Party proud. Those people won’t be dissuaded from disrupting the hacker social machine no matter what I say, because nothing actually matters to them but the power to punish.
Some of you, though…some of you genuinely believe that the hacker culture is in desperate need of “diversity” reform. That “You shall judge by the code alone” is not enough. And to you I have this to say:
Refrain. You have failed to take account of the vast harm you are risking – that of destroying the functional neutrality and mutual tolerance that keeps our social machine running and our telos fulfilled. Perhaps “You shall judge by the code alone” is an imperfect norm, but it’s been at least pretty good at keeping us from tearing each other’s throats out for the last forty years. Identity politics, on the other hand, always reduces to a game of “which tribe is strongest and most ruthless” and thus inevitably ends in blood and tears.
As a hacker, I have a duty – to other hackers, and to the civilization we serve – of political self-restraint, to keep the hacker culture functioning. So do you.
September 12, 2018
Slaves to speech suppression are masters of nothing
Comes the news that the Python project has merged a request to partially eliminate the terms “master” and “slave” from its documentation on “diversity” grounds. Sensibly, Guido van Rossum at least refused to sever the project from uses of those terms in documentation of the underlying Unix APIs.
I wish Guido had gone further and correctly shitcanned the change request as political bullshit up with which he will not put. I will certainly do that if a similar issue is ever raised in one of my projects.
The problem here is not with the object-level issue of whether the terms “master” and “slave” might be offensive to some people. It’s with the meta-level of all such demands. Which the great comedian George Carlin once summed up neatly as follows: “Political correctness is fascism pretending to be manners.”
That is, the demand for suppression of “politically” offensive terms is never entirely or usually even mostly about reducing imputed harms. That is invariably a pretense covering a desire to make speech and thought malleable to political control. Which is why the first and every subsequent attempt at this kind of entryism needs to be kicked in the teeth, hard.
Technically Carlin was actually not quite correct. Fascism has never become quite sophisticated enough at semantic manipulation to pose as manners. He should more properly have said “Political correctness is communism pretending to be manners”; George Orwell, of course, warned us of the dangers of Newspeak through his portrait of a future communism in 1984.
But Carlin leaned left, so he used the verbal cudgel of a leftist. Credit to him, anyway, for recognizing that the “manners” tactics of his fellow leftists are, at bottom, corrosive and totalitarian. The true goal is always meta: to get you to cede them the privilege of controlling your speech and thought.
Once you get pulled onto the PC train, it doesn’t stop with the mere suppression of individual words. The next stage is the demand that your language affirm politically-correct lies and absurdities in public. The most obvious example of this today is the attempted proliferation of gender pronouns. There are principled cases, grounded in human sexual biology, that two or three might be too few, but at the point where activists are circulating lists of 50 or more – most of which have no predicate that can be checked by an impartial observer – the demand has crossed into absurdity.
The purpose of such absurdities is never to convey truth and increase the precision of language, but rather to jam the categories and politics of some propagandist into your head – to control your mind. It is not accidental that terms like “inclusiveness” are vague and infinitely elastic; if they were not, they would not serve the actual purpose of making you feel guilty, wrong and malleable no matter how frantically you have deformed your speech and behavior to meet the propagandist’s standards of “manners”.
The manipulation depends on you never quite recovering your balance enough to recognize that your own autonomy – your ability to think and speak as you choose – is more important than the ever-escalating demands for “manners”. The first step to liberation is realizing that. The second step is resisting their attack even if you happen to agree that an individual term (like, say, “master” or “slave”) might be construable as offensive. The meta-level matters more than the object.
The third step is realizing that the propagandists for those demands mean to do you harm. They are selling “manners”, “diversity”, “inclusiveness”, but what they mean to do is break you into loving Big Brother – becoming the primary instrument of your own oppression, ever alert to conform to the next diktat of the Ministry of Truth as expressed by the language police.
As with individuals, so with the cultures they assemble into. These “manners” demands – like the attempt to hijack the Lerna license I condemned in my last blog post – are an attack on the autonomy and health of the hacker culture. All who cherish that culture should refuse them.
August 29, 2018
Non-discrimination is a core value of open source
Today I learned that something called the Lerna project has added a codicil to its MIT license denying the use of its software to a long list of organizations because it disagrees with a political choice those organizations have made.
Speaking as one of the original co-authors of the Open Source Definition, I state a fact. As amended, the Lerna license is no longer conformant with the OSD. It has specifically broken compliance with clause 5 (“No Discrimination Against Persons or Groups”).
Accordingly, Lerna has defected from the open-source community and should be shunned by anyone who values the health of that community. I will not contribute to their project, and will urge others not to, until and unless this change is rescinded.
We wrote Clause 5 into the OSD for a good reason. Exclusions and carve-outs like Lerna’s, if they became common, would create tremendous uncertainty about the ethics and even the legality of code re-use. Suppose I were to take a snippet from Lerna code and re-use it in a project that (possibly without my knowledge) was deployed by one of the proscribed organizations; what would my ethical and legal exposure be?
It gets worse. Suppose I write code that happened to be identical, or very similar to, portions of Lerna? Could anyone make a case that I was in violation of their license? It is definitely unsafe when a question like that turns on facts of knowledge and intent no one outside a putative violator’s skull can know for certain.
The Lerna project’s choice is, moreover, destructive of one of the deep norms that keeps the open-source community functional – keeping politics separated from our work. If we do not maintain that norm, we risk fractionating into a collection of squabbling tribes arguing particularisms and unable to sustain really large-scale cooperation.
I would consider such a disintegration not merely unpleasant but actually dangerous to civilization, which relies on us for an increasing portion of its critical infrastructure. Accordingly, we need to cooperate more, not less.
That, in turn, means that, even as we may hold strong individual opinions about issues like those motivating Lerna’s proscription list, we need to be more neutral and non-discriminatory in our collective behavior about such issues, not less.
August 22, 2018
Unix != open source
Yesterday a well-meaning hacker sent me a newly-recovered koan of Master Foo in which an angry antagonist berated Master Foo for promoting an ethic of open-source software at the expense of programmers’ livelihoods.
Alas, I knew at once that he had been misled by a forgery, or perhaps some dreadful chain of copying errors, at whatever venerable monastic library had been the site of his research. Not because the economics was wrong – Master Foo persuades the antagonist that his assumption is in error – but because the koan conflates two things that were not the same. Actually, at least three things that are not the same.
Eighteen years into the third millennium, long after the formative events of Master Foo’s time, many people fail to understand how complex and contingent the relationship between the Unix tradition and the open-source ethos actually was in the old days. Too readily we project today’s conditions backwards in a way that impedes understanding of history.
Here’s how it was…
There are at least three different things one can mean when one speaks of the practice of open source.
The first, and oldest, is an unreflective folk practice of code-sharing. At this stage people just…share code. They don’t worry about licenses because they think of the activity as one that only involves an informal peer network of consenting sharers; there’s no concept in anybody’s mind of having to defend code-sharing, or of any collision with third-party interests. Nor is there ideology about the practice, nor any name for it – there’s just custom and utility. I’ll call this “Level One” or “unreflective” open source.
Another stage is reached when people begin to reflect on open source as a practice and articulate the pragmatic advantages of it. At this point one begins to get folk theory about it – claims like “sharing code is good because it reduces wasteful duplication of effort” or “code sharing is good because other people can notice problems that the author doesn’t.” But these claims are not connected by any generative or prescriptive theory; other than technical conventions about how to pass around code, there aren’t any norms. We’re still at folk-practice here, but folk practice becoming conscious. I’ll call this “Level Two” or “emergent” open source – you have an ethos, but not yet an ethic.
A third stage is reached when prophets try to systematize a theory of open source. What’s characteristic of this stage as distinct from the others is the assertion of strong normative claims attached to an explicit theory of the consequences: “you should share code, and here is why”. Only at this point can one really speak of an open-source “ethic”. I’ll call this “Level Three” or “ideological” open source, because when you get here practice starts to change in response to the developing theory. There are manifestos, and the manifestos matter.
Historically there have been at least two competing theories of open source, one associated with Richard Stallman and the other with me. But for the purposes of this post that distinction is almost completely unimportant; only the time of arrival of Stallman’s theory (1985) and to a lesser extent mine (1998) actually matters much.
The koans of Master Foo address the Unix design tradition, which began around 1969 and reached a recognizably modern form by the early ’80s when it incorporated TCP/IP networking. Right away we can see a question here; the early formative period of Unix long predates public Level Three thinking about open source.
This is why I knew that koan had been forged or corrupted. The Level Three language in which someone could berate Master Foo for promoting an ethic of open source did not yet exist in the legendary age of the early Unix patriarchs. It is true that you can find some evidence of Level Two thinking as far back as Fernando Corbató’s reflections on Multics in 1963, and there are quotes in the early Unix papers out of Bell Labs that suggest it. But we don’t see actual normative claims – full Level Three – until the GNU manifesto arrives in 1985.
Until then there is no concept that code-sharing could be in conflict with programmers making a living, because nobody has proposed that it be done systematically as a replacement for closed source. Master Foo’s antagonist in that supposed koan is anachronistic for early Unix, a back-projection of later concerns. Master Foo would have understood the proposition that source code has a longer expected survival time than object blobs, but not the ethical claim.
Now, I’m not arguing that the development of the Unix tradition and the open-source ethos were completely disconnected. It is a historical fact that the Unix tradition incubated open source, and worth looking at why. I’ve written about some of this before, so the following is a reminder more than a full exposition.
You can’t have open source if you can’t port software between machines – and not just between machines of the same make, but across architectures. It was Unix that made this possibility normal by systematizing the idea that APIs and retargetable compilers could decouple source code from the machine.
The other direction of entailment doesn’t work, though. You can have Unix without open source – and until Linux most of us did, most of the time. Bootleg source tapes and things like the Lions book deepened our understanding but didn’t free us from dependence on closed-source kernels.
It seems highly unlikely that there will ever be another closed-source Unix implementation in the future; the coupling is pretty tight, now. But remember that it was not always thus.
August 3, 2018
How to get a reliable home router/WiFi box in 2018
My apprentice and A&D regular Ian Bruene had bad experiences with a cheap home router/WiFi box recently, and ranted about it on a channel where I and several other comparatively expert people hang out. He wanted to know how to get a replacement solid enough to leave with non-techie relatives.
The ensuing conversation was very productive, so I’m summarizing it here as a public-service announcement. I’ve put the year in the title because some of the information in it could go stale quickly. I will try to mark each element of the advice with an expected-lifetime estimate.
Even before seeing any of the comments on this post I’m going to say you should read them too. Some of my regulars are more expert than I am about this area.
First, a couple of rules that will not age:
For each candidate product, use a search engine to find out (a) what Wi-Fi chipset it is using and (b) whether the firmware is vendor-proprietary or a spin of OpenWRT.
1. If the chipset was made by Broadcom, stay the hell away. Their Wi-Fi and wired Ethernet chips are notoriously shitty and have been for decades – I remember trouble with them as far back as early Linux days in the 1990s.
2. If the firmware is not OpenWRT, assume that you will need to re-flash the device. The quality of vendor proprietary firmware is, in general, abysmal – not just worse than you imagine, but worse than you probably can imagine. Don’t rely on it unless you like the idea of being pwned by a Bulgarian botnet.
An increasing number of vendors do the right thing and ship OpenWRT from the factory. Read this excellent post by Jim Gettys for some candidates. Specific product recommendations in it may age, but a vendor that is not crappy will probably continue to be not-crappy.
Any current OpenWRT device will be fully de-bufferbloated. You can’t rely on this from closed-source vendor firmware.
If you want to go cheaper than the $150 or so you’ll pay for a device with preinstalled OpenWRT, you probably can’t do better than buying an N600 on eBay and flashing it with OpenWRT yourself.
I call out this particular platform because:
(a) It’s cheap and easy to get, and probably has all the features you want except Gigabit Ethernet in some older revisions; check for that, as the changeover seems to have been between the 3400 and 3800 revisions.
(b) I use one (a 3800) myself and can certify that it’s rock-solid stable; my last service interruption was 381 days ago and that was due to a power outage that outlasted my UPS’s dwell time.
(c) It’s based on a MIPS chipset that was Tier 1 for OpenWRT for a long time and got a lot of love from the core developers.
UPDATE: Note that the 3800 rev of the N600 is out of production but you can find it on eBay; the 3700v4 is also known good and actually has more flash capacity than the 3800.
In conclusion, my thanks and praise go out to Dave Taht and the OpenWRT team in general. They’ve fully fulfilled the promise of open source – best quality, best practices and best reliability.
July 10, 2018
The return of the servant problem
I think we all better hope we get germ-line genetic engineering and really effective nootropics real soon now. Because I think I have seen what the future looks like without these technologies, and it sucks.
A hundred years ago, 1918, marked the approximate end of the period when even middle-class families in the U.S. and Great Britain routinely had servants. During the inter-war years availability of domestic servants became an acute problem further and further up the SES scale, neatly highlighted by the National Council on Household Employment’s 1928 report on the problem. The institution of the servant class was in collapse; would-be masters were priced out of the market by rising wages for factory jobs and wider working opportunities for women (notably as typists).
But there was a supply-side factor as well; potential hires were unwilling to be servants and have masters – increasingly reluctant to be in service even when such jobs were still the best return they could get on their labor. The economic collapse of personal service coincided with an increasing rejection of the social stratification that had gone with it. Society as a whole became flatter and much more meritocratic.
There are unwelcome but powerful reasons to expect that this trend has already begun to reverse.
An early bellwether was Murray and Herrnstein’s The Bell Curve in 1994; one of their central concerns was that meritocratic elevation of the brightest out of various social strata and ethnicities of poorer folks might exert a dyscultural effect, depriving their birth peers of talent and leadership. They also worried that a society increasingly run by its cognitive elites would complexify in ways that would make life progressively more difficult for those on the wrong end of the IQ bell curve, eventually driving many out of normal economic life and into crime.
What they barely touched was the implication that these trends might combine to produce increased social stratification – the bright getting richer and the dull getting poorer, driving the ends of the SES scale further apart in a self-reinforcing way.
Only a few years later social scientists began noticing that assortative mating among the new meritocratic elite was a thing. What this hints at is that meritocracy may be driving us towards a society that is not just economically but genetically stratified.
Now comes Genetic analysis of social-class mobility in five longitudinal studies, a powerful meta-analysis summarized here. The takeaway from this paper is that upward social mobility is predicted by genetics. And, as the summary notes: “[H]igher SES families tend to have higher polygenic scores on average [and thus more upward mobility] — which is what one might expect from a society that is at least somewhat meritocratic.”
Indeed, the obvious historical interpretation of this result is that this is where meritocracy got us. At the beginning of the Flat Century meritocrats had a lot of genetic outliers to uplift out of what they called the “deserving poor”, which is another way of saying that back then, the genetic potential for upward mobility was more widely distributed in lower SESes because it had not yet been selected out by the uplifters. This model is consistent with what primary sources tell us people believed about themselves and their peers.
But now it’s 2018. Poverty cultures are reaching down to unprecedented levels of self-degradation; indicators of this are out-of-wedlock births, rates of drug abuse, and levels of interpersonal violence and suicide. Even as American society as a whole is getting steadily richer, more peaceful and less crime-ridden, its lowest SES tiers are going to hell in a handbasket. And not just the usual urban minority suspects, either, but poor whites as well; this is the burden of books like Charles Murray’s Coming Apart and J. D. Vance’s Hillbilly Elegy, and of the opioid-abuse statistics.
It’s hard to look at this and not see the prophecies of The Bell Curve, a quarter century ago, coming hideously true. We have assorted ourselves into increasing cognitive inequality by class, and the poor are paying an ever heavier price for this. Furthermore, the natural outcome of the process is that average IQ and other class-differentiating abilities are on their way to becoming genetically locked in.
The last jaw of the trap is the implosion of jobs for unskilled and semi-skilled labor. Retail, a traditional entry ramp into the workforce, has been badly hit by e-commerce, and that’s going to get worse. Fast-food chains are automating as fast as political morons pass “living wage” laws; that’s going to have an especially hard impact on minorities.
But we ain’t seen nothing yet; there’s a huge disruption coming when driverless cars and trucks wipe out an entire tier of the economy related to commercial transport. That’s 1 in 15 workers in the U.S., overwhelmingly from lower SES tiers. What are they going to do in the brave new world? What are their increasingly genetically disadvantaged children going to do?
Here’s where we jump into science fiction, because the only answer I can see is: become servants. And that is how the Flat Century dies. Upstairs, downstairs isn’t just our past, it’s our future. Because in a world where production of goods and routinized service is increasingly dominated by robots and AI, the social role of servant as a person who takes orders will increasingly be the only thing that an unskilled person has left to offer above the economic level of digging ditches or picking fruit.
I fear that with the reappearance of a servant class the wonderful egalitarianism of the America we have known will fade, to be replaced by a much more hierarchical and status-bound order. Victorian homilies about knowing your place will once again describe a sound adaptive strategy. The rich will live in mansions again, because the live-in help has to sleep somewhere…
This prospect disgusts me; I’m a child of the Flat Century, a libertarian. But I’ve been increasingly seeing it as inevitable, and the genetic analysis I previously cited has tipped me over into writing about it.
Some people who seem dimly to apprehend what’s coming are talking up universal basic income as a solution. This is the long-term idiocy corresponding exactly to the short-term idiocy of the $15-an-hour-or-fight campaigners. UBI would be a trap, not a solution, and in any case has the usual problem of schemes that rely on other people’s money – as the demands of the clients increase you run out of it, and what then?
There is only one way out of this, and that’s science-fictional too. We’ll need to figure out how to fight the economic and genetic drift towards an ability-stratified society by intervening at the root causes. Drugs to make people smarter; germ-line manipulation to make their kids brighter. If we can narrow the cognitive-ability spread enough, the economic forces driving increasing divergence between upper and lower SES will abate.
There’s a good novel in this scenario, I think. Thirty years from now in a neo-Victorian U.S. full of manors, a breakthrough discovery in intelligence amplification gets made. Human nature being what it is, evil people who like their place at the top of a pecking order – and good people who fear destabilization of society – will want to suppress and control it. What comes next?
In the real world, I don’t want to be living in that novel at age 90; it would be a miserable place for too many, heavy with resentment and curdled dreams. So let’s get on that technical problem; intelligence increase now, dammit!
July 5, 2018
A Century of Findings on Intellectual Precocity: some highlights
The paper From Terman to Today: A Century of Findings on Intellectual Precocity does a lot of mythbusting.
The recently popular notion that IQ > 120 has little incremental utility is dead false. Even small differences in IQ predict significant differences in creative output and odds of having a top-tier income.
Gifted children (and adults) are not fragile creatures with chronic emotional problems; they are “highly effective and resilient” individuals.
Not stated, but implied: IQ measurement in the upper ranges (above 137) is measuring something precisely enough to justify real-world predictions that differ significantly even over single-digit spans.
Not stated, but implied: Multifactor theories of intelligence are bunk. To a good first approximation there is only g. Otherwise the shapes of the bottom four outcome curves in Figure 3 would have to be more divergent than they are.
“g, fluid reasoning ability, general intelligence, general mental ability, and IQ essentially denote the same overarching construct”
“if graduation from college were based on demonstrated knowledge rather than time in the educational system, a full 15% of the entering freshmen class would be deemed ready to graduate.”
“Failure to provide for differences among students is perhaps the greatest source of inefficiency in education.”
“Overall, there does not seem to be an ability threshold [even] within the top 1% beyond which more ability does not matter.”
A marked characteristic of the profoundly gifted is “willingness to work long hours.”
July 3, 2018
Survival mode
I spent 20 minutes under general anesthesia this morning, and had an odd memory afterwards.
It was nothing serious – my first screening colonoscopy, things looked OK, come back in five years – but I hadn’t been under general anesthesia in 40 years (since having molars removed as a teen) and I was self-monitoring carefully.
When I came out of it, I brought two memories with me. One was that I had been aware of people talking around me. The anesthesiologist had told me that might happen, and I wouldn’t have been surprised by it anyway; I’ve read of that effect.
This tells you human beings are really social animals – so much so that we’re partially alert to people-talk even when we’re knocked out. After all (gasp!) our status might change…
The odd thing I surfaced with was memory of some mental processing I’d been doing while unconscious. It appears my brain was running in a sort of survival-alert loop, constantly evaluating whether it could hear or feel or smell danger cues sufficient to wake me up. What I remembered was the operating noise of that loop running.
Of course it makes complete evolutionary sense that we’d have a mechanism like that. And we behave like we do, too; unfamiliar noises wake us up from sleep, familiar ones don’t. There’s got to be some neural processing going on evaluating familiarity.
What is odd is that I’ve never heard or read of anyone else remembering that operating noise. I know of no term of art for it in science, nor any match to it in the literature of mystical introspection. It’s not the free-associative (“drunken monkey”) chatter beneath normal consciousness, but something much leaner and more task-focused: “Wake the boss? Wake the boss? Wake the boss?…”
I suppose it’s just barely possible I’m the first to both keep the memory and write about it – not many people have been experimental mystics for forty years and have my ability to self-monitor, and maybe there’s something about particular kinds of anesthesia that makes it easier not to lose continuity of consciousness than waking from normal sleep.
Still, this seems unlikely. One would think there’d been enough mystics in auto accidents by now to collect reports on post-operative recovery that would include memories like this.
Can any of my readers point at something relevant?
