Eric S. Raymond's Blog, page 45
April 4, 2014
Pushing back against the bullies
When I heard that Brendan Eich had been forced to resign his new job as CEO at Mozilla, my first thought was “Congratulations, gay activists. You have become the bullies you hate.”
On reflection, I think the appalling display of political thuggery we’ve just witnessed demands a more muscular response. Eich was forced out for donating $1000 to an anti-gay-marriage initiative? Then I think it is now the duty of every friend of free speech and every enemy of political bullying to pledge not only to donate $1000 to the next anti-gay-marriage initiative to come along, but to say publicly that they have done so as a protest against bullying.
This is my statement that I am doing so. I hope others will join me.
It is irrelevant whether we approve of gay marriage or not. The point here is that bullying must have consequences that deter the bullies, or we will get more of it. We must let these thugs know that they have sown dragon’s teeth, defeating themselves. Only in this way can we head off future abuses of similar kind.
And while I’m at it – shame on you, Mozilla, for knuckling under. I’ll switch to Chrome over this, if it’s not totally unusable.
April 3, 2014
Zero Marginal Thinking: Jeremy Rifkin gets it all wrong
A note from the publisher says Jeremy Rifkin himself asked them to ship me a copy of his latest book, The Zero Marginal Cost Society. It’s obvious why: in writing about the economics of open-source software, he thinks I provided one of the paradigmatic cases of what he wants to write about – the displacement of markets in scarce goods by zero-marginal-cost production. Rifkin’s book is an extended argument that this is a rising trend which will soon obsolesce not just capitalism as we have known it, but many forms of private property as well.
Alas for Mr. Rifkin, my analysis of how zero-marginal-cost reproduction transforms the economics of software also informs me why that logic doesn’t obtain for almost any other kind of good – why, in fact, his general thesis is utterly ridiculous. But plain common sense refutes it just as well.
Here is basic production economics: the cost of a good can be divided into two parts. The first is the setup cost – the cost of assembling the people and tools to make the first copy. The second is the incremental – or, in a slight abuse of terminology, “marginal” – cost of producing unit N+1 after you have produced the first copy.
In a free market, normal competitive pressure pushes the price of a good towards its marginal cost. It doesn’t get there immediately, because manufacturers need to recoup their setup costs. It can’t stay below marginal cost, because if it did the manufacturer would lose money on every sale and the business would crash.
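The relationship between setup cost, marginal cost, and unit price described above can be made concrete with a toy calculation. This is a minimal sketch with invented numbers, not anything from Rifkin's book or my papers: average cost per unit converges toward marginal cost as volume grows, but can never fall below it while the producer stays solvent.

```python
# Toy two-part cost model: setup cost amortized over all units produced,
# plus a constant incremental ("marginal") cost per unit.

def average_unit_cost(setup_cost: float, marginal_cost: float, units: int) -> float:
    """Total cost of a production run, spread over every unit in it."""
    return (setup_cost + marginal_cost * units) / units

SETUP = 100_000.0   # hypothetical cost of making the first copy
MARGINAL = 2.0      # hypothetical cost of each additional copy

for n in (10, 1_000, 100_000, 10_000_000):
    print(f"{n:>10} units: average cost per unit = {average_unit_cost(SETUP, MARGINAL, n):.4f}")
```

At ten units the setup cost dominates; at ten million the average is within a penny of the $2 marginal cost. The competitive price floor is that marginal cost, never zero unless the marginal cost itself is zero.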
In this book, Rifkin is fascinated by the phenomenon of goods for which the marginal cost of production is zero, or so close to zero that it can be ignored. All of the present-day examples of these he points at are information goods – software, music, visual art, novels. He joins this to the overarching obsession of all his books, which are variations on a theme of “Let us write an epitaph for capitalism”.
In doing so, Rifkin effectively ignores what capitalists do and what capitalism actually is. “Capital” is wealth paying for setup costs. Even for pure information goods those costs can be quite high. Music is a good example; it has zero marginal cost to reproduce, but the first copy is expensive. Musicians must own costly instruments, be paid to perform, and require other capital goods such as recording studios. If those setup costs are not reliably priced into the final good, production of music will not remain economically viable.
Fifteen years ago I pointed out in my paper The Magic Cauldron that the pricing models for most proprietary software are economically insane. If you price software as though it were (say) consumer electronics, you either have to stiff your customers or go broke, because the fixed lump of money from each unit sale will always be overrun by the perpetually-rising costs of technical support, fixes, and upgrades.
I said “most” because there are some kinds of software products that are short-lived and have next to no service requirements; computer games are the obvious case. But if you follow out the logic, the sane thing to do for almost any other kind of software usually turns out to be to give away the product and sell support contracts. I was arguing this because it knocks most of the economic props out from under software secrecy. If you can sell support contracts at all, your ability to do so is very little affected by whether the product is open-source or closed – and there are substantial advantages to being open.
Rifkin cites me in his book, but it is evident that he almost completely misunderstood my arguments in two different ways, both of which bear on the premises of his book.
First, software has a marginal cost of production that is effectively zero, but that’s true of all software rather than just open source. What makes open source economically viable is the strength of secondary markets in support and related services. Most other kinds of information goods don’t have these. Thus, the economics favoring open source in software are not universal even in pure information goods.
Second, even in software – with those strong secondary markets – open-source development relies on the capital goods of software production being cheap. When computers were expensive, the economics of mass industrialization and its centralized management structures ruled them. Rifkin acknowledges that this is true of a wide variety of goods, but never actually grapples with the question of how to pull capital costs of those other goods down to the point where they no longer dominate marginal costs.
There are two other, much larger, holes below the waterline of Rifkin’s thesis. One is that atoms are heavy. The other is that human attention doesn’t get cheaper as you buy more of it. In fact, the opposite tends to be true – which is exactly why capitalists can make a lot of money by substituting capital goods for labor.
These are very stubborn cost drivers. They’re the reason Rifkin’s breathless hopes for 3-D printing will not be fulfilled. Because 3-D printers require feedstock, the marginal cost of producing goods with them has a floor well above zero. That ABS plastic, or whatever, has to be produced. Then it has to be moved to where the printer is. Then somebody has to operate the printer. Then the finished good has to be moved to the point of use. None of these operations has a cost that is driven to zero, or near zero at scale. 3-D printing can increase efficiency by outcompeting some kinds of mass production, but it can’t make production costs go away.
An even more basic refutation of Rifkin is: food. Most of the factors of production that bring (say) an ear of corn to your table have a cost floor well above zero. Even just the transportation infrastructure required to get your ear of corn from farm to table requires trillions of dollars of capital goods. Atoms are heavy. Not even “near-zero” marginal cost will ever happen here, let alone zero. (Late in the book, Rifkin argues for a packetized “transportation Internet” – a good idea in its own terms, but not a solution because atoms will still be heavy.)
It is essential to Rifkin’s argument that he constantly fudges the distinction between “zero” and “near zero” in marginal costs. Not only does he wish away capital expenditure, he tries to seduce his readers into believing that “near” can always be made negligible. Most generally, Rifkin’s take on production economics calls to mind the famous Orwell quote: “One has to belong to the intelligentsia to believe things like that: no ordinary man could be such a fool.”
But even putting all those mistakes aside, there is another refutation of Rifkin. In his brave impossible new world of zero marginal costs for goods, who is going to fix your plumbing? If Rifkin tries to negotiate price with a plumber on the assumption that the plumber’s hours after the first have zero marginal cost, he’ll be in for a rude awakening.
The book is full of other errors large and small. The particular offence for which I knew Rifkin before this book – wrong-headed attempts to apply the laws of thermodynamics to support his desired conclusions – reappears here. As usual, he ignores the difference between thermodynamically closed systems (which must experience an overall increase in entropy) and thermodynamically open systems in which a part we are interested in (such as the Earth’s biosphere, or an economy) can be counter-entropic by internalizing energy from elsewhere into increased order. This is why and how life exists.
Another very basic error is Rifkin’s failure to really grasp the most important function of private property. He presents it only as a store of value and a convenience for organizing trade, one that accordingly becomes less necessary as marginal costs go towards zero. But even if atoms were weightless and human attention free, property would still function as a definition of the sphere within which the owner’s choices are not interfered with. The most important thing about owning land (or any rivalrous good, clear down to your toothbrush) isn’t that you can sell it, but that you can refuse intrusions by other people who want to rivalrously use it. When Rifkin notices this at all, he thinks it’s a bad thing.
The book is a blitz of trend-speak. Thomas Kuhn! The Internet of Things! 3D printing! Open source! Big data! Prosumers! But underneath the glossy surface are gaping holes in the logic. And the errors follow a tiresomely familiar pattern. What Rifkin is actually retailing, whether he consciously understands it that way or not (and he may not), is warmed-over Marxism – hostility to private property, capital, and markets perpetually seeking a rationalization. The only innovation here is that for the labor theory of value he has substituted a post-labor theory of zero value that is even more obviously wrong than Marx’s.
All the indicia of cod-Marxism are present. False identification of capitalism with vertical integration and industrial centralization: check. Attempts to gin up some sort of an opposition between voluntary but non-monetized collaboration and voluntary monetized trade: check. Valorizing nifty little local cooperatives as though they actually scaled up: check. Writing about human supercooperative behavior as though it falsifies classical and neoclassical economics: check. At times in this book it’s almost as though Rifkin is walking by a checklist of dimwitted cliches, ringing them like bells in a carillon.
Perhaps the most serious error, ultimately, is the way Rifkin abuses the notion of “the commons”. This has a lot of personal weight for me, because I have lived in and helped construct a hacker culture that maintains a huge software commons and continually pushes for open, non-proprietary infrastructure. I have experienced, recorded, and in some ways helped create the elaborate network of manifestos, practices, expectations, how-to documents, institutions, and folk stories that sustains this commons. I think I can fairly claim to have made the case for open infrastructure as forcefully and effectively as anyone who has ever tried to.
Bluntly put, I have spent more than thirty years actually doing what Rifkin is glibly intellectualizing about. From that experience, I say this: the concept of “the commons” is not a magic wand that banishes questions about self-determination, power relationships, and the perils of majoritarianism. Nor is it a universal solvent against actual scarcity problems. Maintaining a commons, in practice, requires more scrupulousness about boundaries and respect for individual autonomy rather than less. Because if you can’t work out how to maximize long-run individual and joint utility at the same time, your commons will not work – it will fly apart.
Though I participate in a huge commons and constantly seek to extend it, I seldom speak of it in those terms. I refrain because I find utopian happy-talk about “the commons” repellent. It strikes me as at best naive and at worst quite sinister – a gauzy veil wrapped around clapped-out collectivist ideologizing, and/or an attempt to sweep the question of who actually calls the shots under the rug.
In the open-source community, all our “commons” behavior ultimately reduces to decisions by individuals, the most basic one being “participate this week/day/hour, or not?” We know that it cannot be otherwise. Each participant is fiercely protective of the right of all others to participate only voluntarily and on terms of their own choosing. Nobody ever says that “the commons” requires behavior that individuals themselves would not freely choose, and if anyone ever tried to do so they would be driven out with scorn. The opposition Rifkin wants to find between Lockean individualism and collaboration does not actually exist, and cannot.
Most of us also understand, nowadays, that attempts to drive an ideological wedge between our commons and “the market” are wrong on every level. Our commons is in fact a reputation market – one that doesn’t happen to be monetized, but which has all the classical behaviors, equilibria, and discovery problems of the markets economists usually study. It exists not in opposition to monetized trade, free markets, and private property, but in productive harmony with all three.
Rifkin will not have this, because for the narrative he wants these constructions must conflict with each other. To step away from software for an instructive example of how this blinds him, the way Rifkin analyzes the trend towards automobile sharing is perfectly symptomatic.
He tells a framing story in which individual automobile ownership has been a central tool and symbol of individual autonomy (true enough), then proposes that the trend towards car-sharing is therefore necessarily a willing surrender of autonomy. The actual fact – that car-sharing is popular mainly in urban areas because it allows city-dwellers to buy more mobility and autonomy at a lower capital cost – escapes him.
Car sharers are not abandoning private property, they’re buying a service that prices personal cars out of some kinds of markets. Because Rifkin is all caught up in his own commons rhetoric, he doesn’t get this and will underestimate what it takes for car sharing to spread out of cities to less densely populated areas where it has a higher discovery and coordination cost (and the incremental value of individual car ownership is thus higher).
The places where open source (or any other kind of collaborative culture) clashes with what Rifkin labels “capitalism” are precisely those where free markets have been suppressed or sabotaged by monopolists and would-be monopolists. In the case of car-sharing, that’s taxi companies. For open source, it’s Microsoft, Apple, the MPAA/RIAA and the rest of the big-media cartel, and the telecoms oligopoly. Generally there is explicit or implicit government market-rigging in play behind these – which is why talking up “the commons” can be dangerous, tending to actually legitimize such political power grabs.
It is probably beyond hope that Jeremy Rifkin himself will ever understand this. I write to make it clear to others that he cannot recruit the successes of open-source software for the anti-market case he is trying to make. His grasp of who we are, his understanding of how to make a “commons” function at scale, and his comprehension of economics in general are all fatally deficient.
March 31, 2014
Hackers and anonymity: some evidence
When I have to explain how real hackers differ from various ignorant media stereotypes about us, I’ve found that one of the easiest differences to explain is transparency vs. anonymity. Non-techies readily grasp the difference between showing pride in your work by attaching your real name to it versus hiding behind a concealing handle. They get what this implies about the surrounding subcultures – honesty vs. furtiveness, accountability vs. shadiness.
One of my regular commenters is in the small minority of hackers who regularly use a concealing handle. Because he pushed back against my assertion that this is unusual, counter-normative behavior, I set a bit that I should keep an eye out for evidence that would support a frequency estimate. And I’ve found some.
Recently I’ve been doing reconstructive archeology on the history of Emacs, the goal being to produce a clean git repository for browsing of the entire history (yes, this will become the official repo after 24.4 ships). This is a near-unique resource in a lot of ways.
One of the ways is the sheer length of time the project has been active. I do not know of any other open-source project with a continuous revision history back to 1985! The size of the contributor base is also exceptionally large, though not uniquely so – no fewer than 574 distinct committers. And, while it is not clear how to measure centrality, there is little doubt that Emacs remains one of the hacker community’s flagship projects.
This morning I was doing some polishing of the Emacs metadata – fixing up minor crud like encoding errors in committer names – and I made a list of names that didn’t appear to map to an identifiable human being. I found eight, of which two are role-based aliases – one for a dev group account, one for a build engine. That left six unidentified individual contributors (I actually shipped 8 to the emacs-devel list, but two more turned out to be readily identifiable within a few minutes after that).
I’m looking at this list of names, and I thought “Aha! Handle frequency estimation!”
That’s a frequency of just about exactly 1% for IDs that could plausibly be described as concealing handles in commit logs. That’s pretty low, and a robust difference from the cracker underground in which 99% use concealing handles. And it’s especially impressive considering the size and time depth of the sample.
And at that, this may be an overestimate. As many as three of those IDs look like they might actually be display handles – habitual nicknames that aren’t intended as disguise. That is a relatively common behavior with a very different meaning.
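The estimate above is simple enough to check on the back of an envelope. The counts are the ones from the post; the arithmetic is the only thing added here.

```python
# Handle-frequency estimate from the Emacs committer metadata.
total_committers = 574      # distinct committers in the Emacs history
concealing_handles = 6      # unidentifiable individual IDs after removing role aliases

frequency = concealing_handles / total_committers
print(f"{concealing_handles}/{total_committers} = {frequency:.2%}")  # → 6/574 = 1.05%
```

That lands right at the "just about exactly 1%" figure – and if some of those six are really display handles rather than disguises, the true rate is lower still.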
March 29, 2014
Ugliest…repository…conversion…ever
Blogging has been light lately because I’ve been up to my ears in reposurgeon’s most serious challenge ever. Read on for a description of the ugliest heap of version-control rubble you are ever likely to encounter, what I’m doing to fix it, and why you do in fact care – because I’m rescuing the history of one of the defining artifacts of the hacker culture.
Imagine a version-control history going back to 1985 – yes, twenty-nine years of continuous development by no fewer than 579 people. Imagine geologic strata left by no fewer than five version-control systems – RCS, CVS, Arch, bzr, and git. The older portions of the history are a mess, with incomplete changeset coalescence in the formerly-CVS parts and crap like paths prefixed with “=” to mark RCS masters of deleted files. There are hundreds of dead tags and dozens of dead branches. Comments and changelogs are rife with commit-reference cookies that no longer make sense in the view through more modern version-control systems.
Your present view of the history is a sort of two-headed monster. The official master is in bzr, but because of some strange deficiencies in bzr’s export tools (which won’t be fixed because bzr is moribund) you have to work from a poor-quality read-only git mirror that gets automatically rebuilt from the bzr history every 15 minutes. But you can’t entirely ignore the bzr master; you have to write custom code to data-mine it for bzr-related metadata that you need for fixing references in your conversion.
Because bzr is moribund, your mission is to produce a full standalone git conversion that doesn’t suck. Criteria for “not sucking” include (a) complete changeset coalescence in the RCS and CVS parts, (b) fixing up CVS and bzr commit references so a human being browsing through git can actually follow them, (c) making sense out of the mess that is RCS deletions in the oldest part of the history.
Also, because the main repo is such a disaster area, there is at least one satellite repo for a Mac OS X port that really wants to be a branch of the main repo, but isn’t. (Instead it’s a two-tailed mutant clone of a nine-year-old version of the main repo.) You’ve been asked to pull off a cross-repository history graft so that after conversion day it will look as though the whole nine years of OS X port history has been a branch in this repo from the beginning.
Just to put the cherry on top, your customers – the project dev group – are a notoriously crusty lot who, on the whole, do not go out of their way to be helpful. If not for a perhaps surprising degree of support from the project lead the full git conversion wouldn’t be happening at all. Fortunately, the lead groks it is important in order to lower the barrier to entry for new talent.
I have been working hard on this conversion for eight solid weeks. Supporting it has required that I write several major new features in reposurgeon, including a macro facility, large extensions to the selection-set sublanguage, and facilities for generic search-and-replace on both metadata and blobs.
Experiments and debugging are a pain in the ass because the repository is so big and gnarly that a single full conversion run takes around ten hours. The lift script is over 800 lines of complex reposurgeon commands – and that’s not counting the six auxiliary scripts used to audit and generate parts of it, nor an included file of mechanically-generated commands that is over two thousand lines long.
You might very well wonder what could make a repository conversion worth that kind of investment of time and effort. That’s a good question, and one of those for which you either have enough cultural context that a one-word answer will suffice or else hundreds of words of explanation wouldn’t be enough.
The one word is: Emacs.
March 28, 2014
How should cvs-fast-export be properly ignorant?
I just shipped version 1.10 of cvs-fast-export with a new feature: it now emits fast-import files that contain CVS’s default ignore patterns. This is a request for help from people who know CVS better than I do.
I’ve written before about the difference between literal and literary repository translations. When I write translation tools, one of my goals is for the experience of using the converted repository to be as though the target system had been in use all along. Notably, if the target system has changesets, a dumb file-oriented conversion from CVS just isn’t good enough.
Another goal is for the transition to be seamless; that is, without actually looking for it, a developer browsing the history should not need to be aware of when the transition happened. This implies that the ignore patterns of the old repository should be emulated in the new one – no object files (for example) suddenly appearing under git status when they were invisible under CVS.
There is one subtle point I’m not sure of, though, and I would appreciate correction from anyone who knows CVS well enough to say. If you specify a .cvsignore, does it add to the default ignore patterns or replace them?
My current assumption in 1.10 is that it adds to them. If someone corrects me on this, I’ll remove a small amount of code and ship 1.11.
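For readers who want the additive assumption spelled out, here is a minimal sketch of the semantics 1.10 assumes. This is illustrative pseudologic, not cvs-fast-export's actual code; the default pattern list is abridged, and the function name is invented. (Real CVS also treats a lone "!" entry as resetting the accumulated list, which is modeled below.)

```python
# Hypothetical model of the semantics assumed in cvs-fast-export 1.10:
# a project's .cvsignore ADDS to CVS's default ignore patterns rather
# than replacing them. Abridged default list, for illustration only.

CVS_DEFAULT_IGNORES = [
    "core", "*.o", "*.a", "*.so", "*.obj", "*.exe", "*~", "*.old", "*.bak",
]

def effective_ignores(cvsignore_lines):
    """Combine the defaults with a project's .cvsignore, additively."""
    patterns = list(CVS_DEFAULT_IGNORES)
    for line in cvsignore_lines:
        line = line.strip()
        if line == "!":           # in real CVS, "!" clears the accumulated list
            patterns = []
        elif line:
            patterns.extend(line.split())
    return patterns

# A project that ignores its build directory still ignores object files:
print(effective_ignores(["build/", "*.log"]))
```

Under the replace-semantics alternative, the function would start from an empty list instead of the defaults, and the emitted fast-import stream would carry only the project's own patterns.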
March 22, 2014
All the Tropes That Are My Life
Sometimes art imitates life. Sometimes life imitates art. So, for your dubious biographical pleasure, here is my life in tropes. Warning: the TV Tropes site is addictive; beware of chasing links lest it eat the rest of your day. Or several days.
First, a trope disclaimer: I am not the Eric Raymond from Jem. As any fule kno, I am not a power-hungry Corrupt Corporate Executive; if I were going to be evil, it would definitely be as a power-hungry Mad Scientist. Learn the difference; know the difference!
I am, or I like to think of myself as, a Smart Guy who Minored in Badass. I show some tendency towards Boisterous Bruiser, having slightly too physical a presence to fit Badass Bookworm perfectly. And yes, I’m a Playful Hacker who is Proud To Be A Geek.
I Like Swords, but I’m actually a Musketeer who cheerfully uses firearms. (I have also been known to wield the dreaded Epic Flail).
It is certainly the case that I married the required Fiery Redhead. I own a Badass Longcoat but don’t wear it often, as it’s heavy and not very comfortable.
Some people think I’m a Glory Seeker, but in fact I don’t much like being famous; one of the few things I genuinely fear is Becoming the Mask.
March 16, 2014
Defending Andrew Auernheimer
There’s a documentary, The Hedgehog and the Hare, being made about the prosecution of Andrew Auernheimer (aka “the weev”). The filmmaker wants to interview me for background and context on the hacker culture. The following is a lightly edited version of the backgrounder I sent him so he could better prepare for the interview.
I’ve watched the trailer. I’ve googled “weev” and read up on his behavior and the legal case. The following note is intended to be a background on culture, philosophy, and terminology that will help you frame questions for the face-to-face interview.
Wikipedia describes Andrew Auernheimer as a “grey-hat hacker”. There are a lot of complications and implications around that term that bear directly on what “weev” was doing and what he thought he was doing. One good way to approach these is to survey the complicated history of the word “hacker”.
My authority to explain this rests on having edited The New Hacker’s Dictionary, which is generally considered the definitive lexicon of the culture it describes; also How To Become A Hacker, which you should probably read first.
In its original and still most correct sense, the word “hacker” describes a member of a tribe of expert and playful programmers with roots in 1960s and 1970s computer-science academia, the early microcomputer experimenters, and several other contributory cultures including science-fiction fandom.
Through a historical process I could explain in as much detail as you like, this hacker culture became the architects of today’s Internet and evolved into the open-source software movement. (I had a significant role in this process as historian and activist, which is why my friends recommended that you talk to me.)
People outside this culture sometimes refer to it as “old-school hackers” or “white-hat hackers” (the latter term also has some more specific shades of meaning). People inside it (including me) insist that we are just “hackers” and using that term for anyone else is misleading and disrespectful.
Within this culture, “hacker” applied to an individual is understood to be a title of honor which it is arrogant to claim for yourself. It has to be conferred by people who are already insiders. You earn it by building things, by a combination of work and cleverness and the right attitude. Nowadays “building things” centers on open-source software and hardware, and on the support services for open-source projects.
There are – seriously – people in the hacker culture who refuse to describe themselves individually as hackers because they think they haven’t earned the title yet – they haven’t built enough stuff. One of the social functions of tribal elders like myself is to be seen to be conferring the title, a certification that is taken quite seriously; it’s like being knighted.
The first key thing for you to understand is that Andrew Auernheimer is not a member of the (genuine, old school, white-hat) hacker culture. One indicator of this is that he uses a concealing handle. Real hackers do not do this. We are proud of our work and do it in the open; when we use handles, they are display behaviors rather than cloaks. (There are limited exceptions for dealing with extremely repressive and totalitarian governments, when concealment might be a survival necessity.)
Another bright-line test for “hacker culture” is whether you’ve ever contributed code to an open-source project. It does not appear that Auernheimer has done this. He’s not known among us for it, anyway.
A third behavior that distances Auernheimer from the hacker culture is his penchant for destructive trolling. While there is a definite merry-prankster streak in hacker culture, trolling and nastiness are frowned upon. Our pranking style tends more towards the celebration of cleverness through elaborate but harmless practical jokes, intricate technical satires, and playful surrealism. Think Ken Kesey rather than Marquis de Sade.
Now we come to the reason why Auernheimer calls himself a hacker.
There is a cluster of geek subcultures within which the term “hacker” has very high prestige. If you think about my earlier description it should be clear why. Building stuff is cool, it’s an achievement.
There is a tendency for members of those other subcultures to try to appropriate hacker status for themselves, and to emulate various hacker behaviors – sometimes superficially, sometimes deeply and genuinely.
Imitative behavior creates a sort of gray zone around the hacker culture proper. Some people in that zone are mere posers. Some are genuinely trying to act out hacker values as they (incompletely) understand them. Some are ‘hacktivists’ with Internet-related political agendas but who don’t write code. Some are outright criminals exploiting journalistic confusion about what “hacker” means. Some are ambiguous mixtures of several of these types.
Andrew Auernheimer lives in that gray zone. He’s one of its ambiguous characters – part chaotic prankster, part sincere hacktivist, possibly part criminal. The proportions are not clear to me – and may not even be clear to him.
Like many people in that zone, he aspires to the condition of hacker and may sincerely believe he’s achieved it (his first lines in your trailer suggest that). What he probably doesn’t get is that attitude isn’t enough; you have to have competence. A real hacker would reply, skeptically: “Show me your code.” Show your work. What have you built, exactly? Nasty pranking and security-breaking don’t count…
Now, having explained what separates “weev” from the hacker culture, I’m going to explain why his claim is not entirely bogus. I can’t consider him a hacker on the evidence I have available, but I’m certain he’s had hacker role models. Plausibly one of them might be me…
His stubborn libertarian streak, his insistence that you can only confirm your rights by testing their boundaries, is like us. So is his belief in the propaganda of the deed – of acting transgressively out of principle as an example to others.
Combine this with a specific interest in changing the world through adroit application of technology and you have someone who is in significant ways very much like us. I think his claim to be a hacker is mistaken and shows ignorance of the full weight and responsibilities of the term, but it’s not crazy. If he wrote code and dropped the silly handle and gave up trolling he might become one of us.
But even though Andrew Auernheimer doesn’t truly seem to be one of us, we don’t have much option but to join in his defense. He’s a shady and dubious character by our standards, but we are all too aware that the kind of vague law and prosecutorial overreach that threw him in jail could be turned against us for doing things that are normal parts of our work.
Sometimes maintaining civil liberties requires rallying around people whose behavior and ethics are questionable. That, I think, sums up how most hackers who are aware of his troubles feel about Andrew Auernheimer.
March 13, 2014
Country-music hell and fake accents
A few months back I had to do a two-hour road trip with A&D regular Susan Sons, aka HedgeMage, who is an interesting and estimable person in almost all ways except that she actually … likes … country music.
I tried to be stoic when stupid syrupy goo began pouring out of the car radio, but I didn’t do a good enough job of hiding my discomfort to prevent her from noticing within three minutes flat. “If I leave this on,” she observed accurately to the 11-year-old in the back seat, “Eric is going to go insane.”
Since said 11-year-old more or less required music to prevent him from becoming hideously bored and restive, all three of us were caught between two fires. Susan, ever the pragmatist, went looking through her repertoire for pieces I would find relatively inoffensive.
After a while this turned into a sort of calibration exercise – she’d put something on, assay my reaction to see where in the range it fell between mere spasmodic twitching and piteous pleas to make it stop, and try to figure what the actual drive-Eric-insane factors in the piece were.
After a while a curious and interesting pattern emerged…
I already knew I had some preferences in this domain. I dislike anything with steel guitars in it; conversely, I am less repelled by and can sometimes even enjoy subgenres like bluegrass, fiddle music and Texas swing that are centered on other instruments. I find old-style country, closer to its Irish traditional roots, far easier to take than the modern Nashville sound. Blues influence also helps.
But it turns out that most of these preferences are strongly correlated with one very simple binary-valued property, something Susan had the domain knowledge to identify consciously after a sufficient sample but I did not.
It turns out that what I hate above all else about country music is singers with faked accents.
I had no idea, but there’s a lot of this going around, apparently. The rules of the modern country idiom require performers who don’t naturally speak with a thick Southern-rural accent to affect one when they sing. The breakthrough moment when we figured out that this was what was making me want to chew my own leg off to escape it was when she cued up a song by some guy named Clint Black who really natively has that accent. We discovered that even though he plays the modern Nashville sound, the result only makes me feel mildly uncomfortable, as opposed to tortured.
The first interesting thing about this is that I was completely unaware that I had been reacting to the fake/nonfake distinction. But once we recognized it, the entire pattern of my subgenre preferences made sense. Duh, of course I’d have had less-unpleasant experiences with styles that are less vocal-centered. And, in general, the longer ago a piece of country music was recorded, the more likely that the singers’ accents were genuine.
I think it is even quite likely that I acquired a conditioned dislike of steel guitars precisely because they are strongly co-morbid with fake accents.
It is not news that there is something distinctly unusual about the way I acquire and process language phonology: recently, for example, I wrote about having absorbed the phonology of German even though I don’t speak it, and I have previously noted the fact that I pick up speech accents very quickly on immersion (sometimes without intending to).
But this only raises more questions that belong under the “brains are weird” category. One group: what in the heck is my recognition algorithm for “fake accent”? How did I learn one? Why did I learn one? What in the hell does my unconscious mind find useful about this?
A second is: how reliable is it? We think, from Susan’s sample of a couple dozen tracks, that it’s pretty robust, at least relative to her knowledge about singer idiolects. But in a controlled experiment in which I was trying to spot fakes, how much better would I do than chance? What would my rates of false negatives and false positives be? The question is trickier than it might appear; conscious attempts to run the fake-accent recognizer might interfere with it.
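To make “better than chance” concrete: if each track is an independent fake-or-genuine call, the probability of getting k or more right out of n by pure guessing is a binomial tail. Here is a minimal sketch; the 24-track sample size echoes the couple-dozen figure above, and the hit count of 20 is purely hypothetical:

```python
from math import comb

def binomial_tail(n, k, p=0.5):
    """Probability of k or more successes in n trials, each with chance p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: correctly classifying 20 of 24 tracks when guessing gives 50%.
# A small tail probability would mean the recognizer beats chance.
print(binomial_tail(24, 20))
```

This says nothing about the interference problem, of course; a conscious attempt to run the recognizer might degrade it in ways no significance test would reveal.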
The third, and in some ways the most interesting: How did my fake-accent recognizer get tangled up with my response to music? They do communicate (nobody doubts that people with good pitch discrimination have an advantage in acquiring tonal languages) but they’re different brain subsystems; the organ of Broca doesn’t do music.
Does anyone in my audience know of research that might bear on these questions?
March 8, 2014
Which way is north on your new planet?
So, here you are in your starship, happily settling into orbit around an Earthlike world you intend to survey for colonization. You start mapping, and are immediately presented with a small but vexing question: which rotational pole should you designate as ‘North’?
There are a surprisingly large number of ways one could answer this question. I shall wander through them in this essay, which is really about the linguistic and emotive significance of compass-direction words as humans use them. Then I shall suggest a pragmatic resolution.
First and most obviously, there’s magnetic north. Our assumption ‘the planet is Earthlike’ entails a nice strong magnetic field to keep local carbon-based lifeforms from getting constantly mutated into B-movie monsters by incoming charged particles. Magnetic north is probably going to be much closer to one pole than the other; we could call that ‘North’.
Then there’s spin-axis north. This is the assignment that makes north relate to the planet’s rotation the same way it does on Earth – that is, it implies the sun setting in the west rather than the east. Not necessarily the same as magnetic north; I don’t know of any reason to think planetary magnetic fields have a preferred relationship to the spin axis.
Next, galactic north. Earth’s orbital plane is inclined about 60 degrees from the plane of the Milky Way, which defines the Galaxy’s spin-axis directions; these have been labeled “Galactic North” and “Galactic South” in accordance with the Earth rotational poles they most closely match. On our new planet we could flip this around and define planetary North so it matches Galactic North.
Finally there’s habitability north. This one is fuzzier. More than 3/4ths of Earth’s population lives in places where north is colder and south is warmer. We might want to choose ‘North’ to preserve that relationship, which is embedded pretty deeply in the language and folklore of most of Earth’s cultures. Thus, ‘North’ should be the hemisphere with the most habitable land. (Or, if you’re taking a shorter-term view, the hemisphere in which you drop your first settlement. But let’s ignore that complication for now.)
If all four criteria coincide, happiness. But how likely is that? They’re probably distributed randomly with respect to each other: fix any one as a reference, and each of the other three matches it independently with probability 1/2, which means we’ll get perfect agreement on only about one in every eight exoplanets.
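The count can be checked by brute enumeration. A minimal sketch, modeling each criterion as an independent fair coin flip between the two poles:

```python
from itertools import product

# Each of the four criteria (magnetic, spin-axis, galactic, habitability)
# independently picks one of the two rotational poles to call 'North'.
assignments = list(product([0, 1], repeat=4))  # 16 equally likely outcomes

# Perfect agreement: all four criteria name the same pole.
agree = [a for a in assignments if len(set(a)) == 1]
print(len(agree), "of", len(assignments))  # -> 2 of 16, i.e. one planet in eight
```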
But not all these criteria are equally important. Magnetic North really only matters to geophysicists and compass-makers. Galactic North is probably interesting only to stargazers.
I think we have a clear winner if spin-axis north coincides with habitability north. This choice will preserve continuity of language pretty well. If they’re opposite, and galactic north coincides with magnetic north, that’s a tiebreaker. If the tiebreakers don’t settle it, I’d go with spin-axis north.
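That tie-breaking scheme can be sketched as a small decision procedure. Function and argument names here are my own invention; each argument stands for the pole (‘A’ or ‘B’) that criterion would select:

```python
def choose_north(spin, habit, galactic, magnetic):
    """Pick which pole ('A' or 'B') to label North.

    Priority: spin-axis + habitability agreement wins outright;
    otherwise galactic + magnetic agreement breaks the tie;
    otherwise fall back to spin-axis north.
    """
    if spin == habit:
        return spin
    if galactic == magnetic:
        return galactic
    return spin

# Example: spin and habitability disagree, but galactic and magnetic agree.
print(choose_north("A", "B", "B", "B"))  # -> B
```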
But reasonable people could differ on this. Discuss; maybe we could submit a proposal to the IAU.
March 5, 2014
Causes and implications of the pause
That is the title of a paper attempting to explain (away) the 17-year nothing that happened while CAGW models were predicting warming driven by increasing CO2. CO2 increased. Measured GAT did not.
Here’s the money quote: “The most recent climate model simulations used in the AR5 indicate that the warming stagnation since 1998 is no longer consistent with model projections even at the 2% confidence level.”
That is an establishment climatologist’s cautious scientist-speak for “The IPCC’s anthropogenic-global-warming models are fatally broken. Kaput. Busted.”
I told you so. I told you so. I told you so!
I even predicted it would happen this year, yesterday on my Ask Me Anything on Slashdot. This wasn’t actually brave of me: six months ago the Economist noticed that the GAT trend was about to fall below a 5% fit to the IPCC models.
Here is my next prediction – and remember, I have been consistently right about these. The next phase of the comedy will feature increasingly frantic attempts to bolt epicycles onto the models. These epicycles will have names like “ENSO”, “standing wave” and “Atlantic Oscillation”.
All these attempts will fail, both predictively and retrodictively. It’s junk science all the way down.