Eric S. Raymond's Blog, page 52
April 15, 2013
Destroying the middle ground
Here’s a thought experiment for you. Imagine yourself in an alternate United States where the First Amendment is not, as a matter of settled law, considered to bar Federal and State governments from almost all interference in free speech. This is less unlikely than it might sound; the modern, rather absolutist interpretation of free-speech liberties did not take form until the early 20th century.
In this alternate America, there are many and bitter arguments about the extent of free-speech rights. The ground of dispute is to what extent the instruments of political and cultural speech (printing presses, radios, telephones, copying machines, computers) should be regulated by government so that use of these instruments does not promote violence, assist criminal enterprises, and disrupt public order.
The weight of history and culture is largely on the pro-free-speech side – the Constitution does say “Congress shall make no law … prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press”. And until the late 1960s there is little actual attempt to control speech instruments.
Then, in 1968, after a series of horrific crimes and assassinations inspired by inflammatory anti-establishment political propaganda, some politicians, prominent celebrities, and public intellectuals launch a “speech control” movement. They wave away all comparisons to Nazi Germany and Soviet Russia, insisting that their goal is not totalitarian control but only the prevention of the most egregious abuses in the public square.
So strong is public revulsion against the violence of 1968 that the first prohibition on speech instruments passes rapidly. The dissidents used slow, inexpensive hand-cranked mimeograph machines and hand presses to spread their poison; these “Saturday Night Specials” are banned. Slightly more capable printers still inexpensive enough to be owned by individual citizens are made subject to mandatory registration.
A few civil libertarians call out warnings but are dismissed as extremists and generally ignored. Legitimate media and publishing corporations, assured by speech-control activists that their presses will not be affected by any measures the speech-control movement has in mind, raise little protest themselves.
Strangely, the ban on Saturday Night Specials fails to reduce the ills it was intended to address. Violent dissidents and criminals, it seems, find little difficulty in stealing typewriters, copiers, and more expensive printing equipment – none of it subject to registration.
The speech-control movement insists that stricter laws regulating speech instruments are the answer. By about 1970 convicted felons are prohibited from owning typewriters. A few years later all dealers in printing supplies, telephones, radios, and other communication equipment are required to have federal licenses as a condition of business, and are subject to government audits at any time. The announced intention of these laws is to prevent dangerous speech instruments from falling into the hands of criminals and madmen.
In 1976 the National Writers’ Association, previously a rather somnolent social club best known for sponsoring speed-typing contests, is taken over in a palace coup by an insurgent gang of pro-free-speech radicals. They display an unexpected flair for grass-roots organization, and within five years have developed a significant lobbying arm in Washington D.C. They begin pushing back against speech-instrument restrictions.
But the speech-control movement seems to be winning most of the battles. In 1986 ownership of automatic so-called “class 3” press equipment is banned except for federally-licensed individuals and corporations. The media is flooded with academic studies purporting to show that illicit speech instruments cause crime and violence, though for some reason the researchers making these claims often refuse to publish their primary data sets.
In unguarded moments and friendly company the speech-control movement’s leadership describes its goal expansively as confiscation and bans on all speech instruments not under direct government control or licensing. For public consumption, however, they speak only of “common-sense regulation” – conveniently never quite achieved, and always requiring more restrictions designed to increase the costs and legal risks for individuals owning speech instruments.
Free-speech advocates begin referring to the speech-control movement’s tactics as “salami-slicing” – carving away rights one “reasonable” slice at a time until there is nothing left. Document leaks from major speech-control lobbying organizations confirm that this is their strategy, and that they intend to continue lying about their objectives in public until the goal is so nearly achieved that admitting the truth will no longer prevent final victory.
But much of the general public, the American moderate middle, takes the speech-control movement’s public rhetoric at face value. Who can be against “reasonable restrictions” and “common-sense regulation”? Especially when pundits assure them that free speech was never intended by the framers of the Constitution to be interpreted as an individual right, but as a collective right of the people to be exercised only as members of government-controlled or sponsored corporate bodies.
But by 1990 many individual private owners of telephones and computers, though themselves still almost untouched by the new laws, are nevertheless becoming suspicious of the speech-control movement and increasingly frustrated with the NWA’s sluggish and inadequate counters to it. Awareness of the pattern of salami-slicing and strategic deception by the other side is spreading well beyond hard-core free-speech activists.
In 2001, an eminent historian named Prettyisland publishes a book entitled “Printing America”. In it, he argues that pre-Civil War Americans never placed the high value on free speech and freedom of expression asserted in popular history, and that ownership of speech instruments was actually rare in the Revolutionary period. He is awarded a Bancroft Prize; his book receives glowing reviews in academia and all media outlets and is taken up as a major propaganda cudgel by the speech-control movement.
Within 18 months dedicated free-speech activists led by an amateur scholar show that “Printing America” was a wholesale fraud. The probate records Prettyisland claims to have examined never existed. He has systematically misquoted and distorted his sources. Shamefaced academics recant their support; his Bancroft Prize is revoked.
The speech-control movement takes a major loss in its credibility, and free-speech activists a corresponding gain. Free-speech advocacy organizations more willing to confront their enemies than the NWA arise, and find increasing grassroots support – Printer Owners of America, Advocates for the First Amendment, Jews for the Preservation of Computer Ownership.
The members of these organizations know that many people advocating “reasonable restrictions” and “common-sense regulation” are not actually seeking total bans and confiscation. They’re honest dupes, believing ridiculous collective-rights theories because that’s what all the eminent people who gave Prettyisland’s book glowing reviews told them was true. They honestly believe that anyone who doesn’t support “common-sense regulation” is a dangerous, out-of-touch radical.
Free-speech advocates also know that some people speaking the same moderate-sounding language – including most of the leadership of the speech-control movement – are lying, and are using the people in the first group as cat’s paws for an agenda that can only honestly be described as the totalitarian suppression of free speech.
Increasingly, the difference between these groups becomes irrelevant. What has happened is that four decades of strategic deception by the leadership of the speech-control movement has destroyed the credibility of the honest middle. Free-speech activists, unable to read minds, have to assume defensively that everyone using the moderate-middle language of “common-sense regulation” is lying to hide a creeping totalitarian agenda.
The moderate middle, unaware of how it has been used, doesn’t get any of this. All they hear is the yelling. They don’t understand why the free-speech activists react to their reasonable language with hatred and loathing.
The preceding was a work of fiction. But I’d only have to change a dozen or so nouns and names and phrases to make it all true (some of the dates might be off a little). I bet you can break the code, and if you are “moderate” you may find it explains a few things. Have fun!
April 13, 2013
Thanks again to those of you who hit the tip jar
This is a postscript to my saga of the graphics-card disaster.
Thank you, everybody who occasionally drops money in my PayPal account. In the past it has bought test hardware for GPSD. This week I had enough in it to pay for the Radeon card, the one that actually works.
Your donations help me maintain software that serves a billion people every day. Thank you again.
April 12, 2013
The Agony, the Ecstasy, the Dual Monitors
I am composing this blog entry on the right-hand screen of a brand shiny new dual-monitor rig. That took me the best part of a week to get working. I am going to describe what I went through to get here because I think it contains some useful tips and cautions for the unwary.
I started thinking seriously about upgrading to dual monitors when A&D regular HedgeMage turned me on to the i3 tiling window manager. The thing I like about tiling window managers is that your screen is nearly all working surface; this makes me a bit unlike many of their advocates, who seem more focused on interfaces that are entirely keyboard-driven and allow one to unplug one’s mouse. The thing I like about i3 is that it seems to be the best in class for UI polish and documentation. And one of the things she told me was that i3 does multi-monitor very well.
So, Monday morning I went out and bought a twin of the Auria EQ276W flatscreen I already have. I like this display a lot; it’s bright, crisp, and high-contrast. HedgeMage had recommended a particular Radeon-7750-based card available from Newegg under the ungainly designation “VGA HIS|H775FS2G”, but I didn’t want to wait the two days for shipping so I asked the tech at my friendly local computer shop to recommend something. After googling for Linux compatibility I bought an nVidia GeForce GT640.
That was my first mistake. And my most severe. I’m going to explain how I screwed up so you won’t make the same error.
For years I’ve been listening to lots of people sing hosannahs about how much better the nVidia proprietary blobs are than their open-source competition – enough better that you shouldn’t really mind that they’re closed-source and taint your kernel. And so much easier to configure because of the nvidia-settings tool, and generally shiny.
So when the tech pushed an nVidia card at me and I had googled to find reports of Linux people using it, I thought “OK, how bad can it be?”. He didn’t have any ATI dual-head cards. I wanted instant gratification. I didn’t listen to the well-honed instincts that said “closed source – do not trust”, in part because I like to think of myself as a reasonable guy rather than an ideologue and closed-source graphics drivers are low on my harm scale. I took it.
Then I went home and descended into hell.
I’m still not certain I understand all the causal relationships among the symptoms I saw during the next three days. There is a post and comments on G+ about these events; I won’t rehash them all here, but do look at the picture.
That bar-chart-like crud on the left-hand flatscreen? For a day and a half I thought it was the result of some sort of configuration error, a mode mismatch or something. It had appeared right after I installed the GT640. I mean immediately on first powerup.
Then, after giving up on the GT640, because nothing I could do would make it do anything with the second head but echo the first, I dropped my single-head card back in. And saw the same garbage.
From the timing, the least hypothesis is that the first time the GT640 powered up, it somehow trashed my left-hand flatscreen. How, I don’t know – overvoltage on some critical pin, maybe? Everything else, including my complete inability to get the setup to enter any dual-head mode over the next 36 hours no matter how ingeniously I poked at it with xrandr, follows logically. I should have smelled a bigger rat when I noticed that xrandr wasn’t reporting a 2560×1440 mode for one of the displays – I think after the left one got trashed it was reporting invalid EDID data.
But I kept assuming I was seeing a software-level problem that, given sufficient ingenuity, I could configure my way out of. Until I dropped back to my single-head card and still saw the garbage.
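In hindsight, there was a quick way to test the trashed-EDID theory from software (a sketch, not what I actually ran at the time; xrandr’s --props option is real, but how much of the EDID block gets exposed varies by driver, and output names will differ on your hardware):

# Dump each output's properties; a damaged monitor will typically
# report no EDID block at all, or one full of garbage, which would
# explain the missing 2560x1440 mode.
xrandr --props | grep -A16 'EDID'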
Should I also mention that the much-vaunted nvidia-settings utility was completely useless? It thought I wasn’t running the nVidia drivers and refused to do a damn thing. It has since been suggested that I wasn’t in fact running the nVidia drivers, but if that’s so it’s because nVidia’s own installation package didn’t push nouveau (the open-source driver) properly out of the way. Either way, nVidia FAIL.
So, I ordered the Radeon card off Newegg (paying $20 for next-day shipping), got my monitor exchanged, got a refund on the never-to-be-sufficiently-damned GT640, and waited.
The combination of an unfried monitor and a graphics card that isn’t an insidiously destructive hell-bitch worked much better. But it still took a little hackery to get things really working. The major problem was that the combined pixel size of the two 2560×1440 displays won’t fit in X’s default 2560×2560 virtual screen size; this configuration needs a (2×2560)×1440 = 5120×1440 virtual screen.
OK, so three questions immediately occur. First, if X’s default virtual screen is going to be larger than 2560×1440, why is it not 2x that size already? It’s not like 2560×1440 displays are rare creatures any more.
Second, why doesn’t xrandr just set the virtual-screen size larger itself when it needs to? It’s not like computing a bounding box for the layout is actually difficult.
Third, if there’s some bizarre but valid reason for xrandr not to do this, why doesn’t it have an option to let you force the virtual-screen size?
But no. You have to edit your xorg.conf, or create a custom one, to up that size to the required value. Here’s what I ended up with:
# Config file for snark using a VGA HIS|H775FS2G and two Auria EQ276W
# displays.
#
# Unless the virtual screen size is increased, X cannot map both
# monitors onto screen 0.
#
# The card is dual-head.
# DFP1 goes out the card's DVI jack, DFP2 out the HDMI jack.
#
Section "Screen"
Identifier "Screen0"
Device "Card0"
SubSection "Display"
Virtual 5120 1440
EndSubSection
EndSection
Section "Monitor"
Identifier "Monitor0"
EndSection
Section "Monitor"
Identifier "Monitor1"
Option "RightOf" "Monitor0"
EndSection
Section "Device"
Identifier "Card0"
Option "Monitor-DFP2" "Monitor0"
Option "Monitor-DFP1" "Monitor1"
EndSection
That finally got things working the way I want them.
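For anyone reproducing this, a quick sanity check after restarting X (a sketch; DFP1/DFP2 are this card’s output names, yours may be called something like DVI-0 or HDMI-0):

# Both heads should report a 2560x1440 mode, and the first line of
# output should show the current virtual screen size as 5120 x 1440.
xrandr --query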
What are our lessons for today, class?
Here’s the big one: I will never again install an nVidia card unless forced at gunpoint, and if that happens I will find a way to make my assailant eat the fucking gun afterwards. I had lots better uses for 3.5 days than tearing my hair out over this.
When your instincts tell you not to trust closed source, pay attention. Even if it means you don’t get instant gratification.
While X is 10,000 percent more autoconfiguring than it used to be, it still has embarrassing gaps. The requirement that I manually adjust the virtual-screen size was stupid.
UPDATE: My friend Paula Matuszek rightly comments: “You missed a lesson: When you have a problem in a complex system, the first thing to do is check each component individually, in isolation from as much else as possible. Yes, even if they were working before.”
Now I must get back to doing real work.
April 11, 2013
National styles in hacking
Last night, in an IRC conversation with one of my regulars, we were discussing a project we’re both users of and I’m thinking about contributing to, and I found myself saying of the project lead “And he’s German. You know what that means?” In fact, my regular understood instantly, and this deflected us into a discussion of how national culture visibly affects hackers’ collaborative styles. We found that our observations matched quite closely.
Presented for your amusement: Three stereotypical hackers from three different countries, described relative to the American baseline.
The German: Methodical, good at details, prone to over-engineering things, careful about tests. Territorial: as a project lead, can get mightily offended if you propose to mess with his orderly orderliness. Good at planned architecture too, but doesn’t deal with novelty well and is easily disoriented by rapidly changing requirements. Rude when cornered. Often wants to run things; just as often it’s unwise to let him.
The Indian: Eager, cooperative, polite, verbally fluent, quick on the uptake, very willing to adopt new methods, excessively deferential to anyone perceived as an authority figure. Hard-working, but unwilling to push boundaries in code or elsewhere; often lacks the courage to think architecturally. Even very senior and capable Indian hackers can often seem like juniors because they’re constantly approval-seeking.
The Russian: A morose, wizardly loner. Capable of pulling amazing feats of algorithmic complexity and how-did-he-spot-that debugging out of nowhere. Mathematically literate. Uncommunicative and relatively poor at cooperating with others, but more from obliviousness than obnoxiousness. Has recent war stories about using equipment that has been obsolete in the West for decades.
Like most stereotypes, these should neither be taken too literally nor dismissed out of hand. It’s not difficult to spot connections to other aspects of the relevant national cultures.
A curious and interesting thing is that we were unable to identify any other national styles. Hackers from other Anglophone countries seem indistinguishable from Americans except by their typing accents. There doesn’t seem to be a characteristic French or Spanish or Italian style, or possibly it’s just that we don’t have a large enough sample to notice the patterns. From almost anywhere else outside Western Europe we certainly don’t have one.
Can anyone add another portrait to this gallery? It would be particularly interesting to me to find out what stereotypes hackers from other countries have about Americans.
April 9, 2013
What if it really was like that?
If you read any amount of history, you will discover that people of various times and places have matter-of-factly believed things that today we find incredible (in the original sense of “not credible”). I have found, however, that one of the most interesting questions one can ask is “What if it really was like that?”
That is, what if our ancestors weren’t entirely lying or fantasizing when they believed in…say…the existence of vampires? If you’re willing to ask this question with an open mind, you might discover that there is a rare genetic defect called “congenital erythropoietic porphyria” that can mimic some of the classical stigmata of vampirism. Victims’ gums may be drawn back on the teeth, making said teeth appear fanglike; they are likely to be photophobic, shunning bright light; and, being anemic, they may develop a craving for blood…
I think the book that taught me to ask “What if it really was like that?” systematically might have been Julian Jaynes’s The Origin of Consciousness in the Breakdown of the Bicameral Mind. Jaynes observed that Bronze Age literary sources take for granted the routine presence of god-voices in people’s heads. Instead of dismissing this as fantasy, he developed a theory that until around 1000 BC it really was like that – humans had a bicameral consciousness in which one chamber or operating subsystem, programmed by culture, manifested to the other as the voice of God or some dominant authority figure (“my ka is the ka of the king”). Jaynes’s ideas were long dismissed as brilliant but speculative and untestable; however, some of his predictions are now being borne out by neuroimaging techniques not available when he was writing.
A recent comment on this blog pointed out that many cultures – including our own until around the time of the Industrial Revolution – constructed many of their customs around the belief that women are nigh-uncontrollably lustful creatures whose sexuality has to be restrained by strict social controls and even the amputation of the clitoris (still routine in large parts of the Islamic world). Of course today our reflex is to dismiss this as pure fantasy with no other function than keeping half the human species in perpetual subjection. But some years ago I found myself asking “What if it really was like that?”
Let’s be explicit about the underlying assumptions here and their consequences. It used to be believed (and still is over much of the planet) that a woman in her fertile period left alone with any remotely presentable man not a close relative would probably (as my commenter put it) be banging him like a barn door in five minutes. Thus, as one consequence, the extremely high value traditionally placed on physical evidence of virginity at time of marriage.
Could it really have been like that? Could it still be like that in the Islamic world and elsewhere today? One reason I think this question demands some attention is that the costs of the customs required to restrain female sexuality under this model are quite high on many levels. At minimum you have to prevent sex mixing, which is not merely unpleasant for both men and women but requires everybody to invest lots of effort in the system of control (wives and daughters cannot travel or in extreme cases even go outside without male escort, homes have to be built with zenanas). At the extreme you find yourself mutilating the genitalia of your own daughters as they scream under the knife.
I don’t think customs that expensive can stay in force without solid reason. And it’s not sufficient to fall back on feminist cant and say the men are doing it to oppress the women, as if desire to oppress were a primary motive that doesn’t require explanation. For one thing, in such cultures women (especially older women out of their fertile period) are always key figures in the control system. It couldn’t function without them being ready to take a hard line against sexual “impurity” – often, a harder line than men do.
And, in fact, a large body of historical evidence suggests that it is possible to train most women to be uncontrollably lustful with strange men. All you have to do is limit their sexual opportunities enough, as in a system of purdah or strict gender segregation that almost totally prevents close contact with males other than close relatives.
What I’m suggesting is that the they’ll-fling-themselves-at-any-male model of female behavior believed by strict patriarchal societies is actually a self-fulfilling prophecy – that is, if your society begins to evolve towards purdah, women (who have only a limited fertile period) adapt by becoming more sexually aggressive. This in turn motivates stricter customs.
The effect is a vicious circle. At the extreme, the societies in which everyone expects women to bang strangers on five minutes’ notice find they elicit exactly that behavior with the methods they employ to suppress it. Well, except for clitoridectomy; that probably works, being your last resort when you’ve noticed that social repression is making your fertile women ever more uncontrollable when they can get at men.
We can find some support for this theory even in present time. I’ve noted before that in our modern, liberated era women seem not to be demanding as high a clearing price for sex as they should. In traditional terms, they’re being lustful. And this is in a culture that probably encourages sex mixing as much or more than any in history, driving the opportunity cost associated with not randomly humping strangers to an unprecedented low.
I’m not writing to suggest any particular thing we should do about this. What I’m encouraging is a variant of the exercise I’ve previously called “killing the Buddha”. Sometimes the consequences of supposing that our ancestors reported their experience of the world faithfully, and that their customs were rational adaptations to that experience, lead us to conclusions we find preposterous or uncomfortable. I think that the more uncomfortable we get, the more important it becomes to ask ourselves “What if it really was like that?”
April 7, 2013
Out on the tiles
I’ve been experimenting with tiling window managers recently. I tried out awesome and xmonad, and read documentation on several others including dwm and wmii. The proximate cause is that I’ve been doing a lot of surgery on large repositories recently, and when you get up to 50K commits that’s enough to create serious memory pressure on my 4G of core (don’t laugh, I tend to drive my old hardware until the bolts fall out). A smaller, lighter window manager can actually make a difference in performance.
More generally, I think the people advocating these have some good UI arguments – OK, maybe only when addressing hard-core hackers, but hey we’re users too. Ditching the overhead of frobbing window sizes and decorations in favor of getting actual work done is a kind of austerity I can get behind. My normal work layout consisted of just three big windows that nearly filled the screen anyway – terminal, Emacs and browser. Why not cut out the surrounding cruft?
I wasn’t able to settle on a tiling wm that really satisfied, though, until my friend HedgeMage pointed me at i3. After a day or so of using it I suspect I’ll be sticking with it. The differences from other tiling wms are not major but it seems just enough better designed and documented to cross a threshold for me, from interesting novelty to useful tool. Along with this change I’m ditching ChatZilla for irssi; my biggest configuration challenge in the new setup, actually, was teaching irssi how to use libnotify so I get visible IRC activity cues even when irssi itself is hidden.
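For anyone wanting to replicate that hookup, the usual recipe (a sketch; fnotify.pl is a common third-party irssi script, not something that ships with irssi, and the exact filename is an assumption) is a script that appends highlight lines to a file, plus a shell loop feeding them to libnotify:

# fnotify.pl appends "nick message" lines to ~/.irssi/fnotify;
# tail follows the file and notify-send pops up a desktop notification
# even when the terminal running irssi is on a hidden workspace.
tail -f ~/.irssi/fnotify | while read -r nick message; do
    notify-send "irssi: $nick" "$message"
done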
One side effect of i3 is that I think it increases the expected utility of a multi-monitor configuration enough to actually make me shell out for a dual-head card and another flatscreen – the documentation suggests (and HedgeMage confirms) that i3 workspace-to-display mapping works naturally and well. The auxiliary screen will be all browser, all the time, leaving the main display for editing and shell windows.
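To give a flavor of how that mapping is declared (a sketch using i3’s real workspace/output directive; the output names here are examples – use whatever xrandr reports on your box):

# in ~/.i3/config: pin workspace 1 to the main head and
# workspace 2, the all-browser one, to the auxiliary screen
workspace 1 output DVI-I-1
workspace 2 output HDMI-1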
It’s not quite a perfect fit. The i3 model of new-window layout is based on either horizontally or vertically splitting parent windows into equal parts. While this produces visually elegant layouts, for some applications I’d like it to try harder to split space so that the new application gets its preferred size rather than half the parent. In particular I want my terminal emulators and Emacs windows to be exactly 80 columns unless I explicitly resize them. I’ve proposed some rules for this on the i3 development list and may try to implement them in the i3 codebase.
I’m not quite used to the look yet. On the one hand, seeing almost all graphics banished from my screen in favor of fixed-width text still seems weirdly retro, almost as though it were a reversion to the green screens of my youth. On the other hand, we sure didn’t have graphical browsers in another window then. And the effect of the whole is … clean, is the best way I can put it. Elegant. Uncluttered. I like that.
Even old Unix hands like me take the Windows-Icons-Mouse-Pointer style of interface for granted nowadays, but i3 does fine without the I in WIMP. This makes me wonder how much of the rest of the WIMPiness of our interfaces is a mistake, an overelaboration, a local peak in design space rather than a global one.
I was willing enough to defend the CLI for expert users in The Art of Unix Programming, and I’ve put my practice where my theory is in designing tools like reposurgeon. Now I wonder if I should have been still more of an – um – iconoclast.
April 4, 2013
No, GPSD is not the battery-killer on your Android!
Today, while doing research to answer some bug mail, I learned that all versions of Android since 4.0 (Ice Cream Sandwich) have used gpsd to read the take from the onboard GPS. Sadly, gpsd is getting blamed in some quarters for excessive battery drain. But it’s not gpsd’s fault! Here is what’s actually going on.
Activating the onboard GPS in your phone eats power. Normally, Android economizes by computing a rough location from signal-tower strengths, information it gathers anyway in normal operation. To get a more precise fix, an app (such as Google Maps) can request “Detailed Location”. This is what is happening when the GPS icon appears on your status bar.
Requesting “Detailed Location” wakes up gpsd, causing it to power up the on-board GPS and begin interpreting the NMEA data stream it ships. Somewhere in Android’s Java code (I don’t know the details), the reports from gpsd are captured and made available to the Java API that apps can see. Normally this mode is a little expensive, mainly because of the power cost of running the GPS hardware; this is why Android doesn’t keep the GPS powered up all the time. Normally the gpsd demon itself is very economical; we’ve measured its processor utilization on low-power ARM chips and it’s below the noise floor of the process monitor. As it should be; the data rate from a GPS isn’t very high, so there’s simply no reason for gpsd to spend a lot of cycles.
Nevertheless, instances of excessive battery drain have been reported with the system monitor fingering gpsd as the culprit, especially on the Samsung Galaxy SIII. In some cases this happens when the onboard GPS is powered off. In every case I’ve found through Googling for “Android gpsd”, the actual bad guy is an app that is both requesting Detailed Location and running in background; if you deinstall the app, the battery drain goes away. (On the Galaxy SIII, the ‘app’ may actually be the “Remote Location Service” in the vendor firmware; you can’t remove it, but you can disable it through Settings.)
I suspect that there’s something else going on here. The fact that gpsd is reported to be processor-hogging when the GPS is powered off suggests that it’s spinning on its main select(2) call. We’ve occasionally seen behavior like this before, and it has always been down to some bug or misconfiguration in the Linux kernel’s serial I/O layer (gpsd exercises that layer in some unusual ways). This is consistent with the relative rareness of the bug; likely it’s only happening on a couple of specific phone models. If every background app using the GPS caused this problem, I’d have had a mob of pitchfork-wielding peasants at my castle door long since…
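If you want to check whether your phone’s gpsd is in this spin state, one way to look (a sketch; it assumes you can get a root shell, e.g. via adb, and that strace is available on the device, which it often isn’t without installing it):

# Attach to the running gpsd and watch its system calls; a tight,
# endless stream of select() returns with no useful intervening
# reads is the signature of the spin described above.
strace -p "$(pidof gpsd)" -e trace=select,read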
TL;DR: It’s not gpsd’s fault – find the buggy app and remove it.
All this having been said, why, yes I do think it’s seriously cool that gpsd is running in all newer Android phones. My code is ubiquitous and inescapable, bwahahahaha! But you knew that already.
April 2, 2013
Natural rights and wrongs?
One of my commenters recently speculated in an accusing tone that I might be a natural-rights libertarian. He was wrong, but explaining why is a good excuse for writing an essay I’ve been tooling up to do for a long time. For those of you who aren’t libertarians, this is not a parochial internal dispute – in fact, it cuts straight to the heart of some long-standing controversies about consequentialism versus deontic ethics. And if you don’t know what those terms mean, you’ll have a pretty good idea by the time you’re done reading.
There are two philosophical camps in modern libertarianism. What distinguishes them is how they ground the central axiom of libertarianism, the so-called “Non-Aggression Principle” or NAP. One of several equivalent formulations of NAP is: “Initiation of force is always wrong.” I’m not going to attempt to explain that axiom here or discuss various disputes over the NAP’s application; for this discussion it’s enough to note that libertarians take the NAP as a given unanimously enough to make it definitional. What separates the two camps I’m going to talk about is how they justify the NAP.
“Natural Rights” libertarians ground the NAP in some a priori belief about religion or natural law from which they believe they can derive it. Often they consider the “inalienable rights” language in the U.S.’s Declaration of Independence, abstractly connected to the clockmaker-God of the Deists, a model for their thinking.
“Utilitarians” justify the NAP by its consequences, usually the prevention of avoidable harm and pain and (at the extreme) megadeaths. Their starting position is at bottom the same as Sam Harris’s in The Moral Landscape; ethics exists to guide us to places in the moral landscape where total suffering is minimized, and ethical principles are justified post facto by their success at doing so. Their claim is that NAP is the greatest minimizer.
The philosophically literate will recognize this as a modern and specialized version of the dispute between deontic ethics and consequentialism. If you know the history of that one, you’ll be expecting all the accusations that fly back and forth. The utilitarians slap at the natural-rights people for handwaving and making circular arguments that ultimately reduce to “I believe it because $AUTHORITY told me so” or “I believe it because ya gotta believe in something“. The natural-rights people slap back by acidulously pointing out that their opponents are easy prey for utility monsters, or should (according to their own principles) be willing to sacrifice a single innocent child to bring about their perfected world.
My position is that both sides of this debate are badly screwed up, in different ways. Basically, all the accusations they’re flinging at each other are correct and (within the terms of their traditional debates and assumptions) unanswerable. We can get somewhere better, though, by using their objections to repair each other. Here’s what I think each side has to give up…
The natural-rightsers have to give up their hunger for a-priori moral certainty. There’s just no bottom to that; it’s contingency all the way down. The utilitarians are right that every act is an ethical experiment – you don’t know “right” or “wrong” until the results come in, and sometimes the experiment takes a very long time to run. The parallel with epistemology, in which all non-consequentialist theories of truth collapse into vacuity or circularity, is exact.
The utilitarians, on the other hand, have to give up on their situationalism and their rejection of immutable rules as voodoo or hokum. What they’re missing is how the effects of payoff asymmetry, forecasting uncertainty, and decision costs change the logic of utility calculations. When the bad outcomes of an ethical decision can be on the scale of genocide, or even the torturing to death of a single innocent child, it is proper and necessary to have absolute rules to prevent these consequences – rules that we treat as if they were natural laws or immutable axioms or even (bletch!) God-given commandments.
Let’s take as an example the No Torturing Innocent Children To Death rule. (I choose this, of course, in reference to a famous critique of Benthamite utilitarianism.) Suppose someone were to say to me “Let A be the event of torturing an innocent child to death today. Let B be the condition that the world will be a paradise of bliss tomorrow. I propose to violate the NTICTD rule by performing A in order to bring about B”.
My response would be “You cannot possibly have enough knowledge about the conditional probability P(B|A) to justify this choice.” In the presence of epistemic uncertainty, absolute rules to bound losses are a rational strategy. A different way to express this is within a Kripke-style possible-futures model: the rationally-expected consequences of allowing violations of the NTICTD rule are so bad over so many possible worlds that the probability of landing in a possible future where the violation led to an actual gain in utility is negligible.
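One way to write the loss-bounding logic down explicitly (a sketch; the notation and utility terms are mine, supplied for illustration):

\[
E[U(A)] \;=\; P(B \mid A)\,U_{\mathrm{bliss}} \;+\; \bigl(1 - P(B \mid A)\bigr)\,U_{\mathrm{atrocity}}
\]

With U_atrocity catastrophically negative, and with any honest estimate of P(B|A) carrying error bars far wider than the sliver that would be needed to make E[U(A)] come out positive, no case-by-case calculation can ever be trusted to clear the bar; a categorical rule forbidding A dominates.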
My position is that the NAP is a necessary loss-bounding rule, like the NTICTD rule. Perhaps this will become clearer if we perform a Kantian universalization on it, recasting it as “You shall not construct a society in which the initiation of force is normal.” I hold that, after the Holocaust and the Gulag, you cannot possibly have enough certainty about good results from violating this rule to justify any policy other than treating the NAP as absolute. The experiment has been run already, it is all of human history, and the bodies burned at Bergen-Belsen and buried in the Katyn Wood are our answer.
So I don’t fit neatly in either camp, nor want to. On a purely ontological level I’m a utilitarian, because being anything else is incoherent and doomed. But I respect and use natural-rights language, because when that camp objects that the goals of ethics are best met with absolute rules against certain kinds of harmful behavior they’re right. There are too many monsters in the world, of utility and every other kind, for it to be otherwise.
April 1, 2013
AGW panic ending with a whimper
This is how the AGW panic ends: not with a bang, but with a whimper.
The Economist, which (despite a recent decline) remains probably the best news magazine in the English language, now admits that (a) global average temperature has been flat for 15 years even as CO2 levels have been rising rapidly, (b) surface temperatures are at the lowest edge of the range predicted by IPCC climate models, (c) on current trends, they will soon fall clean outside and below the model predictions, (d) estimates of climate sensitivity need revising downwards, and (e) something, probably multiple things, is badly wrong with AGW climate models.
Do I get to say “I told you so!” yet?
The wheels are falling off the bandwagon. The Economist has so much prestige in the journalistic establishment that it’s going to become difficult now for the mainstream media to continue averting their eyes from the evidence. Honest AGW advocates have been the victims of a massive error cascade enlisted in aid of a vast and vicious series of political and financial scams; it’s time for them to wake up and realize they’ve been had, taken, swindled, conned, and used.
I can’t but think the record cold weather in England has got something to do with this. Only a few years ago AGW panicmongers were screaming that British children would never see another snowfall – now they’re struggling with nastier winter weather than has been seen in a century. Perhaps the big chill woke somebody at The Economist up?
And if you think I’m gloating now, wait until GAT actually falls far enough below the low end of IPCC projections that The Economist has to admit that. I plan to be unseemly and insufferable about it, oh yes I do.
March 28, 2013
What does crowdfunding replace or displace?
In How crowdfunding and the JOBS Act will shape open source companies, Fred Trotter proposes that crowdfunding a la Kickstarter and IndieGoGo is going to displace venture capitalists as the normal engine of funding for open-source tech startups, and that this development will be a tremendous enabler. Trotter paints a rosy picture of idealistic geeks enabled to do fully open-source projects because they’ll no longer feel as pressed to offer a lucrative early exit to VCs on the promise of rent capture from proprietary technology.
Some of the early evidence from crowdfunding successes does seem to point at this kind of outcome, especially in areas like 3D printing and consumer electronics with a lot of geek buy-in. And I’d love to believe all of Trotter’s optimism. But there’s a nagging problem of scale here that makes me think the actual consequences will be more mixed and messy than he suggests.
In general, VCs don’t want to talk to you at all unless they can see a good case for ploughing in at least $2 million, and they don’t get really interested below a scale of about $15M. This is because the amount of time required for them to babysit an investment (sit on the company’s board, assist job searches, etc.) doesn’t scale down for smaller investments – small plays are just as much work for much less money. This is why there’s a second class of investors, often called “angels”, who trade early financing on the $100K order of magnitude for equity. The normal trajectory of a startup goes from friends & family money through angels up to VCs. Each successive stage in this pipeline is generally placing a larger bet and accordingly has less risk tolerance and a higher time discount than the previous; VCs, in particular, will be looking for a fast cash-out via initial public offering.
The problem is this: it’s quite rare for crowdfunding to raise money even equivalent to the low-end threshold of a VC, let alone the volume they lay down when they’re willing to bet heavily. Unless crowdfunding becomes an order of magnitude more effective than it is now (which seems to me possible but unlikely) the financing source it will displace isn’t VCs but angels.
On the face of things, this would seem to sink Trotter’s optimism – if VCs don’t see any competition for investments in their preferred range there’s no obvious reason that VC pressure for proprietary rent-collection should decrease at all. But I think there will be significant second-order effects of the kind Trotter envisions via another route. That’s because crowdfunders are unlike angels in one very important respect: they’re not buying equity. Typically they’re contributing to buy an option on a product that can’t be built without startup capital. There’s no pressure on the company to produce a return to “investors” beyond that option, and in particular nobody pushing for a fast cash-out.
What this does is improve the attractiveness of a growth path that doesn’t pass through an IPO or the VCs at all. I think what we’ll see is a lot more startups crowdfunding to angel levels of capital investment, then avoiding the next round of financing in favor of more crowdfunders and endogenous growth. But think about this: how will the VCs adapt to this change in incentives?
They’ll still want to turn their ability to nurse early-stage companies into cash, but their power to set the terms of that trade will be weakened precisely to the extent that crowdfunding makes the low-and-slow, no-IPO route more attractive. In another way, though, crowdfunders make a VC’s job easier. VCs can monitor the results of crowdfunding to measure the size and estimate the stickiness of the startup’s market, then see how effectively the startup executes on its promises. (You can bet that the smarter VCs are already doing this.)
Now look at the sum of these trends. If a startup has a successful crowdfunder, its bargaining power with the VCs increases in two ways. First, it’s going to be less desperate for capital than a company that can’t run out and do another crowdfunder for the next product. Second, the VC’s uncertainty about its ability to build and sell will be reduced. These changes will both increase the startup’s ability to bargain for doing things its way and reduce the VC’s pressure for an early IPO.
At the extreme, we might end up with a new normal in which VCs compete with each other to court startups that have done successful crowdfunders (“Hey! Think about what you could do with fifteen megabucks and call us back!”), neatly inverting the present situation in which startups have to compete for the attention of VCs. That, of course, would be a situation in which open source wins huge.