Eric S. Raymond's Blog, page 13
February 17, 2018
System engineering for dummies
I’ve been getting a lot of suggestions about the brand new UPSide project recently. One of them nudged me into bringing a piece of implicit knowledge to the surface of my mind. Having made it conscious, I can now share it.
I’ve said before that, on the unusual occasions I get to do it, I greatly enjoy whole-systems engineering – problems where hardware and software design inform each other and the whole is situated in an economic and human-factors context that really matters.
I don’t kid myself that I’m among the best at this, not in the way that I know I’m (say) an A-list systems programmer or exceptionally good at a couple other specific things like DSLs. But one of the advantages of having been around the track a lot of times is that you see a lot of failures, and a lot of successes, and after a while your brain starts to extract patterns. You begin to know, without actually knowing that you know until a challenge elicits that knowledge.
Here is a thing I know: A lot of whole-systems design has a serious drunk-under-the-streetlamp problem in its cost and complexity estimations. Smart system engineers counter-bias against this, and I’m going to tell you at least one important way to do that.
You know the joke. A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies no, he lost them near his car three blocks away. The policeman asks why he is searching here, and the drunk replies, “This is where the light is.”
When we’re trying to estimate costs and time-to-completion for a whole system, we have a tendency to over-focus on the costs we can easily list and pin down, as opposed to the ones that are more difficult to estimate.
In general, hardware costs – the BOM (Bill of Materials) – are easy to estimate. If your estimate is off, time is on your side; parts will generally be cheaper six months from now than they are today. Software development costs and the time value of rapid completion are much harder to estimate. Delay is not your ally there; software development does not necessarily get cheaper inside your planning horizon, and being later to complete is a bad thing.
The streetlight effect means, therefore, that when doing cost and development-complexity analysis for a whole system, and trying to optimize out costs, we’re going to have a strong tendency to chisel away at BOM while neglecting attempts to lower the software-dev costs. We’re likely to end up writing a procurement strategy that trades small gains in the former for large losses in the latter, simply because we’re not allocating our attention as we should.
What makes this worse is that the zero-sum conflict is not just for the attention of the planner’s brain. The easy- and hard-to-estimate costs can affect each other. Going cheap on the hardware often increases the software-development friction and lengthens the product timeline.
In the specific case that nudged me into consciousness, when I had to choose a main controller for the UPSide I reached for an SBC running Unix on board rather than an Arduino-class microcontroller that requires custom firmware all the way down. Various EE types complained that my choice was overkill. I knew in my gut that they were wrong, but I had to think about it to realize why.
The smart whole-systems engineer counter-biases against the streetlight effect. One of the ways is to plan on the assumption that software development costs you have no clear idea how to estimate are likely to blow up on you horribly, and that if there are hedges you can buy against that by taking a hit somewhere else (in the BOM, or even your raw revenues) it’s probably smart to go for at least some of them.
Twenty years ago I was the first to observe that making the software of your whole system open-source is an effective way to spread your development costs and mitigate the effects of your own experts moving on to other things. That’s one kind of hedge against large risks that are difficult to estimate – you’re trading away the expected benefits of collecting rent on the software’s secrecy, but (with rare exceptions) these were doubtful to begin with. Way easy to overestimate.
Using a Unix engine instead of a no-OS microcontroller or PIC in your embedded deployment is another long bet. You’ll pay for it up front, where the cost is visible; the benefits – reduced software-development costs and risks – lie in the indefinite future.
A smart systems engineer knows that he should counterbias against the streetlight effect by making some of those long bets anyway. Sometimes this will succeed, sometimes it will fail. The only thing you know for sure is that the “safe” strategy of never long-betting at all is suboptimal, exactly because the streetlight effect messes with your judgment.
February 15, 2018
Announcing: The UPSide project
A week ago I argued that UPSes suck and need to be disrupted. The response to that post was astonishing. Apparently I tapped into a deep vein of private discontents – people who had been frustrated and pissed off with UPS gear for years or decades but never quite realized it wasn’t only their problem.
Many people expressed an active desire to contribute to a kickstarter aimed at this problem. I got one offer from someone actually willing to hire an engineer to work on it. Intelligent feature suggestions – often framed as gripes about the deficiencies of what you can buy out there – came flooding in.
Perhaps most remarkably, the outlines of a coherent design began to emerge. We identified a battery technology we could buy COTS that would improve on the performance and lifetime of lead-acid but without the explosion risk of lithium-ion. The way that safety and regulatory requirements would require a partition between low- and high-power electronics became clearer. A feature list solidified. We took in good ideas and rejected some not-so-good ones.
Therefore, even though we don’t yet have a lead hardware engineer, I have initiated Project UPSide. There’s no code or schematics yet; we’re still developing requirements and architecture. By “architecture” I mean, for example, what specific kinds of information the hardware subsystems need to exchange.
All interested parties are welcome to browse the wiki and apply for write access. Roles we are especially looking for:
* Lead hardware engineer – needs to be able to do overall design and systems integration.
* Someone who knows how to program USB endpoints. (It will land on me to learn this if we can’t find someone with experience.)
* Someone who understands battery-state modeling. (Again, I’ll learn this if nobody steps up.)
My own job is, basically, product manager – keeper of the requirements list and recruiter of talent.
February 12, 2018
“The Lost Art of C Structure Packing” now covers Go and Rust
I have issued a new version, 1.19, of The Lost Art of C Structure Packing.
The document now covers Go and Rust as well as C, reflecting their increasing prominence as systems-programming languages competing with C and being deployed in contexts where structure-size optimizations can be of some importance.
TL;DR: C alignment and packing rules map over to Go in the most obvious way except for one quirk near zero-length structure members. Rust can be directed to act in a C-like way but by default all bets are off.
February 8, 2018
UPSes suck and need to be disrupted
Warning: this is a rant.
I use a UPS (Uninterruptible Power Supply) to protect the Great Beast of Malvern from power outages and lightning strikes. Every once in a while I have to buy a replacement UPS and am reminded of how horribly this entire product category sucks. Consumer-grade UPSes suck, SOHO UPSes suck, and I am reliably informed by my friends who run datacenters that no, you cannot ascend into a blissful upland of winnitude by shelling out for expensive “enterprise-grade” UPSes – they all suck too.
The lossage is extra annoying because designing a UPS that doesn’t suck would be neither difficult nor expensive. These are not complicated devices – they’re way simpler than, say, printers or scanners. This whole category begs to be disrupted by an open-hardware design that could be assembled cheaply in a makerspace from off-the-shelf components, an Arduino-class microcontroller, and a PROM.
How badly do UPSes suck? Let me count the ways…
I know people who hook up car batteries to salvaged UPS electronics and get 10 years of life out of the rig. UPSes could be designed with the kind of deep-cycle gel batteries used for marine applications like trolling motors to last even longer and be even more reliable. But noooooo. Buy a UPS and the vendor (even one of the relatively good ones) will sell you the crappiest, lowest-cost power cell that might, with a squint and a following tailwind, possibly achieve the dwell time printed on the packaging; it will in fact be a piece of shite with so little deep-cycle endurance that it will usually crap out in less than three years.
(This isn’t just needlessly expensive, it’s bad for the environment. Dead batteries are nasty things to put in the waste stream. Doing that as seldom as possible is real care for the ecosphere, not mere idiot virtue signaling like, say, ‘recycling’ paper and plastic.)
Yeah, and that dwell time will always be at least what Mark Twain called a “stretcher”, and from less scrupulous vendors an outright lie. Vendors commonly measure it with tiny monitors and half the other peripherals asleep; you’ll be lucky to get 50% of the rated value on a system in real use.
Next: automobiles nowadays are equipped with intelligent battery-current sensors that measure not just output voltage but discharge current and battery temperature. This is enough to do accurate state-of-charge and state-of-battery calculations, so you (a) know how much dwell time to expect during an outage, and (b) get warning that your battery has entered the bad end of its bathtub curve well before it craps out.
But are these intelligent current sensors deployed in UPSes? Why, no! That might add a couple of cents to the BOM and of course we can’t have that. Far better to inflict unexpected battery death on the customers who, you know, were paying money exactly so that sort of thing wouldn’t happen to them.
Now we get to what actually triggered today’s rant: the terrible user experience produced by the vendors’ grim determination to pump out least-possible-cost designs that ignore what users actually need. I was awakened from a sound sleep at oh dark thirty yesterday morning by the alarm on my UPS. Upon examining it, I was greeted by a flashing idiot light.
What’s missing from this picture? A text error-message display, like you’d see even on a rather low-end printer or scanner, to tell me “Scheduled periodic dwell test failed – your battery is dead”. To get a clue that this is what that particular alarm tone and flash pattern meant I would have had to lay hands on the device documentation, which of course you can always do instantly when awakened to deal with an emergency alarm in the middle of the fscking night.
It gets better. The tech support drones at my vendor’s call center couldn’t tell me what that alarm behavior meant either. Eventually they issued an RMA for a new battery anyway, but I couldn’t reconstruct what had actually gone down until discussing the whole sorry mess with AD&D regular John D. Bell the next day – he runs a university data center and has seen incidents like this before much more often than I have.
(It took some struggle to get the vendor to issue an RMA, because although I bought the device just three months ago a serial number lookup reveals that it was manufactured five years ago – the battery spent almost all of its service life sitting in a succession of warehouses. To be fair, this part is not the vendor’s fault – it’s the kind of risk you take when you buy from an electronics retailer that likes to stack other people’s overstock goods in the front of the store with an eye-catching discount on the pricetag.)
Back to what is absolutely the vendor’s fault: all but the lowest-end UPSes have monitoring ports (usually USB these days) that, in theory, should let you get useful status information from the device – perhaps even hook it up to a monitoring daemon on your computer that can do a clean shutdown when the device has been on battery for more than a transient brownout’s worth of duration.
In theory. In practice, this sort of thing is such a pain in the ass to set up that despite having contributed to two different UPS-monitoring daemons (NUT and apcupsd) I gave up on this capability years ago. The problem divides into at least two parts:
1. The wire protocols these things speak are generally undocumented. The vendors think it’s sufficient to provide a Windows binary blob that gives you a fixed-function monitor GUI. Which is almost invariably so badly designed and poorly documented that it might look kinda purty but you can get little actual use out of it.
2. That first problem might be surmountable if you could watch the port yourself and reverse-engineer the protocol from the datagrams coming up the wire. That would be feasible if the protocols had been designed by anyone with a clue about how to do application protocols right – they’d be in some self-describing metaprotocol like JSON or XML or at least NMEA0183-like text sentences.
But no. This never happens. UPS protocols are invariably cryptic, half-assed crap designed by an EE in a hurry who thinks every “unnecessary” byte transmission is a sin against nature. Fields have no names. Numbers, if they’re not binary-encoded in unspecified endianness, have no units. Opaque status codes abound. The protocol grammar is full of defect-attracting corner cases. The device never IDs itself or provides a protocol version. Discoverability: what’s that?
If this sounds like the same sort of mess that afflicts GPS reporting protocols, that’s because it is. Actually it’s worse here, because there’s no equivalent of NMEA0183 to even begin to address the discoverability problem. Relentless vendor cheapness is at the root of both messes – given a choice between spending NRE on a decent design and shipping inadequate crap that piles hidden long-term costs on customers, UPS vendors unfailingly opt for the latter.
This whole product category begs to be disrupted by a maker design – because it’s possible, and otherwise the incentives on the vendors won’t change. Without disruption, the whole category could stay trapped in a nasty crab-bucket equilibrium that never rewards the risk of spending a few more pennies than one’s competitors to field a decent design.
Let’s invert the above gripe list to specify what a decent design would look like:
0. Open hardware design, open-source firmware design, open-source device-monitor code.
1. Designed to be used with deep-cycle marine gel batteries that will last next to forever, for minimum long-term cost and least environmental impact.
2. Uses EV-style intelligent battery-current sensors to enable accurate projection of battery performance.
3. Has a textual alert status display in addition to alarms.
4. Has a USB monitoring port that speaks a decently-designed and fully documented wire protocol, probably JSON datagrams a la GPSD.
I could write the firmware, but I don’t have the chops to design the hardware. Anybody game?
February 4, 2018
How “open source” was coined
Yesterday was the 20th anniversary of the promulgation of the term “open source”. Three days before that, Christine Peterson published How I coined the term ‘open source’, which she had apparently written in 2006 but been sitting on since.
This is my addition to the history; I tried to leave an earlier version as a comment on her post but it disappeared into a moderation queue and hasn’t come out.
The most important point: Chris’s report accurately matches my recollection of events and I fully endorse it. There are, however, a few points of historical interest that can be added.
First, a point of fact. Chris doesn’t remember for sure whether it was me or Todd Anderson that first brought up the need for a new umbrella term to replace “free software”.
It was me, and it was me for a reason. In those very early days I was ahead of the rest of our leadership in understanding a few critical things about our new circumstances. Others eventually did catch up, but at the time of this meeting I think I was the only person there who had fully grasped that to take our folkways mainstream we needed a full-fledged marketing and rebranding campaign.
That insight was driving my thinking, and led me directly to the conclusion that we needed a better label for the brand.
It was a little tricky for me to talk about this insight explicitly, and I mostly didn’t. Because I also knew that in the minds of the people in that room, and most other hackers, “marketing and rebranding” was a negative idea heavily tied to fakery and insincerity. I couldn’t blame them for this; I’d had to struggle with the concept myself before making some peace with it. But…I was unable to evade the historical moment, and saw what was needed.
Therefore, I had the dual challenge of trying to lead a rebranding campaign while easing hackers into comfort with the idea of rebranding. That meant not pushing too hard or too fast – choosing my interventions carefully and allowing other people to catch up with the implications in their own ways.
Chris couldn’t read my mind, so she had no way to know that I spotted “open source” as the winner we were looking for the first or second time the phrase was mentioned – well before I explicitly advocated for it myself. It seemed perfect to me – ideologically neutral, easy to parse, and with just enough connection to an already respectable term of art (that is, intelligence-community use of “open source”) to be useful.
There was also something right about the use of “open” that I couldn’t pin down exactly at the time. I knew this was an adjective with a lot of positive loading attached in the hacker culture, but I was not then clear on the exact psychology. Now I think I understand that better – it evokes the Big Five trait “openness to experience”, which we value a lot. That particular term didn’t exist yet in 1998, but the connotative web that would later give rise to it did.
In accordance with my strategy of wu wei I hung back a little and let other people there gravitate to “open source”, rather than pushing for it as hard as I could have right out of the gate. I would have fought for it over the alternatives if I’d had to, but I didn’t…we all got there and that was a far healthier outcome than if I had tried to dominate the discussion.
I’m telling this story this way because, now that Chris has admitted she was being a bit stealthy about getting the term adopted, she deserves to know that I was being a bit stealthy myself – for reasons that seemed good at the time and still do in retrospect.
The rest of the story is more public. A few weeks later what was in effect a war council of the hacker community’s chieftains, convened by Tim O’Reilly, voted to endorse the new term and in effect gave me a mandate to go out and evangelize it. Which of course I did; the rest is both metaphorically and literally history.
I think Chris is fully entitled to her happy twinge. From the perspective of twenty years later, “open source” was a smashing success, fully justifying both her and my hopes for what we could do with this rebranding.
Without Chris, I would have had to come up with something as good as “open source” myself in order to get the mission done. Maybe I would have, maybe not. I’m glad we didn’t have to roll those dice; I’m glad Chris nailed it for all of us.
Ever since I was first reminded that it was her coinage I’ve been careful to credit Chris for it. I was impressed with her for the invention, and that developed into a friendship that we both value.
February 3, 2018
The Roche motel
One of the staples of SF art is images of alien worlds with satellites or planetary twins hanging low and huge in the daylight sky. This blog post brings the trope home by simulating what the Earth’s Moon would look like if it orbited the Earth at the distance of the International Space Station.
The author correctly notes that a Moon that close would play hell with the Earth’s tides. I can’t be the only SF fan who looks at images like that and thinks “But what about Roche’s limit?”…in fact I know I’m not, because Instapundit linked to it with the line “Calling Mr. Roche! Mr. Roche to the white courtesy phone!”
Roche’s limit is a constraint on how close a primary and satellite can be before the satellite is actually torn apart by tidal forces. The rigid-body version, applying to planets and moons but not rubble piles like comets, is
d = 1.26 * R1 * (d1 / d2)**(1/3)
where R1 is the radius of the primary (larger) body, d1 is its density, and d2 is the secondary’s density (derivation at Wikipedia).
And, in fact, the 254-mile orbit of the ISS is well inside the Roche limit for the Earth-moon system, which is 5932.5 miles.
The question for today is: just how large can your satellite loom in the sky before either your viewpoint planet or the satellite goes kablooie? To put it more precisely, what is the maximum angle a satellite can reasonably subtend?
We need to constrain the conditions a bit more, actually. By holding the mass of the satellite constant but decreasing its density we can push up its angular diameter. It’s not easy, mind you; because radius grows only as the cube root of volume (and volume, at constant mass, varies inversely with density), it takes roughly an eightfold fall in density to double the diameter.
But the artistically interesting cases are terrestroid rocky worlds or moons orbiting each other, and it turns out their densities don’t vary by a lot. Earth’s is 5.51 g/cm**3, the Moon’s is 3.34 g/cm**3, and Mars’s is 3.93 g/cm**3. For comparison, the Sun’s density is 1.41 and Jupiter’s is 1.33.
Let’s start by setting the primary and secondary densities equal, then. This is convenient, because in the formula above we actually minimize d (and thus maximize subtended angle) when the density ratio is 1 (if it goes below 1 the primary and secondary switch roles). Our simplest case for minimizing d – two worlds of equal mass/density/radius – is actually pretty plausible.
OK, computation time. By hypothesis our worlds have the same radius R. The minimum distance from a point on the surface of the primary to a point on the surface of the secondary has to be 1.26R. We must add 0.5R to get to the secondary’s center, because the visual angle subtended by the secondary will be that of a disk passing through the center of the secondary at right angles to the line of vision.
The implied triangle has one corner at the surface of the primary world, another corner at the center of the secondary, and a third on the secondary’s limb, subtending the maximum angle: x = 1.76R, y = 0.5R. Elementary trigonometry gives us the following formula:
a = 2 * arctan(0.5 * R / (1.76 * R)) = 2 * 0.277 radians ≈ 31.7 degrees
That’s actually pretty dramatic – 60 times the size of a full moon. More importantly the human visual field is about 150 degrees; our loomingest possible satellite could take up a bit more than a fifth of it. Looks like those SF illos are fairly plausible after all.
Oh, and why did I title this “The Roche motel”? Because of the destructive effects when a hapless celestial body wanders inside the limit. “Satellites check in…but they don’t check out.”
(An arithmetic mistake in the original posting has been corrected; good on Edward Cree for spotting it.)
February 2, 2018
Rethinking housecat ethology
There’s a common folk model of how housecats relate to humans that says their relationships with us recruit instincts originally evolved for maternal bonding – that is, your cat relates to you as though you’re its mother or (sometimes) its kitten that needs protecting.
I don’t think this account is entirely wrong; it is a fact that even adult cats knead humans, a behavior believed to stimulate milk production in a nursing mother cat. However, through long observation of cats closely bonded to humans I think the maternalization theory is insufficient. There’s something else going on, and I think I know what it is.
Disclaimer: I’m not a trained animal ethologist, just a careful observer who is fond of cats and has learned to speak cat kinesics pretty well. (The reality check for this is that I have a history of making friends with individual cats easily and quickly even when they’re shy around humans.)
I follow the (sparse) literature on cat genetics and ethology. And a thing that has come to light recently is that Rudyard Kipling (“The Cat Who Walks By Himself”) was wrong. Housecats and their near relatives in the wild (Felis lybica and Felis silvestris) are not solitary animals. Some of the larger cats are, outside of mating; but the felids related to housecats (which still interbreed freely with their domesticated kin where the ranges overlap) tend to live in smallish groups composed of a handful of mature females and their offspring.
Being adapted for living in groups makes sense. Small cats in the wild are dangerously exposed to predation, not least by larger cats; the usual benefits of having peers on alert while you’re sleeping apply.
It follows that cats have well-developed social instincts about each other. They don’t merely tolerate each other’s presence if a human happens to be keeping more than one; they form actual peer bonds. This is easy to miss because much cat peer signalling is not obvious to humans; in particular, cats don’t have to be in eye-to-eye contact to be interacting.
I judge the maternalization theory of how cats bond to humans is inadequate for several reasons. One is that cats show marked and differing sex preferences for what kinds of humans they like to bond to, and some (like our last cat, Sugar) clearly prefer males. If maternal bonding were the entire instinct armature of their relationships to humans, this would be difficult to explain – indeed, it would be hard to see how such a preference could evolve at all.
Another is that maternalization theory doesn’t seem adequate to explain how cats often bond to every human in a household, even the ones that don’t feed them! At our house, my wife Cathy is the food-giver. Both our gone-but-not-forgotten Sugar and our current cat Zola have shown a clear grasp on this, but that knowledge never stopped either of them from behaving as though their day wasn’t complete without some quality Eric time.
All this becomes much easier to account for if the instinct ground of housecat behavior towards humans is not necessarily maternal bonding but peer bonding. In this model, I’m not Zola’s mother but a peer cat or senior tom that he trusts and wants to maintain good relations with.
I’m not certain, because the differences are very subtle, but I think “senior tom” status elicits slightly different behaviors than “mommy”/food-giver status – slightly more placatory and submissive. I noticed this more with Sugar (female) than with Zola (male), which is a little odd because one consistent thing about cat social hierarchies is that they tend to be female-dominant – all things being equal one would have expected Sugar to defer less to a near-peer male than Zola does.
On the other hand, my observations fit the fact that male cats aren’t nurturers and are actually rather dangerous to kittens – in the wild they not infrequently eat the young of rivals if they can get away with it.
It is well known that some cat breeds are more consistently human-friendly than others, and generally believed among cat fanciers that the friendliest breeds are old, “natural” landraces like Maine Coons, Norwegian Forest Cats, and Turkish Vans that haven’t been show-bred for appearance traits.
I suspect that the underlying variable here is precisely the propensity to form social peer bonds with other cats. It could hardly be anything specific to humans; at only 10KYa or so, we haven’t been part of the cats’ evolutionary story for long enough. On the other hand, it’s easy to see why that peer-bonding tendency might have been strong in the ancestral environment and yet show some tendency to decay under the artificial circumstances of living with humans.
I’ll finish up by noting that living with Maine Coons for 25 years probably did more to push my thinking in this direction than anything else could have. It is impossible not to notice how social, outgoing, and just plain nice Coons are as a breed – but if your brain works like mine you can’t stop there; you also can’t avoid noticing that cats are not little humans in fur suits, and you go looking for an explanation of their human-compatibility that makes sense in cat evolutionary terms.
Co-option of cat peer bonding is my proposal. Alas, I have been unable to think of a way to test this theory. Maybe my readers can come up with an interesting retrodiction?
January 23, 2018
Three times is friendly action
Today, for the third time in the last year, I got email from a new SF author that went more or less like:
“Hi, I’d like to send you a copy of my first novel because [thing you wrote] really inspired me.”
All the novels so far are libertarian SF with rivets on – the good stuff. Amusingly, I don’t think any of these authors knew in advance that I’m a judge for the Prometheus Awards.
It’s really gratifying that I’m making this kind of difference.
January 18, 2018
Sorry, Ansari: a praxeologist looks at the latest scandalette
This is an expanded version of a comment I left on Megan McArdle’s post
Listen to the ‘Bad Feminists’ in which she muses on the “Grace”-vs.-Aziz-Ansari scandalette and wonders why younger women report feeling so powerless and used.
It’s not complicated, Megan. You actually got most of it already, but I don’t think you quite grasp how comprehensive the trap is yet. Younger women feel powerless because they live in a dating environment where sexual license has gone from an option to a minimum bid.
I’m not speaking as a prude or moralist here, but as a…well, the technical term is ‘praxeologist’ but few people know it so I’ll settle for “micro-economist”. The leading edge of the sexual revolution gave women options they didn’t have before; its completion has taken away many of the choices they used to have by trapping them in a sexual-competition race to the bottom.
“Grace” behaved as she did because she doesn’t have a realistic option to hold out for romance before sex; women who do that put themselves at high risk of not getting second dates – there are too many others willing to play by the new rules. So she has to do sex instead and hope lightning strikes.
Couple this with the fact that as women get on average more educated there are fewer hypergamically-eligible males at every SES, and you have the jaws of a vicious vise. It’s especially hard on high-status women and low-status men. The main beneficiaries are high-status men, who often behave like entitled assholes because the new rules tilt the playing field in their favor even more than the old ones did.
(That last is not aimed at Ansari, who seems to me to have behaved quite like a gentleman, acceding to every request “Grace” actually made. It’s not his fault he couldn’t read her mind.)
I don’t have a fix for this problem. As you imply, if women were able to coordinate a retreat to withholding early sex they would regain some of their lost bargaining power, but I don’t see any realistic possibility of this today. The problem is that the refuseniks while such an agreement was forming, and the defectors after it formed, would be rewarded with more sex with high-status men – which is exactly what every player on the female side is instinctively wired to want.
I’ve noted before that, as a separate issue from hypergamy, women seem to be wired to want more sex on more casual terms than is actually good for their prospects of landing a parenting partner. This makes the defection problem more difficult – it means that coordinating a change wouldn’t just be fighting instrumental rationality with too short a time horizon, but some kind of holdover from the environment of ancestral adaptation that makes women irrationally willing.
So the fix, if there were one, would have to be imposed on all women. Good luck with that; religion has lost the power to do it and there is no other institution even positioned to try. Ironically, the most vociferous opposition to such an imposition attempt would come from…feminists. And there’d be little help from high-status men, either.
This all makes some sense of the extreme repressiveness of many traditional societies, including our own until recently. The old ways had features we now find ugly and unacceptable, but maybe that was the best adaptation they could manage to a hard problem! It is unlikely we can go back…”How ya gonna keep ’em down on the farm”, and all that. But what do we do to go forward?
December 28, 2017
The blues ate rock and roll!
I’ve been diving into the history of rock music recently because, quite by chance a few weeks ago, I glimpsed an answer to a couple of odd little questions that had occasionally been bothering me for decades.
The most obtrusive of these questions is: Why does nothing in today’s rock music sound like the Beatles?
It’s a pertinent question because the Beatles were so acclaimed as musical innovators in their time and are still so hugely popular. And yet, nobody sounds like them. Since not long after the chords of “Let It Be” died away in 1970, every attempt to revive the Beatlesy sound of bright vocal-centered ensemble pop has lacked any staying power among rock fans. It gets tried every once in a while by a succession of bands running from Badfinger to the Smithereens, and goes nowhere. Why is this?
Another, related question is: Why does so very little in today’s rock music sound like Chuck Berry?
Inventor of rock and roll, they still call him. And yet outside of occasional tributes and moments of self-conscious museumizing, nobody writes rock music that sounds anything like “Johnny B. Goode” anymore. Modern tropes and timbre are vastly different. Only the rock beat – only the drum part – survives pretty much intact.
It’s odd, when you think about it. The sound that electrified the late Fifties and Sixties is still revered, but it’s gone. The basic rock beat remains, but everything above it has been flooded out, replaced by something harder and darker.
We all sort of know, even as casual listeners, that rock has evolved a lot. There’s even a tendency for the term “rock and roll” to nowadays be specifically confined to the older sound, with “rock” standing alone to refer to the more modern stuff.
But…what happened? What made the newer sound we all take for granted? Where did it come from?
If and when you start wondering about this, YouTube is a terrific research library. You can use the search facility to hop across decades and genres. With Wikipedia to trace connections among artists this sort of musicological forensics is probably easier than it has ever been before.
I’ve been listening while I programmed, and taking occasional times out to think about what I was hearing and how it fits into a larger picture. My first clue was a quip in an article about the legendary Chicago blues guitarist Buddy Guy, who reported being irritated when people thought he was imitating Jimi Hendrix when in fact it was rather the other way around.
Here’s what I found. The sea-change happened between 1969 and 1971. The moving figures were: Jimi Hendrix. British Invasion bands like the Rolling Stones and Led Zeppelin and the Who. American West Coast bluesmen like Mike Bloomfield and Al Kooper. The San Francisco acid-rock scene. And many lesser imitators.
What they did was raze old-school rock-and-roll to the ground, replacing it with a bastard child of LSD and Chicago-style hard electric blues. That angry, haunting, minor-key idiom is what buried the Beatles and put a stamp on rock music so final that today the sound of any modern arena rocker – like, say, Guns’n’Roses – is recognizably the same thing musicians began to record around 1970.
(Which, it should be pointed out, is a very long run for a mass-market pop genre. It’s as though in 1970 our radios had still been full of pop in forms dating from 1925…)
Yesterday I listened to the first three albums by the James Gang (“Yer Album”, 1969; “The James Gang Rides Again”, 1970; “Thirds”, 1971) for the first time in probably 35 years. Why? Because when I stretched my mind back to try to remember the earliest pieces of music that would sound completely in place on a modern rock playlist and were recorded by people neither black nor British, “Funk #49” and “Walk Away” leapt to mind. Go listen, and think about how undated and modern Joe Walsh’s guitar work sounds…
You can hear the transition happening on these albums. The 1969 one sounds like a midwestern imitation of a Fillmore acid jam session – loose, spacy, a collage of half-assimilated influences from old-school rock and roll, country, blues, and psychedelia. It has clever bits but is kind of a mess. Walsh’s signature guitar timbre is there, but he’s still stumbling.
The 1971 album is subtly different. It’s played harder; the phrasing is tighter, the dynamic range is wider, the compositions sure-footed. We’re not listening to a collage of influences any more; this is its own thing, and the mature playing style Walsh would exhibit on later solo classics like “Rocky Mountain Way” and bring to the Eagles in 1975 is established.
Stepping back a bit, the style he settles into is much, much more like hard Chicago blues than it is like Chuck Berry or the Beatles or Buddy Holly. You can’t really pick this up by listening to modern rock, because that’s what everything sounds like (Walsh’s later fame was a contributing factor). You have to go back to pre-1970s blues, from before the transition I’m talking about, to really get it.
I’m pointing at players like Buddy Guy, Muddy Waters, Howlin’ Wolf, Elmore James…the men who took Delta blues and urbanized it and amped it up, adding electric guitars and jazz-influenced rhythm sections. John Lee Hooker was more Delta than Chicago but seems to have had a particularly strong influence on the Chicago sound as rockers received it.
This was all going on in the decade and a half before “Yer Album”, parallel to 1950s proto-rockers and the Beatles but running pretty separate from them. It was blacker and more urban, while white proto-rockers owed more to Texas swing and country music and gospel than to blues. You can hear that in, for example, Buddy Holly or Elvis Presley.
Earlier, I had “or British” in some qualifiers. It’s pretty well-trodden ground that the British Invasion bands were made of guys who had become fascinated by blues music in the England of the early 1960s. They crossed the Atlantic bringing that enthusiasm with them. This is well known, but it’s often thought of as a minor historical point with only unspecified and vague relevance to later music.
What I’m arguing is that the ensuing victory of hard blues over pre-1969 “rock and roll” was so total that it made itself nigh-invisible. We can see it, sideways, by noticing that today everyone from before the transition sounds quaint and rootsy and – even when as listenable as the Beatles – not actually very relevant to modern rock.
Cue up any modern leather-jacketed rock hero. Then cue up Buddy Holly or a random early Beatles single. Then cue up Howlin’ Wolf. I think you’ll see what I mean – or, properly, hear it.
