Eric S. Raymond's Blog, page 35
September 29, 2014
Shellshock, Heartbleed, and the Fallacy of False Prominence
In the wake of the Shellshock bug, I guess I need to repeat in public some things I said at the time of the Heartbleed bug.
The first thing to notice here is that these bugs were found – and were findable – because of open-source scrutiny.
There’s a “things seen versus things unseen” fallacy here that gives bugs like Heartbleed and Shellshock false prominence. We don’t know – and can’t know – how many far worse exploits lurk in proprietary code known only to crackers or the NSA.
What we can project based on other measures of differential defect rates suggests that, however imperfect “many eyeballs” scrutiny is, “few eyeballs” or “no eyeballs” is far worse.
I’m not handwaving when I say this: we have statistics from places like Coverity that do defect-rate measurements on both open-source and proprietary closed-source products; we have academic research like the Wisconsin fuzz papers; we have CVE lists for Internet-exposed programs; we have multiple lines of evidence.
Everything we know tells us that while open source’s security failures may be conspicuous, its successes, though invisible, are far larger.
September 28, 2014
Commoditization, not open source, killed Sun Microsystems
The patent-troll industry is in full panic over the consequences of the Alice v. CLS Bank decision. While reading up on the matter, I ran across the following claim by a software patent attorney:
“As Sun Microsystems proved, the quickest way to turn a $5 billion company into a $600 million company is to go open source.”
I’m not going to feed this troll traffic by linking to him, but he’s promulgating a myth that must be dispelled. Trying to go open source didn’t kill Sun; hardware commoditization killed Sun. I know this because I was at ground zero when it killed a company that was aiming to succeed Sun – and, until the dot-com bust, looked about to manage it.
It is certainly the case that the rise of Linux helped put pressure on Sun Microsystems. But the rise of Linux itself was contingent on the plunging prices of the Intel 386 family and the surrounding ecology of support chips. What these did was make it possible to build hardware approaching the capacity of Sun workstations much less expensively.
It was a classic case of technology disruption. As in most such cases, Sun blew it strategically by being unwilling to cannibalize its higher-margin products. There was an i386 port of their operating system before 1990, but it was an orphan within the company. Sun could have pushed it hard and owned the emerging i386 Unix market, slowing down Linux and possibly relegating it to niche plays for a good long time.
Sun didn’t; instead, they did what companies often try in response to these disruptions – they tried to squeeze the last dollar out of their existing designs, then retreated upmarket to where they thought commodity hardware couldn’t reach.
Enter VA Linux, briefly the darling of the tech industry – and where I was on the Board of Directors during the dot-com boom and the bust. VA aimed to be the next Sun, building powerful and inexpensive Sun-class workstations using Linux and commodity 386 hardware.
And, until the dot-com bust, VA ate Sun’s lunch in the low and middle range of Sun’s market. Silicon Valley companies queued up to buy VA’s product. There was a running joke in those days that if you wanted to do a startup in the Valley, the standard first two steps were (1) raise $15M on Sand Hill Road, and then (2) spend a lot of it buying kit at VA Linux. And everyone was happy until the boom busted.
Two-thirds of VA’s customer list went down the tubes within a month. But that’s not what really forced VA out of the hardware business. What really did it was that VA’s hardware value proposition proved as unstable as Sun’s, and for exactly the same reason. Commoditization. By the year 2000, building a Unix box had gotten too easy; there was no magic in the systems integration, and anyone could do it.
Had VA stayed in hardware, it would have been in the exact same losing position as Sun – trying to defend a nameplate premium against disruption from below.
So, where was open source in all this? Of course, Linux played a key part in helping VA (and the white-box PC vendors positioned to disrupt VA after 2000) exploit hardware commoditization. By the time Sun tried to open-source its own software, the handwriting was already on the wall; giving up proprietary control of their OS couldn’t make their situation any worse.
If anything, OpenSolaris probably staved off the end of Sun by a couple of years by adding value to Sun’s hardware/software combination. Enough people inside Sun understood that open source was a net win to prevail in the political battle.
Note carefully here the distinction between “adding value” and “extracting secrecy rent”. Companies that sell software think they’ve added value when they can collect more secrecy rent, but customers don’t see it that way. To customers, open source adds value precisely because they are less dependent on the vendor. By open-sourcing Solaris, Sun partway closed the value-for-dollar gap with commodity Linux systems.
Open source wasn’t enough. But that doesn’t mean it wasn’t the best move. It was necessary, but not sufficient.
The correct lesson here is “the quickest way to turn a $5 billion company into a $600 million company is to be on the wrong end of a technology disruption and fail to adapt”. In truth, I don’t think anything was going to save Sun in the long term. But I do think that given a willingness to cannibalize their own business and go full-bore on 386 hardware they might have gotten another five to eight years.
September 26, 2014
Program Provability and the Rule of Technical Greed
In a recent discussion on G+, a friend of mine made a conservative argument for textual over binary interchange protocols on the grounds that programs always need to be debugged, and thus readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.
I agree with this argument; I’ve made it often enough myself, notably in The Art of Unix Programming. But it was something his opponent said that nudged at me. “Provable programs are the future,” he declaimed, pointing at seL4 and CompCert as recent examples of formal verification of real-world software systems. His implication was clear: we’re soon going to get so much better at turning specifications into provably correct implementations that debuggability will cease to be a strong argument for protocols that can be parsed by a Mark I Eyeball.
Oh foolish, foolish child, that wots not of the Rule of Technical Greed.
Now, to be fair, the Rule of Technical Greed is a name I just made up. But the underlying pattern is a well-established one from the earliest beginnings of computing.
In the beginning there was assembler. And programming was hard. The semantic gap between how humans think about problems and what we knew how to tell computers to do was vast; our ability to manage complexity was deficient. And in the gap software defects did flourish, multiplying in direct proportion to the size of the programs we wrote.
And the lives of programmers were hard, and the case of their end-users miserable; for, strive as the programmers might, perfection was achieved only in toy programs while in real-world systems the defect rate was nigh-intolerable. And there was much wailing and gnashing of teeth.
Then, lo, there appeared the designers and advocates of higher-level languages. And they said: “With these tools we bring you, the semantic gap will lessen, and your ability to write systems of demonstrable correctness will increase. Truly, if we apply this discipline properly to our present programming challenges, shall we achieve the Nirvana of defect rates tending asymptotically towards zero!”
Great was the rejoicing at this prospect, and swiftly resolved the debate despite a few curmudgeons who muttered that it would all end in tears. And compilers were adopted, and for a brief while it seemed that peace and harmony would reign.
But it was not to be. For instead of applying compilers only to the scale of software engineering that had been customary in the days of hand-coded assembler, programmers were made to use these tools to design and implement ever more complex systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.
Then, lo, there appeared the advocates of structured programming. And they said: “There is a better way. With some modification of our languages and trained discipline exerted in the use of them, we can achieve the Nirvana of defect rates tending asymptotically towards zero!”
Great was the rejoicing at this prospect, and swiftly resolved the debate despite a few curmudgeons who muttered that it would all end in tears. And languages which supported structured programming and its discipline came to be widely adopted, and these did indeed have a strong positive effect on defect rates. Once again it seemed that peace and harmony might prevail, sweet birdsong beneath rainbows, etc.
But it was not to be. For instead of applying structured programming only to the scale of software engineering that had been customary in the days when poorly-organized spaghetti code was the state of the art, programmers were made to use these tools to design ever more complex systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.
Then, lo, there appeared the advocates of systematic software modularity. And they said: “There is a better way. By systematic separation of concerns and information hiding, we can achieve the Nirvana of defect rates tending asymptotically towards zero!”
Great was the rejoicing at this prospect, and swiftly resolved the debate despite a few curmudgeons who muttered that it would all end in tears. And languages which supported modularity came to be widely adopted, and these did indeed have a strong positive effect on defect rates. Once again it seemed that peace and harmony might prevail, the lion lie down with the lamb, technical people and marketeers actually get along, etc.
But it was not to be. For instead of applying systematic modularity and information hiding only to the scale of software engineering that had been customary in the days of single huge code blobs, programmers were made to use these tools to design ever more complex modularized systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though now greatly improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.
Are we beginning to see a pattern here? I mean, I could almost write a text macro that would generate the next couple of iterations. What’s at work here is a general pattern: every narrowing of the semantic gap, every advance in our ability to manage software complexity, every improvement in automated verification, is sold to us as a way to push down defect rates. But how each tool actually gets used is to scale up the complexity of design and implementation to the bleeding edge of tolerable defect rates.
This is what I call the Rule of Technical Greed: As our ability to manage software complexity increases, ambition expands so that defect rates and expected levels of technical debt are constant.
The application of this rule to automated verification and proofs of correctness is clear. I have little doubt these will be valuable tools in the relatively near future; I follow developments there with some interest and look forward to using them myself.
But anyone who says “This time it’ll be different!” earns a hearty horse-laugh. Been there, done that, still have the T-shirts. The semantic gap is a stubborn thing; until we become as gods and can will perfect software into existence as an extension of our thoughts, somebody’s still going to have to grovel through the protocol dumps. Design for debuggability will never be a waste of effort, because even if we believe our tools are perfect – proceeding from ideal specification to flawless implementation – how else will an actual human being actually know?
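If you want a concrete picture of what “groveling through the protocol dumps” means, here is a minimal sketch contrasting the two styles. The message layout and field names are invented for illustration, not taken from any real protocol:

```c
#include <inttypes.h>
#include <stdio.h>

/* Binary packing: compact on the wire, but a hex dump of it is opaque
 * to a human without the struct definition, the byte order, and the
 * compiler's padding rules in hand. */
struct report_bin {
    uint32_t timestamp;      /* seconds since epoch */
    int32_t  lat_microdeg;   /* latitude  * 1e6 */
    int32_t  lon_microdeg;   /* longitude * 1e6 */
};

/* Textual equivalent: a few dozen bytes larger, but a captured protocol
 * dump can be read -- and sanity-checked -- with the Mark I Eyeball. */
static void report_text(char *buf, size_t len,
                        uint32_t timestamp, double lat, double lon)
{
    snprintf(buf, len, "{\"time\":%" PRIu32 ",\"lat\":%.6f,\"lon\":%.6f}\n",
             timestamp, lat, lon);
}
```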
Halfway up the mountain
Last night, my wife Cathy and I passed our level 5 test in kuntao. That’s a halfway point to level 10, which is the first “guro” level, roughly equivalent to black belt in a Japanese or Korean art. Ranks aren’t the big deal in kuntao that they are in most Americanized martial arts, but this is still a good point to pause for reflection.
Kuntao is, for those of you new here or who haven’t been paying attention, the martial art my wife and I have been training in for two years this month. It’s a fusion of traditional wing chun kung fu (which is officially now Southern Shaolin, though I retain some doubts about the historical links even after the Shaolin Abbot’s pronouncement) with Philippine kali and some elements of Renaissance Spanish sword arts.
It’s a demanding style. Only a moderate workout physically, but the techniques require a high level of precision and concentration. Sifu Yeager has some trouble keeping students because of this, but those of us who have hung in there are learning techniques more commercial schools have given up on trying to teach. The knife work alone is more of a toolkit than some other entire styles provide.
Sifu made a bit of a public speech after the test about my having to work to overcome unusual difficulties due to my cerebral palsy. I understand what he was telling the other students and prospective students: if Eric can be good at this and rise to a high skill level you can too, and you should be ashamed if you don’t. He expressed some scorn for former students who quit because the training was too hard, and I said, loudly enough to be heard: “Sifu, I’d be gone if it were too easy.”
It’s true, the challenge level suits me a lot better than strip-mall karate ever could. Why train in a martial art at all if you’re not going to test your limits and break past them? That struggle is as much of the meaning of martial arts as the combat techniques are, and more.
Sifu called me “a fighter”. It’s true, and I free-sparred with some of the senior students testing last night and enjoyed the hell out of every second, and didn’t do half-badly either. But the real fight is always the one for self-mastery, awareness, and control; perfection in the moment, and calm at the heart of furious action. Victory in the outer struggle proceeds from victory in the inner one.
These are no longer strange ideas to Americans after a half-century of Asian martial arts seeping gradually into our folk culture. But they bear repeating nevertheless, lest we forget that the inward way of the warrior is more than a trope for cheesy movies. That cliche functions because there is a powerful truth behind it. It’s a truth I’m reminded of every class, and the reason I keep going back.
Though…I might keep going back for the effect on Cathy. She is thriving in this art in a way she hasn’t under any of the others we’ve studied together. She’s more fit and muscular than she’s ever been in her life – I can feel it when I hold her, and she complains good-naturedly that the new muscle mass is making her clothes fit badly. There are much worse problems for a woman over fifty to have, and we both know that the training is a significant part of the reason people tend to underestimate her age by a helluvalot.
Sifu calls her “the Assassin”. I’m “the Mighty Oak”. Well, it fits; I lack physical flexibility and agility, but I also shrug off hits that would stagger most other people and I punch like a jackhammer when I need to. The contrast between my agile, fluid, fast-on-the-uptake mental style and my physical predisposition to fight like a monster slugger amuses me more than a little. Both are themselves surprising in a man over fifty. The training, I think, is helping me not to slow down.
I have lots of other good reasons that I expect to be training in a martial art until I die, but a sufficient one is this: staying active and challenged, on both physical and mental levels, seems to stave off the degenerative effects of aging as well as anything else humans know how to do. Even though I’m biologically rather younger than my calendar age (thank you, good genes!), I am reaching the span of years at which physical and mental senescence is something I have to be concerned about even though I can’t yet detect any signs of either. And most other forms of exercise bore the shit out of me.
So: another five levels to Guro. Two, perhaps two and a half years. The journey doesn’t end there, of course; there are more master levels in kali. The kuntao training doesn’t take us all the way up the traditional-wing-chun skill ladder; I’ll probably do that. Much of the point will be that the skills are fun and valuable in themselves. Part of the point will be having a destination, rather than stopping and waiting to die. Anti-senescence strategy.
It’s of a piece with the fact that I try to learn at least one major technical skill every year, and am shipping software releases almost every week (new project yesterday!) at an age when a lot of engineers would be resting on their laurels. It’s not just that I love my work, it’s that I believe ossifying is a long step towards death and – lacking the biological invincibility of youth – I feel I have to actively seek out ways to keep my brain limber.
My other recreational choices are conditioned by this as well. Strategy gaming is great for it – new games requiring new thought patterns coming out every month. New mountains to climb, always.
I have a hope no previous generation could – that if I can stave off senescence long enough I’ll live to take advantage of serious life-extension technology. When I first started tracking progress in this area thirty years ago my evaluation was that I was right smack on the dividing age for this – people a few years younger than me would almost certainly live to see that, and people a few years older almost certainly would not. Today, with lots of progress and the first clinical trials of antisenescence drugs soon to begin, that still seems to me to be exactly the case.
Lots of bad luck could intervene. There could be a time-bomb in my genes – cancer, heart disease, stroke. That’s no reason not to maximize my odds. Halfway up the mountain; if I keep climbing, the reward could be much more than a few years of healthspan, it could be time to do everything.
September 25, 2014
Announcing microjson
If you’ve ever wanted a JSON parser that can unpack directly to fixed-extent C storage (look, ma, no malloc!) I’ve got the code for you.
The microjson parser is tiny (less than 700 LOC), fast, and very sparing of memory. It is suitable for use in small-memory embedded environments and in deployments where malloc() is forbidden in order to prevent memory-leak issues.
This project is a spin-out of code used heavily in GPSD; thus, the code has been tested on dozens of different platforms in hundreds of millions of deployments.
It has two restrictions relative to standard JSON: the special JSON “null” value is not handled, and object array elements must be homogeneous in type.
A programmer’s guide to building parsers with microjson is included in the distribution.
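Here, for flavor, is a minimal sketch of the declarative, fixed-extent style the parser supports. The structure follows the attribute-table idiom microjson inherits from GPSD’s JSON code, but treat the exact header name, field names, and type enumerators as assumptions to check against the shipped distribution:

```c
#include "mjson.h"   /* header name assumed; check the distribution */

/* All target storage is fixed-extent and statically allocated. */
static int    fix_mode;
static double latitude, longitude;
static char   device[64];

/* Attribute table mapping JSON object members onto that storage. */
static const struct json_attr_t tpv_attrs[] = {
    {"mode",   t_integer, .addr.integer = &fix_mode},
    {"lat",    t_real,    .addr.real    = &latitude},
    {"lon",    t_real,    .addr.real    = &longitude},
    {"device", t_string,  .addr.string  = device, .len = sizeof(device)},
    {NULL},
};

/* Returns 0 on success, a negative error code on parse failure. */
int parse_tpv(const char *buf)
{
    return json_read_object(buf, tpv_attrs, NULL);
}
```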
September 23, 2014
Never let an invariant go untested
I’ve been blog-silent the last couple of days because I’ve been chasing down the bug I mentioned in Request for help – I need a statistician.
I have since found and fixed it. Thereby hangs a tale, and a cautionary lesson.
Going in, my guess was that the problem was in the covariance-matrix algebra used to compute the DOP (dilution-of-precision) figures from the geometry of the satellite skyview.
(I was originally going to write a longer description than that sentence – but I ruefully concluded that if that sentence was a meaningless noise to you the longer explanation would be too. All you mathematical illiterates out there can feel free to go off and have a life or something.)
My suspicion particularly fell on a function that did partial matrix inversion. Because I only need the diagonal elements of the inverted matrix, the most economical way to compute them seemed to be by minor subdeterminants rather than a whole-matrix method like Gauss-Jordan elimination. My guess was that I’d fucked that up in some fiendishly subtle way.
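(For the record, the trick behind “by minor subdeterminants” is the adjugate formula: the diagonal entries of an inverse can be read off from cofactors without inverting the whole matrix,

$$(A^{-1})_{ii} = \frac{C_{ii}}{\det A},$$

where $C_{ii}$ is the determinant of the submatrix left after deleting row $i$ and column $i$; the sign factor on the diagonal is always $+1$. Computing a handful of those small determinants is cheaper than a whole-matrix inversion when only the diagonal is wanted.)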
The one clue I had was a broken symmetry. The results of the computation should be invariant under permutations of the rows of the matrix – or, less abstractly, it shouldn’t matter which order you list the satellites in. But it did.
How did I notice this? Um. I was refactoring some code – actually, refactoring the data structure the skyview was kept in. For hysterical raisins – er, historical reasons – the azimuth/elevation and signal-strength figures for the sats had been kept in parallel integer arrays. There was a persistent bad smell about the code that managed these arrays that I thought might be cured if I morphed them into an array of structs, one struct per satellite.
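The shape of that refactor, as a hedged sketch – the declarations here are illustrative stand-ins, not GPSD’s actual code:

```c
#define MAXCHANNELS 72   /* illustrative channel count */

/* Before: parallel arrays, where nothing but programmer discipline
 * keeps slot i of one array describing the same satellite as slot i
 * of the others. */
int azimuth[MAXCHANNELS], elevation[MAXCHANNELS], ss[MAXCHANNELS];

/* After: one struct per satellite, so the related fields travel
 * together and can be sorted, copied, or permuted as a unit. */
struct satellite_t {
    int azimuth;
    int elevation;
    int ss;          /* signal strength */
};
struct satellite_t skyview[MAXCHANNELS];
```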
Yeeup, sure enough. I flushed two minor bugs out of cover. Then I rebuilt the interface to the matrix-algebra routines. And the sats got fed to them in a different order than previously. And the regression tests broke loudly, oh shit.
There are already a couple of lessons here. First, have a freakin’ regression test. Had I not I might have sailed on in blissful ignorance that the code was broken.
Second, though “If it ain’t broke, don’t fix it” is generally good advice, it is overridden by this: If you don’t know that it’s broken, but it smells bad, trust your nose and refactor the living hell out of it. Odds are good that something will shake loose and fall on the floor.
This is the point at which I thought I needed a statistician. And I found one – but, I thought, to constrain the problem nicely before I dropped it on him, it would be a good idea to isolate out the suspicious matrix-inversion routine and write a unit test for it. Which I did. And it passed with flying colors.
While it was nice to know I had not actually screwed the pooch in that particular orifice, this left me without a clue where the actual bug was. So I started instrumenting, testing for the point in the computational pipeline where row-symmetry broke down.
Aaand I found it. It was a stupid little subscript error in the function that filled the covariance matrix from the satellite list – k in two places where i should have been. Easy mistake to make, impossible for any of the four static code checkers I use to see, and damnably difficult to spot with the Mark 1 eyeball even if you know that the bug has to be in those six lines somewhere. Particularly because the wrong code didn’t produce crazy numbers; they looked plausible, though the shape of the error volume was distorted.
Now let’s review my mistakes. There were two, a little one and a big one. The little one was making a wrong guess about the nature of the bug and thinking I needed a kind of help I didn’t. But I don’t feel bad about that one; ex ante it was still the most reasonable guess. The highest-complexity code in a computation is generally the most plausible place to suspect a bug, especially when you know you don’t grok the algorithm.
The big mistake was poor test coverage. I should have written a unit test for the specialized matrix inverter when I first coded it – and I should have tested for satellite order invariance.
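For concreteness, here is a minimal sketch of what an order-invariance test can look like; the type and function names are hypothetical stand-ins, not GPSD’s actual interfaces:

```c
#include <assert.h>
#include <math.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the real types and entry point. */
struct sat_t { double azimuth, elevation; };
struct dop_t { double hdop, vdop, pdop; };

/* The routine under test, assumed to be provided elsewhere. */
extern struct dop_t compute_dop(const struct sat_t *sats, int nsats);

/* Fisher-Yates shuffle of the satellite list. */
static void shuffle(struct sat_t *sats, int n)
{
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        struct sat_t tmp = sats[i];
        sats[i] = sats[j];
        sats[j] = tmp;
    }
}

/* The invariant: DOP figures must not change when the satellite
 * list is permuted. */
void test_order_invariance(struct sat_t *sats, int nsats)
{
    struct dop_t before = compute_dop(sats, nsats);
    shuffle(sats, nsats);
    struct dop_t after = compute_dop(sats, nsats);

    assert(fabs(before.hdop - after.hdop) < 1e-9);
    assert(fabs(before.vdop - after.vdop) < 1e-9);
    assert(fabs(before.pdop - after.pdop) < 1e-9);
}
```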
The general rule here is: to constrain defects as much as possible, never let an invariant go untested.
September 19, 2014
Request for help – I need a statistician
GPSD has a serious bug somewhere in its error modeling. What it affects is the position-error estimates GPSD computes for GPSes that don’t compute them internally and report them on the wire themselves. The code produces plausible-looking error estimates, but they lack a symmetry property that they should have to be correct.
I need a couple of hours of help from an applied statistician who can read C and has experience using covariance-matrix methods for error estimation. Direct interest in GPS and geodesy would be a plus.
I don’t think this is a large problem, but it’s just a little beyond my competence. I probably know enough statistics and matrix algebra to understand the fix, but I don’t know enough to find it myself.
Hundreds of millions of Google Maps users might have reason to be grateful to anyone who helps out here.
A Closed Future for Mathematics?
In a blog post on Computational Knowledge and the Future of Pure Mathematics Stephen Wolfram lays out a vision that is in many ways exciting and challenging. What if all of mathematics could be expressed in a common formal notation, stored in computers so it is searchable and amenable to computer-assisted discovery and proof of new theorems?
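To make “a common formal notation, amenable to computer-assisted discovery and proof” slightly more concrete, here is a toy machine-checkable statement in Lean 4; any serious proof assistant would serve equally well, and nothing about this example is specific to Wolfram’s tools:

```lean
-- Commutativity of addition on the natural numbers, stated formally and
-- discharged by a library lemma.  A system like the "Omega" described
-- below would hold millions of such statements, each carrying a proof
-- term that a machine can re-check from the axioms.
theorem add_comm' (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```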
As a former mathematician who is now a programmer, I have had similar dreams for a very long time; I think that was inevitable, since anyone with that common background would imagine broadly the same things. Like Dr. Wolfram, I have thought carefully not merely about the knowledge representation and UI issues in such a project, but also about the difficulties in staffing and funding it. So it was with a feeling more of recognition than anything else that I received much of the essay.
To his great credit, Dr. Wolfram has done much – more than anyone else – to bring this vision towards reality. Mathematica and Wolfram Alpha are concrete steps towards it, and far from trivial ones. They show, I think, that the vision is possible and could be achieved with relatively modest funding – less than (say) the budget of a typical summer-blockbuster movie.
But there is one question that looms unanswered in Dr. Wolfram’s call to action. Let us suppose that we think we have all of the world’s mathematics formalized in a huge database of linked theorems and proof sequences, diligently being crawled by search agents and inference engines. In tribute to Wolfram Alpha, let us call this system “Omega”. How, and why, would we trust Omega?
There are at least three levels of possible error in such a system. One would be human error in entering mathematics into it (a true theorem is entered incorrectly). Another would be errors in human mathematics (a false theorem is entered correctly). A third would be errors in the search and inference engines used to trawl the database and generate new proofs to be added to it.
Errors of the first two kinds would eventually be discovered by using inference engines to consistency-check the entire database (unless the assertions in it separate into disconnected cliques, which seems unlikely). It was already clear to me thirty years ago when I first started thinking seriously about this problem that sanity-checking would have to be run as a continuing background process responding to every new mathematical assertion entered: I am sure this requirement has not escaped Dr. Wolfram.
The possibility of errors of the third kind – bugs in the inference engine(s) – is more troubling. Such bugs could mask errors of the first two kinds, lead to the generation of incorrect mathematics, and corrupt the database. So we have a difficult verification problem here; we can trust the database (eventually) if we trust the inference engines, but how do we know we can trust the inference engines?
Mathematical thinking cannot solve this problem, because the most likely kind of bug is not a bad inference algorithm but an incorrect implementation of a good one. Notice what has happened here, though; the verification problem for Omega no longer lives in the rarefied realm of pure mathematics but the more concrete province of software engineering.
As such, there are things that experience can teach us. We don’t know how to do perfect software engineering, but we do know what the best practices are. And this is the point at which Dr. Wolfram’s proposal to build Omega on Mathematica and Wolfram Alpha begins to be troubling. These are amazing tools, but they’re closed source. They cannot be meaningfully audited for correctness by anyone outside Wolfram Research. Experience teaches us that this is a danger sign, a fragile single point of failure, and simply not tolerable in any project with the ambitions of Omega.
I think Dr. Wolfram is far too intelligent not to understand this, which makes his failure to address the issue the more troubling. For Omega to be trusted, the entire system will need to be transparent top to bottom. The design, the data representations, and the implementation code for its software must all be freely auditable by third-party mathematical topic experts and mathematically literate software engineers.
I would go so far as to say that any mathematician or software engineer asked to participate in this project is ethically required to insist on complete auditability and open source. Otherwise, what has the tradition of peer review and process transparency in science taught us?
I hope that Dr. Wolfram will address this issue in a future blog post. And I hope he understands that, for all his brilliance and impressive accomplishments, “Trust my secret code” will not – and cannot – be an answer that satisfies.
September 11, 2014
Review: Infinite Science Fiction One
Infinite Science Fiction One (edited by Dany G. Zuwen and Joanna Jackson; Infinite Acacia) starts out rather oddly, with Zuwen’s introduction in which, though he says he’s not religious, he connects his love of SF with having read the Bible as a child. The leap from faith narratives to a literature that celebrates rational knowability seems jarring and a bit implausible.
That said, the selection of stories here is not bad. Higher-profile editors have done worse, sometimes in anthologies I’ve reviewed.
Janka Hobbs’s Real is a dark, affecting little tale of a future in which people who don’t want the mess and bother of real children buy robotic child surrogates, and what happens when a grifter invents a novel scam.
Tim Majors’s By The Numbers is a less successful exploration of the idea of the quantified self – a failure, really, because it contains an impossible oracle-machine in what is clearly intended to be an SF story.
Elizabeth Bannon’s Tin Soul is a sort of counterpoint to Real in which a man’s anti-robot prejudices destroy his ability to relate to his prosthetically-equipped son.
P. Anthony Ramanauskas’s Six Minutes is a prison-break story told from the point of view of a monster, an immortal mind predator who steals the bodies of humans to maintain existence. It’s well written, but diminished by the author’s failure to actually end it and dangling references to a larger setting that we are never shown. Possibly a section from a larger work in progress?
John Walters’s Matchmaker works a familiar theme – the time traveler at a crisis, forbidden to interfere or form attachments – unfortunately, to no other effect than an emotional tone painting. Competent writing does not save it from becoming maudlin and trivial.
Nick Holburn’s The Wedding is a creepy tale of a wedding disrupted by an undead spouse. Not bad on its own terms, but I question what it’s doing in an SF anthology.
Jay Wilburn’s Slow is a gripping tale of an astronaut fighting off being consumed by a symbiote that has at least temporarily saved his life. Definitely SF; not for the squeamish.
Rebecca Ann Jordan’s Gospel Of is strange and gripping. An exile with a bomb strapped to her chest, a future spin on the sacrificed year-king, and a satisfying twist in the ending.
Dan Devine’s The Silent Dead is old-school in the best way – could have been an Astounding story in the 1950s. The mass suicide of a planetary colony has horrifying implications the reader may guess before the ending…
Matthew S. Dent’s Nothing Besides Remains carries forward another old-school tradition – a robot come to sentience yearning for its lost makers. No great surprises here, but a good exploration of the theme.
William Ledbetter’s The Night With Stars is very clever, a sort of anthropological reply to Larry Niven’s classic The Magic Goes Away. What if Stone-Age humans relied on electromagnetic features of their environment – and then, due to a shift in the geomagnetic field, lost them? Well done.
Doug Tidwell’s Butterflies is, alas, a textbook example of what not to do in an SF story. At best it’s a trivial finger exercise about an astronaut going mad. There’s no reveal anywhere, and it contradicts the actual facts of history without explanation; no astronaut did this during Kennedy’s term.
Michaele Jordan’s Message of War is a well-executed tale of weapons that can wipe a people from history, and how they might be used. Subtly horrifying even if we are supposed to think of the wielders as the good guys.
Liam Nicolas Pezzano’s Rolling By in the Moonlight starts well, but turns out to be all imagery with no point. The author has an English degree; that figures – this piece smells of literary status envy, a disease the anthology is otherwise largely and blessedly free of.
J.B. Rockwell’s Midnight also starts well and ends badly. An AI on a terminally damaged warship struggling to get its cryopreserved crew launched to somewhere they might live again, that’s a good premise. Too bad it’s wasted on empty sentimentality about cute robots.
This anthology is only about 50% good, but the good stuff is quite original and the less good is mostly just defective SF rather than being anti-SF infected with literary status envy. On balance, better value than some higher-profile anthologies with more pretensions.
Review: Collision of Empires
Collision of Empires (Prit Buttar; Osprey Publishing) is a clear and accessible history that attempts to address a common lack in accounts of the Great War that began a century ago this year: they tend to be centered on the Western Front and the staggering meat-grinder that static trench warfare became as outmoded tactics collided with the reality of machine guns and indirect-fire artillery.
Concentration on the Western Front is understandable in the U.S. and England; the successor states of the Western Front’s victors have maintained good records, and nationals of the English-speaking countries were directly involved there. But in many ways the Eastern Front story is more interesting, especially in the first year that Buttar chooses to cover – less static, and with a sometimes bewilderingly varied cast. And, arguably, larger consequences. The war in the east eventually destroyed three empires and put Lenin’s Communists in power in Russia.
Prit Buttar does a really admirable job of illuminating the thinking of the German, Austrian, and Russian leadership in the run-up to the war – not just at the diplomatic level but in the ways that their militaries were struggling to come to grips with the implications of new technology. The extensive discussion of internecine disputes over military doctrine in the three officer corps involved is better than anything similar I’ve seen elsewhere.
Alas, the author’s gift for lucid exposition falters a bit when it comes to describing actual battles. Ted Raicer did a better job of this in 2010’s Crowns In The Gutter, supported by a lot of rather fine-grained movement maps. Without these, Buttar’s narrative tends to bog down in a confusing mess of similar unit designations and vaguely comic-operatic Russo-German names.
Still, the effort to follow it is worthwhile. Buttar is very clear on the ways that flawed leadership, confused objectives and wishful thinking on all sides engendered a war in which there could be no clear-cut victory short of the utter exhaustion and collapse of one of the alliances.
On the Eastern Front, as on the Western, soldiers fought with remarkable courage for generals and politicians who – even on the victorious side – seriously failed them.
