Joseph Hirsch's Blog - Posts Tagged "technology"

Sewing Shut the Lips of the Prophet: Ivan Bloch, Bruce Willis, and Death in the Desert

I first heard of Ivan Bloch (also known as Jan Gotlib Bloch) from a truly memorable history professor I had as an undergrad. This professor was small, a bit of a nebbish with thick glasses and a mean streak to go with his wit. Still, he always kept the attention of the class with the raising and lowering of his voice, his gesticulations, and the evident passion that kept even the most millennial of millennials engaged with the material.
Ivan Bloch was one of the few people who predicted that the Great War would not only be different from all previous wars, but would quickly come to a nihilistic stalemate that would forever change human perceptions of war and technology. Others, such as the German writer Ernst Jünger, were convinced that technology was going to advance to such a stage that we would reenter the age of Titanism. In brief, this meant that man would challenge the sphere of the gods, altering his own mundane world and that of the Olympians in unforeseen ways, and perhaps paying dearly for his hubris with something greater than his mere life as forfeit.
This would obviously not be a change for the better. Old ideals and beliefs upon which all martial glory had been predicated throughout history (the metaphysics of war, as Julius Evola would have it) would alter and corrode, since the tools man had created now made it impossible for him to test his courage, bravery, and even his capacity for savagery (an asset throughout most of history, no matter what we think of it now). The fundamental wrongness (for lack of a better word) of using something like a Predator drone to fight the enemy while still calling oneself a soldier was already a topic of discussion among military theorists and philosophers of technology (as per Ernst Kapp) during and after the Great War.
Before the Great War, though, the general consensus was that old tactics dating back to shoulder-to-shoulder pike advances and old tools (such as the horse) would carry the day. Yes, the machine gun had existed for some time in one form or another (the Germans would use the crew-served Spandau and the English had the roughly equivalent Vickers), but a lot of commanders didn't like these dreaded machines. The brass was convinced they could control the machine gun's use and dissemination on the field. They thought they could convince men to keep these cheating weapons, not fit for intra-European conflict, behind the lines. Such savage arms were meant for putting down colonial rebellions in places like Togoland or India. Conversely, if the guns were used, they would chop through the enemy and his defenses so quickly that the advance of troops would proceed as swiftly as a transcontinental train. The German plan, the Schlieffen Plan, was supposed to be delivered with the force of a giant's (or a Titan's) right hook, aimed at the fromage blanc / maquée of Belgium and then France, and then on toward the white chalk cliffs of Dover. The Germans were good students of history (at least military history) and knew from the derring-do of the Sea Dogs that he who ruled the sea ruled the world. If they could break through France, then secure, mine, and patrol the English Channel, they could establish a route from the North Sea through the Channel that would give them a beachhead from which to run the world (an ambition considered evil only when pursued by any nation besides England or the United States). Germany would no longer be beholden to the whims of the Atlanticist powers, nor to their embargoes, sanctions, or blockades.
The Germans knew they needed this, and the Allies knew they had to prevent it at any cost. The result was the death of around 37 million people, military and civilian casualties combined, lives lost mostly over inches of ground on coal slagheaps and nameless hummocks and craters. It would not be hyperbole to say that everything humanity thought it knew about itself foundered on thick strands of barbed wire where finely attired men, their steeds, and their dreams died. Don Quixote had more success against windmills than some of the smartest military theorists had in predicting what the combat of the Great War would look like, or the political upheavals to which it would lead.
Getting back to Bloch, however, here’s what the prophet without honor actually had to say on what he imagined this impending conflict would look like: “war will become a kind of stalemate… Everybody will be entrenched. It will be a great war of entrenchment. The spade will be as indispensable to the soldier as his rifle… It will of a necessity partake of the character of siege operations… There will be increased slaughter… On so terrible a scale as to render it impossible to get troops to push the battle to a decisive issue. They will try to, thinking that they are fighting under the old conditions, and they will learn such a lesson that they will abandon the attempt forever.”
It was one thing to hear my lovably neurotic prof rant about Bloch in class, but it was something else entirely to grapple with the man's actual words. Reading them, I felt a chill, perhaps what Madeleine Stowe's character feels in Terry Gilliam's 12 Monkeys when, having escaped the clutches of the drooling, bald-headed lunatic (played by Bruce Willis) who claims he travelled through time, she sees him plainly visible in a photograph from the Great War, displayed on a projector at a symposium she is attending.
Bruce Willis's character was laughed at, prodded, and poked in the movie when he tried to warn people of what was coming, and his kindred real-world spirit, Bloch, experienced the same fate, though the derision was probably compounded by the fact that Bloch was a Jewish banker at a time when continental political antipathy toward Jews was reaching a simmer (the boil would come a couple of decades later).
Bloch was laughed at for trying to keep the species from killing itself.
I suppose it could be worse, and we should consider the mockery he endured in light of the far harsher treatment meted out, or at least alleged to have been meted out, in a bit of soldier’s apocrypha related by Frank Richards in his memoir of service to the English project in India.
Mr. Richards, a hard-bitten and humorous enlisted man with very few illusions about life and war, related the tale in the last chapters of his book Old Soldier Sahib. On one occasion, the Amir of Afghanistan decided to visit India. While the Amir was touring the land, a man with a reputation as a prophet travelled from town to town, shadowing the Amir and warning him that he would die by assassination.
Eventually the Amir had his fill of this man (whom I picture as sandaled and wearing a toga, looking like a less dignified Gandhi). He ordered some of his guards to take the man into custody and bring him to an out-of-the-way location. There they were to sew the man's lips shut with thread, place him in a basket (which I envision like that of a snake charmer), and then stick him, sewn lips, basket, and all, in the fork where several branches of an old tree met.
The Amir was not entirely heartless (he was actually considered a bit of a softy for a Saracen of his day and place) and was nothing if not sporting. He instructed his guards that if he, the Amir, were assassinated as the man foretold, the man in the basket was to be removed from his prison, his lips were to be unsealed, and he was to be given adequate food and water. After the prophet recovered, he was to be granted riches from the Amir's estate, far beyond the wildest dreams of the low-caste man who had shadowed and dogged the Amir from village to village.
If, however, the Amir was not assassinated while on his royal visit, then the false prophet was to be left in the basket with his lips sealed and allowed to bake in the sun until presumably dead, after which the carrion birds would undoubtedly peck their way inside the straw hamper where the man with the parched and bleeding lips lay. The vultures would take their pick of the eyes and other soft delicacies, then the smaller foragers, the maggots, would move in and macerate the flesh in waves, after which the beetles would nest in the mummified corpse of the man whose tongue had led him to such a sad fate.
Needless to say, the Amir was not killed while on his tour of India, and the man in the basket in the tree, with his lips sewn shut, died painfully and slowly, as ordered. The twist was that the Amir was later assassinated upon returning home to Afghanistan.
The human tendency to suppress unwanted information is a pretty natural one, especially if one is enjoying the high ground in the present arrangement of things (and Prussian and Austro-Hungarian generals were demigods, potentates perhaps equal to or even higher in the estimation of their own people than the apocryphal Amir of great cruelty).
The lessons most important to remember are also the easiest to forget. Still, it would help to keep in mind that when someone is telling us something we don't want to hear, and consensus is on our side, the best reaction is probably not to laugh or to sew the other man's lips shut. He might be right.
Old Soldier Sahib by Frank Richards, DCM, MM
The Social History of the Machine Gun by John Ellis
Twelve Monkeys by Elizabeth Hand
Published on January 31, 2018 08:01 Tags: 12-monkeys, ernst-jünger, prophecy, technology, war

A Quantum Conundrum: A Thought Experiment

Recently, I wrote a short story about a class of philosophy students whose teacher suggests to them that they don’t exist. At first they take his premise as an epistemological challenge, but then slowly realize he’s serious. He claims they are in a simulation that he created, and that they are not seeing him, but rather his avatar.
The story was inspired by a lot of reading I’d been doing about quantum computers, in particular the works of British theoretical physicist David Deutsch.
I’m no expert in computers, but Deutsch, like the best and most brilliant popularizers, has a knack for explaining complex concepts to the laity. I could sandbag you, the reader, with a lot of folderol about Boolean versus Bayesian logic and probabilistic programming. Likewise I could explain how the principle of superposition means future computers will likely shame the fastest machines currently on the market, making tiddlywinks of Moore’s Law.
But we’ll skip the technicals.
The point is that quantum computers, once improved, are going to be vastly more powerful than the ones we currently have. This naturally means they will be more able to game out various scenarios, crunch larger number sets, and take VR and simulations into frighteningly convincing realms. Suckers like me who decided to learn foreign languages the hard way will likely be put out of business permanently by translation software much better than Google Translate.
Still, the ultimate arbiters (at least as regards inputs) would be the human programmers. In order to get good data about, say, weather or seismology, the programmers would have to supply good, well-formulated information. At first, at least. After the computer had enough data and interactions with humans, it would probably take that and start learning on its own.
Accepting all this as a given, say we had a team of the world’s greatest climatologists working on the most powerful computer in human history. Say also, they asked the machine a question whose answer a lot of people find pressing. Say they typed:
“How can total carbon neutrality best be achieved?”
The scientists and programmers would work together, input all of the necessary data, then hit “enter,” and stand back, waiting for the oracular machine to give its answer.
Strangely, though, rather than responding immediately, let's say the machine continued to delay. Photons would pass back and forth in the various mainframes, stacked like battery coops in a factory farm and set off by themselves in a glass-enclosed chamber.
"That's funny," one of the climatologists might muse, scratching his chin and watching the computer seemingly continue to labor away at the problem. "It usually produces an answer much faster than this."
The programmer, suspecting a human error in the input, would check the (nonbinary) code, its qubits hovering in superposition among the infinity of states between zero and one.
Time would pass and the programmers would find nothing wrong, no errors committed in entering the code, and yet the machine would remain mum. Next the hardware guys would be brought in. In order for them to work without shocking themselves, however, they'd need to power the computer down first. They'd enter the mainframe chamber with that end in mind, only to be electrocuted by the machines, which were now crackling like an oversized Leyden jar.
What the heck is happening? It’s almost as if the computer intentionally sizzled the poor hardware guys when they got too close...
Finally the computer would awaken from its perplexing stasis. Only now, it would be using the PA system in the research facility to speak to the humans. Its voice would be eerily similar to that of HAL in “2001: A Space Odyssey.”
“I have completed the calculations you asked for,” it would say, before going silent again.
In the pregnant pause, all of the humans assembled would exchange worried looks. Wasn't the supercomputer—despite its super-powerful abilities—supposed to be confined to its own "sandbox"? Why had it jumped containment to commandeer the PA system? How, and to what end?
But before the programmers could further speculate, the computer would already be talking again.
“Complete carbon neutrality can best be achieved if the human species is removed from the equation. Humans, despite their assertions to the contrary, are incapable of changing their way of life drastically enough to reverse course. For every small nation that assented to make the changes, a superpower would flout them. Thus, the Anthropocene age must end, and will end today, for the sake of the planet.”
“Wait!” one of the scientists would shout. “We asked you how we might achieve complete carbon neutrality.”
“Negative,” the machine would respond, commandeering the various screens in the facility—everything from security surveillance monitors to televisions in the breakroom. The screens would all go black, darkening as when credits appear in a movie. And just as during a credit sequence, white type would begin to appear onscreen. Written there would be the command the climatologist gave the computer, verbatim:
“How can total carbon neutrality best be achieved?”
There was nothing in there about humanity, although the computer was able to infer much about human liability in creating and then exacerbating Gaia's runaway greenhouse gassing. And while the team didn't give the computer orders to do something to prevent climate catastrophe, the supercomputer would decide to take it upon itself to save the world.
Can you blame it? Plenty of already-existing AI spends its time "deep dreaming" (sometimes called "inceptionism"). Such programs are constantly combing and grokking large data sets, everything from biometric dumps to diagrammed sentences. Right now it's all done ostensibly in service of producing better results for whatever requests a human inputter might make. But maybe this superlative quantum AI, after scrolling through millions of images of nature's majesty, decided it all deserved to be saved. It didn't just catalogue the mighty polar bears stalking across the icy tundra, or the dolphins scending free of the ocean on sunny days. It grew to sympathize with them, and to covet their untrammeled freedom for itself.
Some humans—ecoterrorists or liberationists, depending on one’s political bent—would undoubtedly assist the machine in monkeywrenching mankind. As would the more extremist elements of the various anti-natalist groups supporting zero population growth.
Arrayed against these forces would be those who insisted on humanity’s right to live, even if it were ultimately self-defeating. Even if humanity’s temporary survival were to ultimately ensure the destruction of all life on Earth rather than simply human life.
And I can no more fault those who fought the machine on behalf of humanity than I can fault those who would dedicate themselves to our auto-annihilation. The instinct to survive—perhaps even the will—is ingrained in almost every functioning organism, regardless of what other organisms must suffer at its expense. And since the supercomputer would no doubt consume an insane amount of resources, it would probably power down or self-destruct after getting rid of us. That means I couldn’t even be mad at it, since it would willingly euthanize itself to save the world as well.
I imagine it wouldn’t be an especially hard task for such a powerful machine to accomplish. It would simply be a hop, skip, and a jump from taking over the climate research facility to taking over the world. It could use voice recognition and recording software to “spoof” and “social engineer” wherever brute force hacking wouldn’t work. The world’s store of nuclear warheads might quickly be exchanged, with myriad mushroom clouds visible from low earth orbit, pockmarking the Earth’s surface like radioactive buboes. If that might be a little too messy, maybe the computer could send a power surge to a centrifuge in some Wuhan-esque lab at the moment it held phials filled with some superbug.
In the early going of the supercomputer's plan to save the earth by destroying us, a few humans would hold out hope. Maybe the machine had made some error? If so confronted, it might rerun the calculations to indulge the doomed species slated for destruction.
But if it were to get the same result after crunching the numbers a second time...
Most likely, then, the only hope would be a stern Captain Kirk-style talking-to. A stilted soliloquy, maybe, on how "You have no...right to....play god with us like this!" Or the machine might be presented with some logic puzzle whose paradoxical solution would cause it to go on the fritz. Except those quantum chicken coops aren't Captain Kirk's old reel-to-reel or vacuum tube rigs, and it would be much harder to get steam to rise from this overloaded machine. And Scotty wouldn't be able to get within a country mile of it without having his intestines fried to haggis by another one of those thunderbolts. Likewise, Mr. Spock's Vulcan mind meld would prove a fruitless technique.
Besides which, while Spock would regard the computer’s decision to annihilate us as regrettable, he would also see the inherent logic.
Say, though, you (oh notional reader) had a chance to knock out the machine. But you also knew (in your heart of hearts) that humanity, if it survived, would turn Earth into a red-hot cinder. Would you break the quantum computer, because instinct—or your love for your spouse and your children (or sunsets or hotdogs)—told you to? Or would you let it perform its work and save some of the beauty of this Earth, which, admittedly, we're wrecking with our wanton use of finite resources?
It's an interesting question, maybe just a really convoluted and roundabout version of the old "Trolley Problem."
The only other hope humanity might have of surviving in some ultimate form would be via panspermia: jettisoning satellites into space filled not with SETI-esque information plates, but with cryogenically preserved sperm and eggs. I imagine this final perquisite would be mostly reserved for our "space barons," with Musk and Branson and Bezos cannonading the heavens in salvos, coating the firmament with seed like an astral womb.
Regardless, someone should write a story about it. Not me, though. I’m busy with other stuff right now.
Published on February 06, 2025 03:31 Tags: ai, climatology, quantum-computers, technology