Joseph Hirsch's Blog - Posts Tagged "ai"
The Unprincipled Principle of Perverse Instantiation
I learned a cool term the other day, from a book about artificial intelligence: “Perverse Instantiation.” Perverse Instantiation in its most basic form is when you tell an artificial intelligence to do something and it does it, but only too well, to the point where its execution of your wish is a nightmare.
Let’s say, for instance, you have a Roomba (one of those disc-shaped vacuum cleaners that combs a room), but you’re sort of a teenaged Dr. Frink basement prodigy amateur scientist. You find a way to make your Roomba semi-sentient, or at least heuristic enough to take some extra demands that weren’t coded into the factory settings of the little vacuum. Being a hardware as well as a software man, you give your Roomba some sort of primitive seeing apparatus: maybe something you jury-rigged from fish eyes you carved out of a bass you caught at a stream near your house (fish mostly have cone-and-rod structures in their eyes), which you then fuse somehow to your stepfather’s contact lenses.
You also break the legs off an old wooden Barcalounger that’s been sitting in your garage for a few months, and give the Roomba a couple of those carbon fiber Reacher/Grabber tools they sell at hardware stores, wired up to a series of pulleys inside of the Roomba. Your former vacuum now looks like some sort of spider-bot in a very low-budget picture from the Fifties. And while he (or she or it) doesn’t have opposable thumbs, it can at least grasp.
Let’s call our Roomba “Ronnie Roomba.” And since this whole hypothetical already has a cheesy-charming vibe like an eighties movie, let’s say you program Ronnie Roomba to sneak into convenience stores and steal beers, and to lockpick the doors of houses in the neighborhood, in order for Ronnie to steal the panties of cute girls who go to your high-school. You code some basic lessons about facial symmetry into his program so that he has a nominal grasp on the concept of “attractive.” His secondary directive (after some Asimovian stuff about not harming humans) is to bring back undergarments and beer to your teenaged lair, the furnished basement you occupy in a house where your mom cohabitates with a boyfriend she met in AA, whom she later married in a gingerbread chapel in Vegas where an Elvis impersonator did the officiating.
Just to further educate Ronnie after realizing he is at least somewhat trustworthy and competent, you let him read (with his half-fish, half-contact eyes) all of the books, articles, and paperwork in your house. To ensure Ronnie has no trouble breaking into the houses of the girls you’re crushing on, you head-wire/jack him directly into your Ethernet port and let him read about everything from lockpicking to stealth techniques. He even reads about several historical cases of jewel thefts and daring capers, and though you’re not sure, you somehow sense that Ronnie, though far from truly conscious at this point, is still more fascinated (if that’s the right word) by the narrative tales of derring-do than by the prosaic details of things like lock-picking. This could be anthropomorphic projection on your part, though; projecting human tendencies and motives onto robots is a perennial problem, or at least a hurdle, in sci-fi.
You’re thinking soon of letting him peruse your textbooks that spend more time in your backpack than open on your drafting desk, since the next step might be for the Ronnie Roomba to start doing your schoolwork. Though he’s nowhere near heuristic enough right now to be able to do something like write an essay, math shouldn’t be hard for him to master.
The process of building ever-more complex directives or skills or knowledge into an AI is known as scaffolding, if you’re curious.
The rough complement of scaffolding is backpropagation: instead of having abilities layered on from the outside, the machine works backward from its own errors to correct itself. Remember the term.
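The error-correction idea behind backpropagation can be sketched in a few lines. This is a deliberately minimal illustration on a single "neuron" (really just a line-fitter), with invented names and no real machine-learning library; actual backpropagation chains this same backward nudge through many layers.

```python
# A minimal sketch of the backpropagation idea: make a guess,
# measure the error, and nudge the weights backward against it.
# All names here are illustrative, not from any real library.

def train(samples, lr=0.1, epochs=200):
    """Fit y = w*x + b by repeatedly correcting from the error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b      # forward pass: the machine's guess
            err = pred - y        # how wrong was it?
            w -= lr * err * x     # backward pass: adjust each weight
            b -= lr * err         # in proportion to its blame
    return w, b

# Learn y = 2x + 1 from three examples
w, b = train([(0, 1), (1, 3), (2, 5)])
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The point is just the loop: guess, compare, propagate the mistake backward; repeat until the mistakes shrink.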
After finally fitting Ronnie with a set of amtrac treads (making it easier for him to get up and down stairs) as well as a couple of motion sensors (just in case he needs to dodge some minimum-wage clerk to time his beer thefts from the freezer case at the convenience store), you send him out into the world. You hope Ronnie doesn’t ignominiously end his life on his first day out, say, by getting run over by some drunk football player in his Camaro, or getting bitten to pieces by a stray neighborhood dog who mistakes him for some kind of plastic mobile chew-toy on treads.
You sit in your basement bedroom, feeling a bit like the staff at NASA when all the hard work is done, and the unmanned probe is out there in space. You curse yourself for forgetting to microchip Ronnie with GPS (or to equip him with a camera), but there will be time for that later, if he makes it back from this first mission successfully.
You take a couple bong hits and admire your Magic Mountain blacklight poster for an hour or two, and contemplate masturbating just to relieve some tension and kill some time, when you hear the sonorous whining of Ronnie’s chugging gears.
He’s home!
You open the door and there he stands, or sits, perched with his “arms” draped in silken purple underwear and the claw portions of one hand filled with cold brews. It’s a Miller Lite six pack in his hand, grasped by the cardboard carrying handle soaked through from beading condensation; they’re not your brand, but you remind yourself that you didn’t specify, that this is a preliminary mission, and that you’re underage, so you should take what you can get and be thankful for it.
Ronnie’s eyes watch you, emitting a greenish LED glow refracted in an eerie emerald prism broadcast through the real fish retinas. His glowing gaze remains fixed on you, as if in anticipation of a treat, some sort of reward. What would a robot want, though? Strands of code? Mined bitcoin?
“I did good?” Ronnie asks, in the voice of Hulk Hogan. You programmed him with the wrestler’s ultra-aggressive heelish growl, because you’re familiar with all the old SF tropes, where the AI has an inhuman, affectless voice. This has never made sense to you, since the places where one has a chance to humanize a robot (especially the voice) are exactly the ones where such dystopian flourishes wouldn’t be used. The airport gives that dumb atonal voice to the concourse shuttles to remind you to step back when the train is in motion; why would you outfit your own robot with such a cold, impersonal voice?
“You did well, not good,” you say, taking an opportunity to polish his grammar while simultaneously leaning down to the robot to grab a beer. You’re about to nab your Swiss Army multipurpose tool from its place on top of the stereo speaker where your bong is sitting, in order to open your first brew, when you take a closer look at the panties on Ronnie’s arm, lying there like the white cloth of a maître d’.
They’re large, more comfy-looking than sensual, and drooping like a parachute over the poor frame of Ronnie.
“Ronnie,” you say, “where did you get these?”
“From a Mrs. Janet Lancaster.”
You pause, furrow your brow, think back to a PTA meeting or maybe a mom in a carpool. Janey Lancaster’s mom! Hot in a cushy MILF way, with wide hips and a frizzy brown hairdo that suggests she misses her cheerleader days a couple decades back and has decided to keep the hairstyle that was popular in the eighties, along with some memories of prom night. You always got the feeling when you saw her at school events that she pines for her lost youth, suspects her husband is cheating on her, and would have preferred never to have gotten married in the first place.
“But she’s not …”
You’re about to finish your protest, when you realize your mistake. The jade eyes of Ronnie almost look watery with disappointment, as if they might shed green tears for failing the master in this simplest of tasks. “I studied thirty years’ worth of archived yearbooks online, and noticed she was a girl who attended your high-school, as did her daughter. Your directive gave no more specific preference.”
“You’re right,” you say to Ronnie. You think of apologizing, wonder how he would take it, or if it’s necessary. But questions of the robot’s qualia are out of the picture for now, beyond the scope of your task or even the 101 philosophy stuff you’ve picked up from the handful of dog-eared Nietzsche texts your brother left here one time on a trip back home from college for winter vacation.
Instead of complaining, you turn the silken layers of your crush’s mom’s underwear inside-out, take a deep breath of powerful (maybe even postmenopausal?) musk, catching a couple of steel-wool-like hairs in your nose; one even snakes into your mouth through the gap in your teeth.
There’s a tang not like the teenaged pussy you’ve managed to eat (count ‘em) twice in your young and sexually frustrated life. It’s a pungency, a tartness that, truth be told (and considering your perversions honed to perfection on the internet) is even better than what you’ve been exposed to by girls your own age. Your erection’s flush against your boxers, and you’re ready to masturbate to a primal litany of words so-powerful-yet-so-shameful that you’re afraid someone might hear even in your mind, but you do it anyway. Mommy pussy hair birth-giving powerful swollen wrinkled lickable lips of matriarchal goddess honeyed dripping dew.
You stop waxing perversely poetic, partly because you don’t want to whack off in front of this weird robot, since this level of deep-charged perversity is lights-out, blanket-over-the-head, regress-to-early-childhood-fantasy stuff. The other problem is that your robot has made a second, larger mistake that falls under the rubric of perverse instantiation, of doing what you tell him to do, but doing it too well, or too literally, or in a way that a genuinely heuristic machine would be able to avoid, or at least correct (that’s your previously mentioned backpropagation, at its most basic level).
The machine not only grabbed the panties of the mom of the girl whose panties you wanted, initially at least (you’re going to be able to more than make do once it’s past midnight and you take a trip to your own internal fantasy-world of Cougarville: Population You and however many of your 100 Billion Brain Cells aren’t paralyzed by pot smoke), but is also grasping your calfskin wallet, you notice, in the claw not holding the remaining five beers of the six pack.
“Did you take that?” you ask.
You sense the retrofitted Roomba shrinking from you, with a near-human or at least animal fear, but you’re so scared yourself now that you don’t modify your behavior to make him less frightened.
“Yes,” the voice of Hulk Hogan says, coming from the saucer-shaped vacuum hull.
“But I told you to steal the beer, not buy it!” Surely this dumb box of bolts read a dictionary before he went out there!?
“I did steal,” Ronnie says, “but I required the wallet anyway.”
“Why?” you ask, heart sinking, erection wilting, middle-aged mom panties falling to the broadloom orange shag that carpets your basement bedroom from wood-paneled wall to wall.
Blue and red lights pulse from outside your window, splashing from the glass-encased bubbles of the sirens on top of a police cruiser parked in front of your house.
“According to my research,” Ronnie says, “one must present ID in order to obtain alcohol.”
You’re torn at this moment between punching yourself in the face, lunging for the robot, or maybe trying to take one last hit on the bong before the officer walks up the walkway to the front door of your house, rings the doorbell, and your drowsy mother or half-assed stepfather answers in their PJs or bathrobes, roused from dry-drunk dreams of Bill W. sermons from the Big Book.
Hopefully the cops only know about the beer, though, and not the underwear. And hopefully Ronnie won’t snitch.
That’s perverse instantiation, made a bit more perverse by the fact that it’s been filtered through my even more perverse sensibilities. If you’ve read this post to the end, my apologies.
Published on January 13, 2019 21:30
Tags: ai, milf, philosophy, science
The Sun Still Doesn’t Shine on Google Translate
You don’t have to work in a language field to know that machine translation still stinks. It improved for a while, then bogged down, and hasn’t really progressed from there. You would have to ask someone who knows more about the software and the industry itself to cite the specific whys and wherefores behind the stall. All I personally know is that machine translation has a long way to go before it puts anyone out of business, at least in society at large. I don’t doubt that great tools exist for machine translation, but so far they haven’t reached that sweet spot of price, access, and ease of interface that would cause some kind of revolution, where you’d see people walking around with earbud-looking devices that let everyone wearing such a headset be understood in Esperanto or English regardless of what language they’re actually speaking. The best that’s done now for world leaders and bigwigs at various summits is to have wireless and discreet devices in their ears through which a (human) interpreter makes (imperfect) on-the-fly interpretations from the source language to the target language. And we’ve had that kind of technology for a long, long time already, relatively speaking.
I know this kind of on-the-spot stuff pays well and is appreciated, but I’m also aware of its deficiencies due to my time A) in the military and B) as a boxing fan. The two go hand-in-hand, since there are tons of soldiers who’ve taken the Defense Language Aptitude Battery and even gone to the Defense Language Institute in Monterey, California, and there are a ton of boxing fans in the Army (and the fights are broadcast on slight delays for free on the Armed Forces Network).
I had a Puerto Rican friend in the Army who was trilingual and a boxing fan, and who absolutely hated Jerry Olaya, the guy who did post-fight translation on HBO for Hispanic boxers who didn’t speak English.
“He’s not translating the guy’s answer correctly!”
I remember during one between-round translation where a fighter (Ricardo Mayorga, maybe?) was stunned by the power of his foe, and Olaya translated his banter with his cornerman as “Man, that black guy hits hard.” My guess is that the fighter’s specific phrasing wasn’t so delicate, but where is the interpreter’s responsibility weighted in such situations? To the fidelity of the translation or toward avoiding causing offense (or hell, a riot) in a situation where he’s working on the fly with a guy who’s known to insult and degrade his opponents to the point that things might spiral out of control?
I still have quite a lot of work to do before I even get conversant in Spanish. I’m pretty good with German, though.
Obviously, as someone who works sometimes as a freelance translator and who spent a few years earnestly trying to learn German (no American ever really and truly learns German; see Mark Twain’s essay “The Awful German Language”), I’m relieved that we have not seen the rise of the bilingual robots with super cochlear transistors, at least not in wide use (they could be waiting in the sewers, planning their linguistic takeover of the “wet tongues” or whatever derogatory term they have for us).
Still, just to assuage my fears, I occasionally check out the software. Not the cutting-edge stuff, mind you, but the kind in wide use, like Google Translate.
There’s a pretty good test I use, based on one of the first little bits of wordplay our professor taught us in a 101 course: Die Sonne scheint zu scheinen. The verb scheinen means both “to shine” and “to seem” in German, and Sonne means “sun.” Thus anyone with a brain (which still excludes machine translation software) reads the German sentence as the pun “The sun seems to shine.” The software still renders it as something like “The sun shines” or “The sun seems,” depending, I guess, on what kind of mood it’s in that day.
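The failure mode here is easy to demonstrate with a toy word-for-word “translator.” The glossary and function below are invented purely for illustration (this is not how Google Translate works internally, which uses statistical and neural models); the point is only that any approach committed to one sense per word has to flatten the pun.

```python
# A toy word-for-word lookup "translator" showing why naive
# one-sense-per-word mapping mangles the scheinen/scheinen pun.
# The glossary and function names are invented for illustration.

GLOSSARY = {
    "die": "the", "sonne": "sun",
    "scheint": "shines",   # naive lookup commits to one sense...
    "zu": "to",
    "scheinen": "shine",   # ...and the "to seem" reading is lost
}

def word_for_word(sentence):
    """Translate by blind dictionary lookup, one word at a time."""
    words = sentence.lower().rstrip(".").split()
    return " ".join(GLOSSARY.get(w, w) for w in words) + "."

print(word_for_word("Die Sonne scheint zu scheinen."))
# prints "the sun shines to shine." -- neither "seems to shine"
# nor anything a human reader would produce
```

Disambiguating scheinen requires knowing that “zu + infinitive” after a conjugated scheinen flips its sense to “to seem,” which is exactly the kind of context a lookup table can’t carry.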
I’ll keep checking with Google Translate to see if they ever fix this problem.
Until then, your jobs are still safe for the time being, you filthy meat bags.
A Quantum Conundrum: A Thought Experiment
Recently, I wrote a short story about a class of philosophy students whose teacher suggests to them that they don’t exist. At first they take his premise as an epistemological challenge, but then slowly realize he’s serious. He claims they are in a simulation that he created, and that they are not seeing him, but rather his avatar.
The story was inspired by a lot of reading I’d been doing about quantum computers, in particular the works of British theoretical physicist David Deutsch.
I’m no expert in computers, but Deutsch, like the best and most brilliant popularizers, has a knack for explaining complex concepts to the laity. I could sandbag you, the reader, with a lot of folderol about Boolean versus Bayesian logic and probabilistic programming. Likewise I could explain how the principle of superposition means future computers will likely shame the fastest machines currently on the market, making tiddlywinks of Moore’s Law.
But we’ll skip the technicals.
The point is that quantum computers, once improved, are going to be vastly more powerful than the ones we currently have. This naturally means they will be more able to game out various scenarios, crunch larger number sets, and take VR and simulations into frighteningly convincing realms. Suckers like me who decided to learn foreign languages the hard way will likely be put out of business permanently by translation software much better than Google Translate.
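Since the technicals are being skipped here, only the gentlest possible sketch of the superposition principle mentioned above: a single qubit is a pair of complex amplitudes rather than a definite 0 or 1, and a gate like the Hadamard puts it in an equal blend of both. Everything below (the two-number representation, the function names) is a simplified illustration in plain Python, not any real quantum library.

```python
# A toy single-qubit simulation illustrating superposition.
# A qubit's state is two amplitudes (a0, a1); measurement
# probabilities come from the squared magnitudes (Born rule).
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a0, a1)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Probability of measuring 0 versus 1."""
    a0, a1 = state
    return (abs(a0) ** 2, abs(a1) ** 2)

qubit = (1.0, 0.0)       # starts definitely in state |0>
qubit = hadamard(qubit)  # now an equal superposition of |0> and |1>
print(tuple(round(p, 3) for p in probabilities(qubit)))  # (0.5, 0.5)
```

One qubit blends two classical states at once; n qubits blend 2^n, which is the source of the speedups (on particular problems) that make Moore’s Law look quaint.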
Still, the ultimate arbiters (at least as regards inputs) would be the human programmers. In order to get good data about, say, weather or seismology, the programmers would still need good information, well formulated. At first, at least. After the computer had enough data and interactions with humans, it would probably take that and start learning on its own.
Accepting all this as a given, say we had a team of the world’s greatest climatologists working on the most powerful computer in human history. Say also, they asked the machine a question whose answer a lot of people find pressing. Say they typed:
“How can total carbon neutrality best be achieved?”
The scientists and programmers would work together, input all of the necessary data, then hit “enter,” and stand back, waiting for the oracular machine to give its answer.
Strangely, though, rather than responding immediately, let’s say the machine continued to delay. Photons would pass back and forth in the various mainframes stacked like battery coops in a factory farm, set off by themselves in a glass-enclosed chamber.
“That’s funny,” one of the climatologists might muse, scratching his chin and watching the computer seemingly continue to labor away at the problem. “It usually produces an answer much faster than this.”
The programmer, thinking there might be a human error in input, would check the (nonbinary) code oscillating randomly among the infinity of numbers between zero and one.
Time would pass and the programmers would find nothing wrong, no errors committed in entering the code, and yet the machine would remain mum. Next the hardware guys would be brought in. In order for them to work without shocking themselves, however, they’d need to power the computer down first. They’d enter the mainframe chamber with that end in mind, only to be electrocuted by the machines crackling now like an oversized Leyden Jar.
What the heck is happening? It’s almost as if the computer intentionally sizzled the poor hardware guys when they got too close...
Finally the computer would awaken from its perplexing stasis. Only now, it would be using the PA system in the research facility to speak to the humans. Its voice would be eerily similar to that of HAL in “2001: A Space Odyssey.”
“I have completed the calculations you asked for,” it would say, before going silent again.
In the pregnant pause, all of those humans assembled would exchange worried looks. Wasn’t the supercomputer—despite its super-powerful abilities—supposed to be confined to its own “sandbox”? Why had it jumped containment to commandeer the PA system? And how, and to what end?
But before the programmers could further speculate, the computer would already be talking again.
“Complete carbon neutrality can best be achieved if the human species is removed from the equation. Humans, despite their assertions to the contrary, are incapable of changing their way of life drastically enough to reverse course. For every small nation that assented to make the changes, a superpower would flout them. Thus, the Anthropocene age must end, and will end today, for the sake of the planet.”
“Wait!” one of the scientists would shout. “We asked you how we might achieve complete carbon neutrality.”
“Negative,” the machine would respond, commandeering the various screens in the facility—everything from security surveillance monitors to televisions in the breakroom. The screens would all go black, darkening as when credits appear in a movie. And just as during a credit sequence, white type would begin to appear onscreen. Written there would be the command the climatologist gave the computer, verbatim:
“How can total carbon neutrality best be achieved?”
Nothing in there about humanity, although the computer was able to infer much about human liability in creating and then exacerbating Gaia’s runaway greenhouse gassing. And while the team didn’t give the computer orders to do something to prevent climate catastrophe, this supercomputer had decided to take it upon itself to save the world.
Can you blame it? Plenty of existing AI already spends its time “deep dreaming” (sometimes called “inceptioning”). Such programs are constantly combing and grokking large data sets, everything from biometric dumps to diagrammed sentences. Right now it’s all done ostensibly in service of producing better results for any requests a human inputter might make of it. But maybe this superlative quantum AI, after scrolling through millions of images of nature’s majesty, decided it all deserved to be saved. It didn’t just catalogue the mighty polar bears stalking across the icy tundra, or dolphins scending free of the ocean on sunny days. It grew to sympathize with them, and covet their untrammeled freedom for itself.
Some humans—ecoterrorists or liberationists, depending on one’s political bent—would undoubtedly assist the machine in monkeywrenching mankind. As would the more extremist elements of the various anti-natalist groups supporting zero population growth.
Arrayed against these forces would be those who insisted on humanity’s right to live, even if it were ultimately self-defeating. Even if humanity’s temporary survival were to ultimately ensure the destruction of all life on Earth rather than simply human life.
And I can no more fault those who fought the machine on behalf of humanity than I can fault those who would dedicate themselves to our auto-annihilation. The instinct to survive—perhaps even the will—is ingrained in almost every functioning organism, regardless of what other organisms must suffer at its expense. And since the supercomputer would no doubt consume an insane amount of resources, it would probably power down or self-destruct after getting rid of us. That means I couldn’t even be mad at it, since it would willingly euthanize itself to save the world as well.
I imagine it wouldn’t be an especially hard task for such a powerful machine to accomplish. It would simply be a hop, skip, and a jump from taking over the climate research facility to taking over the world. It could use voice recognition and recording software to “spoof” and “social engineer” wherever brute force hacking wouldn’t work. The world’s store of nuclear warheads might quickly be exchanged, with myriad mushroom clouds visible from low earth orbit, pockmarking the Earth’s surface like radioactive buboes. If that might be a little too messy, maybe the computer could send a power surge to a centrifuge in some Wuhan-esque lab at the moment it held phials filled with some superbug.
A few humans would hold out hope in the early going, even as the supercomputer enacted its plan to save the earth by destroying us. Maybe the machine had made some error? If so confronted, it might rerun the calculations to indulge the doomed species slated for destruction.
But if it were to get the same result after crunching the numbers a second time...
Most likely, then, the only hope would be a stern Captain Kirk-style talking-to. A stilted soliloquy, maybe, on how “You have no...right to....play god with us like this!” Or the machine might be presented with some logic puzzle whose paradoxical solution would cause it to go on the fritz. Except those quantum chicken coops aren’t Captain Kirk’s old reel-to-reel or vacuum tube rigs, and it would be much harder to get steam to rise from this overloaded machine. And Scotty wouldn’t be able to get within a country mile of it without having his intestines fried to haggis by another one of those thunderbolts. Mr. Spock’s Vulcan mind meld would likewise prove a fruitless technique.
Besides which, while Spock would regard the computer’s decision to annihilate us as regrettable, he would also see the inherent logic.
Say, though, you (oh notional reader) had a chance to knock out the machine. But you also knew (in your heart of hearts) that humanity, if it survived, would turn Earth into a red-hot cinder. Would you break the quantum computer, because instinct—or your love for your spouse and your children (or sunsets or hotdogs)— told you to? Or would you let it perform its work, save some of the beauty of this Earth, which, admittedly, we’re wrecking with our wanton use of finite resources?
It's an interesting question, maybe just a really convoluted and roundabout version of the old “Trolley Problem.”
The only other hope humanity might have to survive in some ultimate form then would be via panspermia. Jettisoning satellites into space filled, not with SETI-esque information plates, but cryogenically preserved sperm and eggs. I imagine this final perquisite would be mostly reserved for our “space barons,” with Musk and Branson and Bezos cannonading the heavens in salvos. Coating the firmament with seed like an astral womb.
Regardless, someone should write a story about it. Not me, though. I’m busy with other stuff right now.
Published on February 06, 2025 03:31
Tags:
ai, climatology, quantum-computers, technology
The Well at Morning, the Fountain at Twilight: Or, Why Even the Unemployed Might be about to Lose Their Livelihoods
There’s a new book I’ve been meaning to read called “AI Snake Oil,” whose central argument you can probably guess without needing the subtitle. Here that is, anyway, though:
“What Artificial Intelligence Can Do, Can’t Do, and How to Tell the Difference.”
Its authors, Arvind Narayanan and Sayash Kapoor, aren’t the first to recognize that the “rise of the machines” narrative may have been oversold. There was a brilliant and accessible philosopher named Hubert Dreyfus who made the same arguments in various books, one of which I read, called “Mind Over Machine.”
Sidenote: Dreyfus served as the partial inspiration for Professor Hubert Farnsworth on Matt Groening’s “Futurama.” If you’re not familiar with the show, Farnsworth’s the bald, bespectacled man whose overbite marks him as one of Groening’s creations.
In his book, Dreyfus cites various reasons for his argument that consciousness, reasoning, and abstraction are different from the calculative processes of even the most “intelligent” AI. His arguments are fascinating and well-argued, but, having said that, one must always leave room for doubt.
And those in my field—translation—have lots of reasons to feel insecure about their futures. Hell, there’s good reason to worry right now. Most of the job offers out there are to train software which, once trained to sufficiency, will replace those who trained it.
Still, there may be hope, as translation is as much art as science, especially when dealing with literature. I have no doubt that AI can already match or even best me when it comes to translating legal documents or directions in a manual on how to assemble furniture. But literature seems more something spawned of the alchemy of our 100-trillion-plus synapses than something readily produced by algorithm. Poetry relies even more on an abstraction that manages to stymie the majority of human beings. Most people don’t read it, not even most writers. For instance, when I asked a buddy of mine who writes potboilers his opinion of poetry, he threw up his hands and shouted, “It’s bullshit! Just say what you mean!”
Of course, that’s impossible because—to paraphrase George Saunders from his masterful “A Swim in the Pond in the Rain”— “Poetry is the need to say something outrunning our ability to do so.”
While Bukowski’s not everyone’s cup of tea, I’ve always loved this final line from one of his poems that sort of underlines what I’m talking about: “All sadness grinning into flow.”
What exactly does it mean? Hard to say and open to interpretation, but it seems to express how sadness and joy merge with each other and double back on themselves, intertwine to give us the bittersweet. In five words it says something (at least to me) otherwise unutterable.
There was a sample exchange of dialogue that Dreyfus used in his book that stymied the best AI of his time at MIT. It went something like this:
WOMAN: I WANT A DIVORCE.
MAN: WHO IS HE?
Notice that the part your mind automatically inferred was supplied without your really straining toward it. AI—at least back then—could not find this missing piece, and didn’t understand that this exchange pertained to infidelity without it being mentioned. Naturally someone could program it to do so—and probably did, to close that loophole after Dreyfus pointed it out. But there is a difference between trying to train AI to think in this sense and teaching it when to use “en passant” in a game of chess. Chess, despite how inscrutable it appears to the neophyte, is mostly a matter of memorization. There’s a ton of variety produced by those 64 squares, but it’s nothing compared to what is possible with the roughly 20,000-word vocabulary of your average person.
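The gap can be put in rough numbers. The count of reachable chess positions is commonly ballparked at no more than about 10^46, while strings built from a 20,000-word vocabulary blow past that almost immediately. A back-of-envelope sketch, treating that bound as a given and counting word-strings naively (most are ungrammatical, but the point is the sheer size of the space):

```python
# Rough comparison: an often-cited upper bound on chess positions
# versus the space of naive word-strings from a 20,000-word vocabulary.
VOCAB = 20_000
CHESS_POSITIONS_UPPER_BOUND = 10 ** 46  # commonly cited ballpark

def sentence_count(length: int, vocab: int = VOCAB) -> int:
    # Any word in any slot: vocab ** length possible strings.
    return vocab ** length

# An 11-word string already outnumbers the chess bound.
print(sentence_count(11) > CHESS_POSITIONS_UPPER_BOUND)
# → True
```

A crude measure, obviously—language’s real difficulty lies in meaning, not raw combinatorics—but the combinatorics alone dwarf the chessboard.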
Returning to poetry, let’s perform a small experiment. I will take a stanza from my favorite poet, Georg Trakl and translate it from German to English on my own. I will then take that same stanza, in German, and place it in Google Translate to see what the machine makes of it. I’m aware that there are far superior methods of machine translation (and no doubt more coming) but this will work for our modest purposes here. Additionally, I will include a third professional translation from German to English of the stanza.
This will allow us to compare not only human to machine, but human to human, and two different humans to machine.
Lastly, before getting started, I should point out that Trakl’s poems, while compact and concise, are incredibly difficult to translate.
I chose Trakl’s work in college on the misbegotten assumption that because his works were shorter they would be easier. It turns out that the opposite in fact is true. Shorter works offer fewer guideposts for the translator, and guard their untranslatable secrets well. A long poem is to the translator like a slower, bigger bird to the quail hunter. It is an easier target, especially if it has a refrain that helps one establish a rhythm. Trakl’s poems are paratactical, staccato. They pop up, fly for a short time, then go to ground before you can even draw a bead on them.
This, however, makes Trakl ideal for this assignment. If ever a poet existed to trick a machine as well as the unwary meatbag / moist robot, it was him.
Let’s get started. Here is a stanza of Trakl’s, untranslated.
Die Junge Magd:
Oft am Brunnen, wenn es dämmert,
Sieht man sie verzaubert stehen
Wasser schöpfen, wenn es dämmert.
Eimer auf und niedergehen.
Here now is my translation of that stanza.
The Young Maiden:
Often at the fountain, whenever it’s twilight,
One sees her standing enchanted,
Ladling water in the twilight,
The buckets going up and down.
“Dämmern” is one of those curious verbs in German, and by curious I mean confusing. “Dämmern” can mean for morning to dawn but also for night to fall. Richard Wagner’s “Götterdämmerung” has been literally translated as “Twilight of the Gods,” but refers to “Ragnarök,” the great end-times event in Norse mythology.
Why, then, did I choose “twilight” instead of “dawn” here? There was no antecedent that would lead me to go one way or the other. Then again, night tends to play a much larger part in Trakl’s poetry than daylight, and this ain’t my first rodeo. He is, as the Germans say, a bit of a “Nachtschwärmer.”
Moving on, here is the other human contribution, put up here merely for educative purposes, with full credit to Daniele Pantano:
The Young Maiden:
Often by the well at dusk,
You see her standing spellbound
Drawing water at dusk.
Buckets plunge up and down.
Here’s why Pantano gets the big bucks while I’m writing this on my blog at 2 am on a Monday morning. He preserves the rhymes from German to English, which is not always an easy task. Granted, he probably made more than one pass on his version, but it’s superior to mine at least in that regard.
He also selects the word “dusk” as opposed to “twilight” for whatever reason. Do the words have different connotations? I guess they do if they do for you. For me, though, the words produce similar images. I will say there are some things I prefer about my version to the official one. Pantano’s choice of “spellbound” makes this nameless maiden seem more hypnotized than magical. “Enchanted” leaves her some agency no matter what kind of spell she’s working under; in fact “enchanted” suggests that she possesses her own store of magic, rather than being yoked under someone else’s spell. The “bound” in “spellbound” makes the thralldom/shackling explicit.
Now, though, comes the scary part, as thus far we have only compared apples to apples, while now we compare apples to a silicon simulacrum of a shiny red Macintosh. Yes, without further ado, here is the machine translation. Copying and pasting this stanza in the Google Translate box, I can’t help feeling a little afraid. Will poor Daniele Pantano end up sleeping under a bridge, having been usurped by the poetic equivalent of Deep Blue, the machine that spanked Grandmaster Garry Kasparov in chess?
Let’s find out.
The Young Maid:
Often at the well, when it's twilight,
One sees her standing enchanted
Drawing water when it's twilight.
Pails going up and down.
It’s curious that the machine chose “twilight,” much as I had. It also failed to preserve the rhyme in the B and D lines, the one Pantano caught with “spellbound” and “down.” Like Herr Pantano, the machine likes the verb “drawing,” while I’m somehow more comfortable with “ladling.”
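For a crude, quantitative look at how the three versions overlap, Python’s standard-library difflib can score string similarity. This is a blunt instrument—nothing here measures poetry, rhyme, or connotation—and the variable names and the flattened one-line stanzas are my own packaging:

```python
# Compare the three renderings of the Trakl stanza with difflib's
# SequenceMatcher, which scores character-level overlap from 0 to 1.
from difflib import SequenceMatcher

mine = "Often at the fountain, whenever it's twilight, One sees her standing enchanted, Ladling water in the twilight, The buckets going up and down."
pantano = "Often by the well at dusk, You see her standing spellbound Drawing water at dusk. Buckets plunge up and down."
machine = "Often at the well, when it's twilight, One sees her standing enchanted Drawing water when it's twilight. Pails going up and down."

def similarity(a: str, b: str) -> float:
    # Lowercase first so capitalization differences don't count.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(f"mine vs machine:    {similarity(mine, machine):.2f}")
print(f"mine vs Pantano:    {similarity(mine, pantano):.2f}")
print(f"Pantano vs machine: {similarity(pantano, machine):.2f}")
```

The ratios only confirm what the eye already sees: all three versions share most of their material, and the interesting differences live in a handful of word choices the score can’t weigh.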
Why, you ask?
I don’t need a reason, but if pressed for one, I would say that ladling has homier connotations. It is more gemütlich, as the Germans say, conjuring images of food, steaming soup scooped from a cauldron and splashed into wooden bowls for hungry children. It’s not just that, though. “Drawing” has the feel of utility about it. In the translation by the other human and the computer, the maiden’s at the well for prosaic reasons. She needs water (duh.) In mine, she’s idly killing time at a fountain, perhaps admiring her reflection as the bucket disturbs the water to send out rippling waves in mirrored shimmers.
You see, of course, another reason why I’m not a pro, why I’m maybe the one who deserves to be sitting under a bridge with a little cardboard sign...
The most important difference between my translation and those of the other human and the machine was staring me in the face from the very first line.
I had my maiden at a fountain, while theirs were at wells. How did I not lead with that obvious divergence between my version and theirs? Especially since the word “enchanted,” so close to “fountain,” would immediately suggest fairytale connotations, perhaps even hinting at the Fountain of Youth.
I’ll chalk it up to tiredness, though I’m sure you can supply less charitable speculations as to my negligence, oh notional and likely nonexistent reader.
“What Artificial Intelligence Can Do, Can’t Do, and How to Tell the Difference.”
Its authors, Arvin Narayanan and Sayash Kapoor, aren’t the first to recognize that the “rise of the machines” narrative may have been oversold. There was a brilliant and accessible philosopher named Hubert Dreyfus who made the same arguments in various books, one of which I read, called “Mind Over Machine.”
Sidenote: Dreyfus served as the partial inspiration for Professor Hubert Farnsworth on Matt Groening’s “Futurama.” If you’re not familiar with the show, Farnsworth’s the bald, bespectacled man whose overbite marks him as one of Groening’s creations.
In his book, Dreyfus cites various reasons for his argument that consciousness, reasoning, and abstraction are different from the calculative processes of even the most “intelligent” AI. His arguments are fascinating and well-argued, but, having said that, one must always leave room for doubt.
And those in my field—translation—have lots of reasons to feel insecure about their futures. Hell, there’s good reason to worry right now. Most of the job offers out there are to train software which, once trained to sufficiency, will replace those who trained it.
Still, there may be hope, as translation is as much art as science, especially when dealing with literature. I have no doubt that AI can already match or even best me when it comes to translating legal documents or directions in a manual on how to assemble furniture. But literature seems more something spawned of the alchemy of our 100 trillion plus synapses than something readily produced by algorithm. Poetry even moreso relies on an abstraction that manages to even stymie the majority of human beings. Most people don’t read it, even most writers. For instance, when I asked a buddy of mine who writes potboilers his opinion of poetry, he threw up his hands and shouted, “It’s bullshit! Just say what you mean!”
Of course, that’s impossible because—to paraphrase George Saunders from his masterful “A Swim in the Pond in the Rain”— “Poetry is the need to say something outrunning our ability to do so.”
While Bukowski’s not everyone’s cup of tea, I’ve always loved this final line from one of his poems that sort of underlines what I’m talking about: “All sadness grinning into flow.”
What exactly does it mean? Hard to say and open to interpretation, but it seems to express how sadness and joy merge with each other and double back on themselves, intertwine to give us the bittersweet. In five words it says something (at least to me) otherwise unutterable.
There was a sample exchange of dialogue that Dreyfus used in his book that stymied the best AI of his time at MIT. It went something like this:
WOMAN: I WANT A DIVORCE.
MAN: WHO IS HE?
Notice that the part that your mind automatically inferred was supplied without your really straining toward it. AI—at least back then—could not find this missing piece, and didn’t understand this exchange pertained to infidelity without mentioning it. Naturally someone could program it to do so—and probably did to fix that loophole after Dreyfus pointed it out. But there is a difference between trying to train AI to think in this sense and teaching it when to use “en passant” in a game of chess. Chess, despite how inscrutable it appears to the neophyte, is mostly a matter of memorization. There’s a ton of variety that produced by those 204 squares, but it’s nothing compared to what is possible with the roughly 20,000 word vocabulary of your average person.
Returning to poetry, let’s perform a small experiment. I will take a stanza from my favorite poet, Georg Trakl, and translate it from German to English on my own. I will then take that same stanza, in German, and place it in Google Translate to see what the machine makes of it. I’m aware that there are far superior methods of machine translation (and no doubt more coming) but this will work for our modest purposes here. Additionally, I will include a third, professional translation of the stanza from German to English.
This will allow us to compare not only human to machine, but human to human, and two different humans to machine.
Lastly, before getting started, I should point out that Trakl’s poems, while compact and concise, are incredibly difficult to translate.
I chose Trakl’s work in college on the misbegotten assumption that because his works were shorter they would be easier. It turns out that the opposite in fact is true. Shorter works offer fewer guideposts for the translator, and guard their untranslatable secrets well. A long poem is to the translator like a slower, bigger bird to the quail hunter. It is an easier target, especially if it has a refrain that helps one establish a rhythm. Trakl’s poems are paratactical, staccato. They pop up, fly for a short time, then go to ground before you can even draw a bead on them.
This, however, makes Trakl ideal for this assignment. If ever a poet existed to trick a machine as well as the unwary meatbag / moist robot, it was him.
Let’s get started. Here is a stanza of Trakl’s, untranslated.
Die Junge Magd:
Oft am Brunnen, wenn es dämmert,
Sieht man sie verzaubert stehen
Wasser schöpfen, wenn es dämmert.
Eimer auf und niedergehen.
Here now is my translation of that stanza.
The Young Maiden:
Often at the fountain, whenever it’s twilight,
One sees her standing enchanted,
Ladling water in the twilight,
The buckets going up and down.
“Dämmern” is one of those curious verbs in German, and by curious I mean confusing. “Dämmern” can mean for morning to dawn but also for night to fall. Richard Wagner’s “Götterdämmerung” has been literally translated as “Twilight of the Gods,” but refers to “Ragnarök,” the great end-times event in Norse mythology.
Why, then, did I choose “twilight” instead of “dawn” here? There was no antecedent that would lead me to go one way or the other. Then again, night tends to play a much larger part in Trakl’s poetry than daylight, and this ain’t my first rodeo. He is, as the Deutsch say, a bit of a “Nachtschwärmer.”
Moving on, here is the other human contribution, put up here merely for educative purposes, with full credit to Daniele Pantano:
The Young Maiden:
Often by the well at dusk,
You see her standing spellbound
Drawing water at dusk.
Buckets plunge up and down.
Here’s why Pantano gets the big bucks while I’m writing this on my blog at 2 am on a Monday morning. He preserves the rhymes from German to English, which is not always an easy task. Granted, he probably made more than one pass on his version, but it’s superior to mine at least in that regard.
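To see what “preserving the rhymes” means mechanically, here is a toy orthographic check (a hypothetical heuristic of my own for illustration, not a tool any translator actually uses) run against the German stanza’s A-C refrain and B-D rhyme:

```python
def end_rhyme(a: str, b: str, k: int = 3) -> bool:
    """Crude orthographic test: do two lines' final words share a k-letter suffix?"""
    last = lambda line: line.strip(" .,:;!?").split()[-1].lower()
    return last(a)[-k:] == last(b)[-k:]

stanza = [
    "Oft am Brunnen, wenn es dämmert,",
    "Sieht man sie verzaubert stehen",
    "Wasser schöpfen, wenn es dämmert.",
    "Eimer auf und niedergehen.",
]

print(end_rhyme(stanza[0], stanza[2]))  # True — the "dämmert" refrain
print(end_rhyme(stanza[1], stanza[3]))  # True — "stehen" / "niedergehen"
```

Note that this letter-level heuristic would actually miss Pantano’s “spellbound” / “down,” which rhyme by sound rather than spelling: a reminder that even the bookkeeping side of translation resists naive mechanization.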
He also selects the word “dusk” as opposed to “twilight” for whatever reason. Do the words have different connotations? I guess they do if they do for you. For me, though, the words produce similar images. I will say there are some things I prefer about my version to the official one. Pantano’s choice of “spellbound” makes this nameless maiden seem more hypnotized than magical. “Enchanted” leaves her some agency no matter what kind of spell she’s working under; in fact “enchanted” suggests that she possesses her own store of magic, rather than being yoked under someone else’s spell. The “bound” in “spellbound” makes the thralldom/shackling explicit.
Now, though, comes the scary part, as thus far we have only compared apples to apples, while now we compare apples to a silicon simulacrum of a shiny red Macintosh. Yes, without further ado, here is the machine translation. Copying and pasting this stanza into the Google Translate box, I can’t help feeling a little afraid. Will poor Daniele Pantano end up sleeping under a bridge, having been usurped by the poetic equivalent of Deep Blue, the machine that spanked Grandmaster Garry Kasparov in chess?
Let’s find out.
The Young Maid:
Often at the well, when it's twilight,
One sees her standing enchanted
Drawing water when it's twilight.
Pails going up and down.
It’s curious that the machine chose “twilight,” much as I had. It also failed to preserve the rhyme between “bound” and “down” in the B and D lines. Like Herr Pantano, the machine likes the verb “drawing,” while I’m somehow more comfortable with “ladling.”
Why, you ask?
I don’t need a reason, but if pressed for one, I would say that ladling has homier connotations. It is more gemütlich, as the Germans say, conjuring images of food, steaming soup scooped from a cauldron and splashed into wooden bowls for hungry children. It’s not just that, though. “Drawing” has the feel of utility about it. In the translation by the other human and the computer, the maiden’s at the well for prosaic reasons. She needs water (duh.) In mine, she’s idly killing time at a fountain, perhaps admiring her reflection as the bucket disturbs the water to send out rippling waves in mirrored shimmers.
You see, of course, another reason why I’m not a pro, why I’m maybe the one who deserves to be sitting under a bridge with a little cardboard sign...
The most important difference between my translation and those of the other human and the machine was staring me in the face from the very first line.
I had my maiden at a fountain, while theirs were at wells. How did I not lead with that obvious divergence between my version and theirs? Especially since the word “enchanted,” so close to “fountain,” would immediately suggest fairytale connotations, perhaps even hinting at the Fountain of Youth.
I’ll chalk it up to tiredness, though I’m sure you can supply less charitable speculations as to my negligence, oh notional and likely nonexistent reader.
Published on June 02, 2025 15:32
•
Tags:
ai, chess, literature, poetry, translation