The Rise Of The LitBots

Liam O’Brien entertainingly surveys the history of computer-generated literature:


Vonnegut made up a computer that wrote love poems in 1950 – and the Brits did the same thing, but IRL. The reason you don’t have a bunch of computer-written books on your shelf is that they’re traditionally looked on as novelties. A few years back a Russian computer wrote a Tolstoy homage in the style of Murakami, but unless McSweeney’s has hired this program on the sly, this is the first and last we’ve heard of it. And this isn’t new – over thirty years ago, a program called “Racter” allegedly composed an entire book called The Policeman’s Beard Is Half Constructed. A decade later, another programmer and his creation composed an apparently-not-bad Jacqueline Susann knockoff. Seven years after that, someone managed to create the automated equivalent of a tiresome MFA student.


Zooming out, O’Brien suggests contemporary novelists have little to fear:



[A]lgorithms are fairly good at making and collating content, but not literature. The Associated Press and Forbes use bots to author articles; Penguin Random House doesn’t have the same option. (Though I do have a very convincing theory that James Michener was in fact a clockwork automaton.) Which brings us to the story of Philip Parker, who created a program that’s effectively allowed him to “write” over 100,000 books – granted, they’re books that nobody would ever buy: esoteric (and expensive) market-research and industry-study titles like The 2007-2012 World Outlook for Wood Toilet Seats. The program is a content compiler rather than a composer – though Parker claims to be able to write poetry and fiction with it – and Parker has pitched it as a crucial element in getting textbooks and other kinds of educational content to poor areas, all because it cuts out the author.
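
Parker’s system is proprietary, so the following is purely a hypothetical sketch of what “compiler, not composer” means in practice: rows of tabular data get poured into fixed prose templates, so one program can “write” a different book per row. Every product name and figure below is invented for illustration.

# A minimal, hypothetical sketch of template-driven "content compilation"
# (Python). This is NOT Parker's actual method, which has never been
# published; it only illustrates the general technique.

TEMPLATE = (
    "The {start}-{end} World Outlook for {product}\n\n"
    "In {start}, the estimated world market for {product} was "
    "${demand:,} million. Demand is projected to reach "
    "${projected:,} million by {end}."
)

# Invented market data; a real compiler would pull thousands of rows
# from trade databases and emit one "book" per product category.
products = [
    {"product": "wood toilet seats", "start": 2007, "end": 2012,
     "demand": 1140, "projected": 1310},
    {"product": "prefabricated flagpoles", "start": 2007, "end": 2012,
     "demand": 85, "projected": 92},
]

# Each row yields a superficially complete "book" with no composition,
# only substitution.
for row in products:
    print(TEMPLATE.format(**row))
    print("-" * 60)

Nothing here is composed, only substituted – which is why the approach scales to 100,000 titles but not, so far, to fiction.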


But in a review of Peter Swirski’s From Literature to Biterature, Jennifer Howard notes that not everyone is confident that humans have a definite literary advantage:


Inspired in part by the work of Stanisław Lem, Swirski analyses the prospects for “computhors”, as he calls these imagined but (he believes) soon-to-be-real machine entities. His focus zigzags across the fields of artificial intelligence, computing history, cognitive science, narrative theory, the evolution of men and machines, and post-Turing attempts to figure out how to identify computer intelligence if (Swirski would say when) it arises. “Underlying my explorations is the premise that, at a certain point in the already foreseeable future, computers will be able to create works of literature in and of themselves”, he writes.


The trick will be recognizing that we have arrived at that point: “There will never be a moment of epiphany, a discontinuous rupture, a sudden awakening” – no “equivalent of a burning bush”, Swirski writes. It might not even matter whether humans will be able to recognize true autonomous intelligence in a machine. More important is whether we are ready to believe it’s possible. … The “computhors” themselves may well not care whether we fully appreciate what they create, Swirski speculates. They’ll be too busy doing their own thing.



