Andrew Fish's Blog

October 26, 2013

Art for Art's Sake

I'm a big fan of theatre. Not the West End Lloyd Webber musicals that people think are a key part of the London experience, but plays, whether it's the comedies of writers like Noel Coward, Oscar Wilde or Ray Cooney, the comic dramas of Alan Ayckbourn and John Godber, or classic works by writers like Shakespeare, Beckett or Sheridan. I've even written one play myself. I travel for theatre too: although we live within an hour's drive of more than two dozen venues, we will sometimes venture further afield for something we really want to see. After all, a couple of tickets for an amateur performance and an overnight stop at a budget hotel don't cost any more than two tickets at a regional theatre - and spending two nights in Liverpool to see Graham Linehan's adaptation of The Ladykillers was miles cheaper than venturing into London when it transferred to the West End, even without the accommodation costs that would have entailed.

When at the theatre, of course, we spend a little on drinks - and we make a point of buying a programme for our sizeable collection. Contrary to what I heard in one theatre a few weeks back, a programme isn't just a fiver spent on a load of adverts: the programme at the Stephen Joseph for our Ayckbourn day, for example, cost £3.50, covered all of the day's plays and was packed with articles about the writing and biographies of the actors - all good stuff to read whilst you're waiting for the performance to begin.

Something that seems to be turning up in programmes a lot lately, however, is an advert for a campaign called "My Theatre Matters." Theatres up and down the country are telling us that the government is threatening their funding. One theatre even claimed that ticket prices would have to more than double for it to break even if the funding were not retained. Now, set aside the fact that theatre has long had something of a left-wing bias and is therefore no friend of the current government; ignore the fact that almost any group which receives public money will fight tooth and claw to keep every penny regardless of whether or not it's needed. Ignore even that we're living through a tough readjustment in which it's finally being realised that we can't, as a country, keep spending money we don't earn. Personally, I'm very uncomfortable with the idea that one of my pleasures might be subsidised for no better reason than having an effective pressure group, so just what is the case for public funding of theatre?

The classic case, often heard from devotees of venues like the Royal Opera House, Covent Garden, is that it preserves and nurtures our culture for the benefit of the people, that it makes it accessible to the masses. Covent Garden, of course, spends a lot of its time on those bastions of British culture, Verdi and Wagner, and charges prices - subsidy or no subsidy - that ensure the masses get nowhere near. That's not to say there's something wrong with opera, but if you're going to play the British culture card it would have more credibility if you actually played British culture. There's more of a case to be made in the West End, perhaps, where Andrew Lloyd Webber is at least British (although a lot of people would question whether his work is culture), but I'm not sure that theatres charging £50-£75 a ticket are really struggling to cope without subsidy, much less that they can claim accessibility to justify public largesse.

So what if we look outside London? Despite the prejudices of those within the M25 who think the rest of the country is a barren wasteland, the UK has a great deal of regional theatre and a great tradition of little theatres. From where I live in Nottinghamshire, I can reach more than two dozen theatres within an hour's drive - there are five in Nottingham alone. Regional theatre has a real claim to nurture talent. Theatres from Nottingham Arts to the Liverpool Everyman proudly display pictures of actors who rose from obscurity treading their boards. With the volume of talent emerging from these places, you'd think they'd be awash with money. But actors are a fickle bunch: offer them a choice between playing to packed houses across the UK or spending a few weeks in California making a Hollywood movie and most will leave before you can say "tax exile". If they come back, more often than not it's as a piece of headline-grabbing casting in one of those West End plays at £75 a ticket. Regional theatre therefore creates these stage leviathans but receives little benefit beyond a signed photograph, and with so much of their success being monetised away from these shores, the economic benefit of nurturing that talent is questionable. That's not to say that every artiste is quite so mercenary, but it's far from an uncommon trait.

The strongest argument in favour of publicly-funded theatre is that it draws money into an area. It's a reasonable surmise: there's no doubt I wouldn't spend half as much time in places like Hull, Liverpool or Derby if it weren't for their excellent theatres and I seriously doubt I'm unique in that regard. In my birth town of Chatham, Kent, the local people also recognise this and have spent years lobbying for the reopening of the long-abandoned Theatre Royal.

To explain: for various reasons, Chatham has long been in decline. After the dockyard closed in 1984, the local economy became hollowed out. Workers capable of earning reasonable salaries largely became commuters to London. Those who couldn't get a job which would justify the cost of commuting worked in the local service economy. But bad road planning and the opening of first Thurrock Lakeside and then Bluewater made Chatham's retail centres less attractive. The money earned by commuting locals was diverted out of the area and the supply of jobs began to dry up. The local council hit on the idea of building a night-time economy, trying to get people to come into the town for the pubs and restaurants. But they wouldn't allow the restoration of the Theatre Royal. It couldn't, they argued, compete with the West End. The trouble is that if they don't at least try, then they have no hope at all. If you live out on the fringes of the conurbation, getting into the centre of Chatham is painful. You wouldn't do it just for a meal, but you might for a night out with theatre and a meal. Given no choice, people elect to go to London both to dine and be entertained, or even to Dartford, which seems able to run a major theatre despite being even closer to London than Chatham.

And that's not to say that the theatre necessarily needs to be subsidised. Much of the problem in Chatham is not that the council won't put money in, but that it won't grant permission. And look at that other great bastion of public entertainment, the cinema, which doesn't receive public subsidy at all.

Cinema, in fact, is an interesting comparison. As a building, a cinema's running costs are much the same as a theatre's: it is, after all, simply a stage with a box office and catering facilities. It doesn't have to pay performers night after night, but then it doesn't charge as much for its tickets as a typical regional theatre either. Films are also extremely expensive to make - the budgets of epics like Cleopatra or Titanic would dwarf the cost of a season of Alan Ayckbourn. A single big-screen actor may be paid enough money to rebuild Chatham's Theatre Royal twice over - and it wasn't until the advent of home video in the late 1970s that costs could be recouped beyond the silver screen. Clearly, there must be something to be learned.

And I think some theatres are learning it. This year, the RSC is doing something unusual - a live link-up. Whilst the stage in Stratford entertains an audience of over a thousand people with David Tennant's performance as Richard II, the production will be filmed and transmitted to theatres and cinemas across the country, with tickets at those venues significantly cheaper than those in Stratford itself. It's something the National Theatre has been doing for a while, with some success. Why can't this be done for other productions? Ayckbourn's Arrivals and Departures will tour next year, for example, but will likely only go to a dozen venues. With modern technology, it could be made accessible to hundreds more for the cost of a cinema ticket. It would be a form of democratic subsidy.

Home video, too, might contain a key for theatre. Over the years I have seen hundreds of great theatrical productions, from the one-off production of Noel Coward's Private Lives celebrating his centenary at the South Bank, to the excellent production of The Norman Conquests in Liverpool. Some, like Howard Brenton's incredible civil war play, 55 Days, will likely never be performed again. Why shouldn't I be able to give the theatre more money in exchange for a DVD or Blu-ray of the play? It's not as if it would stop me going to a future production: there is, for example, a complete BBC box set of Shakespeare's plays, but the RSC seems able to pack houses with new productions of those all the time. And stage adaptations of television programmes like Dad's Army, Inspector Morse and Columbo have played to appreciative audiences all over the country. People have an enduring fascination with seeing familiar faces in person, whether that's a stage production of Mrs Brown's Boys or Disney on Ice. An audience who bought the disc of David Tennant as Hamlet might pay good money to see Matt Smith regenerate his way into the same role. And even those for whom the sofa is just too comfy, those who would buy the disc instead of going to see the play - even they would be putting money into the production, much as they already do for stand-up comedians or musicians. Just ask Jimmy Carr's accountant what that means.

And theatre needs to do more to promote itself. Once a year, most theatres stage a pantomime. It's not my kind of thing, but these are so successful that over the course of a month they will make some theatres half their money. It's not simply that they're for kids or that they've got famous names off the telly, it's that they're well promoted and people know what to expect. Andrew Lloyd Webber has used television to his advantage too - the endless talent shows to find stars for revivals of Oliver! and The Sound of Music have kept his theatres busy for years. But people will enjoy more than just panto and musicals if they try. Theatre needs to shake off the idea that the rest of its offering is somehow less accessible, to grab that panto audience and show them that there's more to the stage than Widow Twankey. And the BBC could help: over the years, both the BBC and ITV have benefitted from theatre as a recruiting ground for new actors. At one point, both broadcasters gave something back, broadcasting monthly or even weekly plays and showing audiences what theatre was all about. They should re-establish this tradition. BBC Four could even dust off the archives and show some of those older recordings, much as it did a couple of years ago with Ayckbourn's Season's Greetings. If people could see how good theatre is, how it does something different to television but with equal impact, they might actually go and see it.

In the end theatre isn't, or needn't be, an elite pursuit. Yes, there will be plays that challenge their audiences, but television does that too. Modern audiences can be brighter than people give them credit for. And television does it by cross-funding, using money generated by more accessible shows to fund the artier ones. Theatre should do the same, using all the trappings of modern technology to boost its audiences and open new revenue streams. Theatre does matter, but that doesn't mean it should rest on its laurels and pray for subsidy.

October 6, 2013

Crossing the Line

Recently the internet has been alive with discussion about the next Superman film. Not simply because of casting or the other personnel decisions which seem so important to film obsessives, but because the next Superman film will also involve Batman - it will be what comic book types call a crossover.

Crossovers are a device which arose in comic books as a way to boost sales: Superman has long struggled to retain an audience and one way the writers attempted to address this was to find a story where he was pitted against Batman, the logic being that people's curiosity would boost sales, possibly extending beyond the audience either character commanded alone. Allegedly it worked, for a while, but like most such strategies it was little more than a short-term fix.

But on the big screen, the superhero movie is definitely in the ascendant. Under Disney's watchful eye, Marvel's stable of characters has been carefully nurtured, culminating in what must be the biggest superhero event movie of recent years - The Avengers. Seeing its success, Warner has clearly decided to use the same strategy to boost its own movie, bringing together Superman and Batman on the big screen for the first time.

And will it succeed? Not having followed the mythology of the comic books, I don't know. It's difficult to see how a film containing two such unevenly matched characters could work, but what's interesting is that it's considered possible at all. Because being able to make two series come together in one story is about vastly more than just having the right script.

What if, for example, it had happened in an earlier age? What if there had been an attempt to bring a Christopher Reeve-style Superman into a Tim Burton-directed Batman movie? With the comic-book feel of Burton's Batman and the somewhat more grounded feel of Superman, unevenly matched heroes would have been the least of the problems. The reason that it has become possible now is that when Christopher Nolan rebooted Batman, he took it away from the camp and colourful world it had come to inhabit and made it feel darker and more believable. It wasn't quite something that could happen in the real world, but it required less suspension of disbelief than other incarnations in recent memory. With the trilogy receiving critical acclaim, Superman was rebooted with the same sensibility - and with Nolan executive producing - making the two worlds close enough to mesh.

And it's not just about direction and design either - no matter how real-world Nolan's Batman was, you wouldn't expect him to turn up on Casualty. Films and television programmes operate in meta-worlds, defined by the liberties they take with reality, and it is the credibility of bringing those meta-worlds together that determines whether a crossover is possible. You can see what happens when it goes wrong: not long after Doctor Who was originally cancelled, somebody decided to put a definitive nail in its coffin with a Children in Need special. Shot using a primitive form of 3D which only worked with the camera in continual motion, and using cheap computer graphics to replicate the faces of the early Doctors, the show brought various incarnations of the Doctor into a crossover with the cast of EastEnders. It was appalling. Sometimes things just shouldn't be brought together - and I don't just mean the Doctor and John Nathan-Turner.

Beyond television, despite the lack of look-and-feel issues, it's easy to see that books also have limitations in crossing over. Despite the current craze for mash-ups involving zombies and Regency drama, one wouldn't expect Sherlock Holmes to turn up in Oz, or Arthur Dent to materialise on the deck of HMS Bounty, without jettisoning the values of at least one of the donor series, potentially both.

But sometimes it's more subtle than that. Ask why Alice, of Wonderland fame, couldn't logically end up in Narnia and you can see that there's something not quite right with the mix, but it's not quite as obviously wrong as Doctor Who and EastEnders.

Which brings us round to my own writing. In the New Year I will be publishing a new historical comedy - of which more later. This time it's not a time-travel story, but is entirely fixed in its period - in this case the English Civil War. Whilst Erasmus and Bandwagon are clearly worlds apart, on the surface the new book could share the same universe as Erasmus. It would be easy to believe that there could be at least a cameo for our time-travelling schoolteacher.

But that's on the surface. On the surface, both books are rooted in real historical settings, although the first Erasmus uses legend rather than real history; both feature a supporting cast of comic historical types. Only one book involves time-travel, of course, but that's hardly a problem unless the new book makes it clear that time-travel is impossible. No, what makes the difference between the two is something almost subliminal - it's a sort of emotional framing. And oddly, what this means is that the book that involves time-travel is more real than the one that doesn't.

To explain: the new book - of which more another day - is broadly comic. The easiest way to describe it is a Wodehouse comedy set in the Seventeenth Century. The characters are drawn with a broad brush and would probably not have seemed out of place in a Restoration comedy. In Erasmus, meanwhile, whilst there are comic characters, the central characters are picked out less theatrically. Erasmus has a complex span of emotions intended to make the reader care about his predicament, rather than simply to laugh at it. That's not to say the new cast lack definition or complexity, but it's a different kind of complexity, rather like the difference between a sitcom character and a character in a drama. And what all that means is that the worlds are too different: Lord Galton - to whom we will return at a later point - would not benefit from meeting Erasmus. No matter how good those individual worlds are, they wouldn't mesh - and only the most forgiving of fans would appreciate an attempt to make them.

So does this mean I've made a mistake? Wouldn't it make more sense to imbue the new book with a similar sensibility to Erasmus to keep alive a commercial possibility for the future?

Some authors certainly would. Some authors would feel that everything should be pegged together to create one cohesive world, either because one part of that world is already a successful brand or because they feel that stressing the relationship from the start would make each book more likely to lift the others.

But in so doing, they would be sacrificing something. For me the point is that those meta-worlds, whether realised in the design, the nature of the humour or the characterisation, are not there simply to separate one canon from another or to create separate brands. They are there to give the canons somewhere to operate, something to support them and to make them satisfying in themselves.

It's true that some worlds are flexible - Oz in particular seems equally happy as dark and sinister or light and filled with song - but others work best in their own idiom. To return to the original inspiration, the reason that Batman and Robin was seen as a failure is that it went too far from what viewers thought the 90s films were about. It's not intrinsically a bad film, but the look didn't mesh with the humour and it jarred. Likewise, the later Superman films jettisoned much of the reality and sincerity that had made the original a success. Perhaps people were harsh, expecting more of the same when that was scarcely possible, but perhaps they were simply instinctively responding to a feeling that something wasn't quite right. Perhaps suspension of disbelief is like the suspension of a bridge - it requires any number of strands to keep the thing up. Severing a few in the name of commercial considerations isn't going to please those crossing the river.

When it comes to my own work, meanwhile, I will continue to plough as many furrows as seems appropriate for the different seeds I wish to sow. When the new book - of which more anon - is published, I hope that readers will be attracted to the other books not because they want more of the same but because they want more of the same quality.

September 21, 2013

Subduction

There's a worrying trend afoot in the world of computer software. After a decade of unsuccessfully pushing the idea of thin-client computing, "the cloud" has finally made the idea fashionable enough for both Microsoft and Adobe to do what they've been wanting to do for a long time - launch subscription-model products, with Adobe in particular pushing for this to replace their conventional offerings. Touting the advantages of continual upgrades and access from anywhere through the cloud, Adobe think the public are finally ready to buy in to the new model.

Why am I worried? Ask yourself this: why are Adobe doing it? What is it about subscription software that's so attractive to a publisher?

I've been using Microsoft Office, Word in particular, for the best part of twenty years. From my first copy of Word 2, right up to my current install of Office 2007, I've handed Microsoft a fair amount of money over the years. At the same time I've gone from Windows 3.1 and MS-DOS 6.2 to Windows 7. The two are not unrelated, because the truth is there are few features in Word 2007 that I use which weren't present in Word 2. I am, after all, only writing novels.

This is an issue for software developers, because any product, as it matures, will struggle to find new features with which to encourage repeat sales. Sometimes software gets a boost in the form of new technology: CD-burning software expanded its remit by taking on first DVD and then Blu-ray, video encoders went from AVI to MPEG-4, and so on. Office software, meanwhile, has had few such technological shifts to drive upgrades - there haven't been that many leaps since the development of the printing press - and this makes it difficult to persuade users to hand over more of their hard-earned cash.

Microsoft realised this problem some years ago. Their first answer was to create Microsoft Office - a software suite. The idea was simple: charge people more than they would pay for one product, but less than they'd pay for all the products in the suite, and, as long as users see new features they want in one of the products, they'll probably upgrade the whole thing. By selling the notion of "integration" across the packages, you also discourage users from upgrading applications individually on an as-needed basis. Word 6 won't integrate well with Excel 2003 - that's just the way it is - and even if users realise this is deliberate design, there's nothing they can do about it. So Office was born, along with Visual Studio and a whole host of other suites of related software. Microsoft hit pay-dirt and many a software producer followed suit.

The trouble is that this approach only takes you so far. Sooner or later the whole suite is going to be enough for most users, or at least the perceived value of any new features will be less than the actual cost of the software. If you're only buying the suite for one or two packages, this moment will come even sooner. Some companies dealt with this by just adding more and more shovelware to their suites. Nero, for example, has expanded its offering from a simple disc burner to a veritable suite of multimedia apps, most of which you'll never use, but which have often proven enough to encourage people to upgrade.

Microsoft, of course, had another advantage: Windows. During the 1990s the update cycle for Windows became entrenched. Driven by advances in gaming technology, users demanded more and more powerful graphics with a seemingly endless array of new features. DirectX, which Microsoft introduced in Windows 95, gave developers the means to target the widest range of graphics hardware. Microsoft were canny enough to make DirectX itself free, thus ensuring its wide acceptance in game development, but by periodically insisting that users needed the latest Windows to use the latest DirectX they managed to ensure that gamers spent at least some of their money on keeping the operating system up to date. This, in turn, encouraged hardware manufacturers to support the latest Windows, which meant they were less likely to offer support to older versions, meaning that users generally had to buy a new version of Windows when they bought new hardware. It was simplicity itself to ensure that changes in Windows also undermined older versions of Office - not every time, of course, that would be too obvious, but every so often there would be some change in architecture which meant that a humble word processor had to be replaced to remain usable. Users were therefore locked into a cycle of upgrading software they were perfectly happy with just to keep using it.

In the end, even this strategy began to run out of road. PC gaming is not what it once was and advances in graphics performance have slowed, making relatively old games look less prehistoric than their forebears did at the same age. DirectX updates have become less frequent and less immediately desirable. Windows itself is no longer a licence to print money, with hostility to new versions like Vista and Windows 8 no longer confined to sections of the geek community. So where did they go from here?

The answer appeared to come from one area of software which has managed to secure a steady revenue: anti-virus software. Originally, anti-virus apps were sold as boxed products with a number of years of free updates to keep pace with new viruses. At the end of that time you bought a new release of the software with a new series of updates. The trouble was that users didn't. Some switched to competitors; others, having had few incidents in that time, decided they simply didn't need anti-virus software. There are only so many viruses that can "escape" from the labs to keep users paranoid enough to regularly dip into their wallets. So the anti-virus companies made a subtle shift. Instead of buying the software, you pay for the service. A regular payment means that not only does your virus database stay up to date, so does your software. People's natural laziness does the rest and the revenue stream is secured. Even reasonable price increases can be factored in without spooking people.

This model is, not unnaturally, attractive to companies like Microsoft and Adobe. After all, if you can't persuade people to pay for new software, why not get them to keep paying for the same software? Throw them a periodic bone and they'll be happy, surely? And the rise of subscription music services like Spotify has added weight to this argument. After all, if people are prepared to keep paying a subscription for something that never changes at all, like a song, they must be prepared to do the same for software, right?

But Spotify reveals one of the fundamental problems with subscription. The service is notorious for changing the availability of content. Whilst this is largely the fault of rights holders, that doesn't mean an office suite would be immune. How often has a release of Windows withdrawn a feature people wanted or added one they didn't? When the start button disappeared in Windows 8, you could hear the howls across the internet. Imagine if that had happened automatically, silently vanishing one night after an automatic update, rather than simply being something annoying in a new release that dissuaded you from upgrading. If you aren't part of what Microsoft perceives as the majority of users, subscription will mean constant niggles about whether your app will do tomorrow what it does today.

For me, however, the key problem with subscription is the one revealed by the motives of the software producers in introducing it. Can it really be moral to force people to pay time and time again for the same product just because you can't persuade them to buy a new one? If a double-glazing company demanded you pay a monthly rental for your windows you would quite rightly report it as a protection racket. Why should software be different? Free space on the cloud might sound attractive, but is it really worth a subscription to Adobe to get it? And if a company tries to lure you with attractive pricing, ask yourself how they plan to make money. Will the price shoot up when the customers reach a critical mass, or will they make their money another way? We've already seen free services like Facebook monetising people's content or making data about it available to marketing companies. Anyone who values their privacy or makes a living by their writing, photography, music or movies should be very wary of any easily accessible online storage.

Finally, the fatal flaw in a subscription system is the risk of impermanence. Imagine you got rid of all your music and subscribed to a system like Spotify. Now, imagine in a few years' time you had a financial crisis which forced you to cut back. Spotify, being a luxury, would have to be in the frame for disposal, but to cease payment would be to lose your entire music collection. Had you retained your CDs or digital downloads you would still have all the music you'd ever had and would simply have to restrain the expansion of your collection; if you're paying a subscription for access, you'd have nothing. The same problem could arise if Spotify themselves went bust. No competitor would feel obliged to take on their customer base and give them the same accrued benefits. After all, why should they?

For businesses, the problem is even more acute. If you're reliant on subscription software and your income contracts, it could mean choosing between vital services like rent, gas and water and the software on which your business depends. Again, if you'd owned a boxed product you would be able to continue simply by not upgrading unnecessarily. If you're hooked into a subscription, however, you lose it all - which could mean the end of your business. And if the software companies manage to get everybody onto a subscription model there's nothing stopping them simply upping the prices to the point where some people can't afford them. After all, if the majority have no choice but to keep on paying, revenue will keep going up.
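
To put rough numbers on that, here's a minimal back-of-the-envelope sketch in Python - every price, fee and upgrade interval is an assumption invented purely for illustration, not a real Microsoft or Adobe figure - comparing what a boxed licence you upgrade only occasionally costs over time against a monthly subscription with modest annual price rises.

# Back-of-the-envelope comparison: boxed licence vs subscription.
# All figures below are illustrative assumptions, not real vendor prices.

def boxed_cost(years, licence_price=120.0, upgrade_every=4):
    """Buy once up front, then pay again only every few years when you choose to upgrade."""
    upgrades = (years - 1) // upgrade_every
    return licence_price * (1 + upgrades)

def subscription_cost(years, monthly_fee=8.0, annual_rise=0.05):
    """Pay every month, allowing for a modest price rise each year."""
    total, fee = 0.0, monthly_fee
    for _ in range(years):
        total += fee * 12
        fee *= 1 + annual_rise
    return total

for years in (1, 4, 8, 12):
    print(f"{years:>2} years: boxed £{boxed_cost(years):7.2f}, "
          f"subscription £{subscription_cost(years):7.2f}")

On those made-up numbers the subscription overtakes the boxed copy within a couple of years and the gap only widens thereafter - which is precisely why the model appeals to the vendor rather than the customer.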

As a software engineer myself, I do have concerns about the future of the industry. The continual-upgrade model was never going to last forever. A shift to subscription might sound superficially as if it's good for people like me, but the truth is that if you can persuade people to pay a regular tariff for software which changes relatively little, companies will come to the view that they need fewer developers. In the same way that the publishers of those cookbooks you get in bargain bookshops have managed to continually sell the same few hundred recipes just by changing the formats of the books or the combinations within them, the software companies will simply rearrange their software periodically to look new. And marketing will persuade management that this can be done with more people like them and fewer of those long-haired scruffy types in the basement. After all, if nobody needs new features, why pay developers to write them? Subscription models are bad for developers and bad for customers. The only way to avoid a world where we all have to give Microsoft a chunk of our monthly paycheque to keep the marketing people swanking around in suits is if enough of us resist now and send a clear signal that we won't be sucked in.

September 14, 2013

Lingua Franca

There is a popular perception that the English don't like other languages. Stereotypes abound of Englishmen speaking slowly in their native tongue as an alternative to speaking foreign, criticising the locals for not learning English. Look at English literature, however, and a different picture emerges: how often do you see the use of italics marking a phrase as a borrowing, or even an unitalicised word whose spelling marks it out as less than Anglo-Saxon? From raison d'être to schadenfreude, they're everywhere you look. And it's not as if these borrowings are for concepts unavailable in English itself. Je ne sais quoi translates directly to "I don't know what", and whilst the literal translation may seem a little inelegant, a good writer could readily find myriad other ways to express the same idea. So why the apparent inconsistency? Why should a people seemingly so averse to speaking in tongues write in so many pens?

English has long been a somewhat unusual language. The rules of its grammar are so intricate and fluid that English people rarely realise they exist, and yet it is immediately apparent when a foreign visitor slips up and misspeaks. The spelling and pronunciation likewise seem fraught with elephant traps for the unwary, and woe betide anyone who tries to form a simple plural of a word like radius or ox without due care and attention. It is almost as if the language is designed to perplex the visitor, and yet it is impossible to survive in the country without learning at least a reasonable smattering. Unlike in Paris, where simply attempting the language earns you the right to mix tongues according to your abilities, in London any such attempt is likely to meet with blank incomprehension.

The reasons for this complexity are many, from the geographical separation between England's key centres of learning (the older universities being responsible for the accepted spelling) and of political influence (London being responsible for most of the accepted pronunciation) to the linguistic pot-pourri engendered by waves of invasion and later imperialist outreach. The influences on the language have been many more than you'd expect for a nation which, as Churchill observed, was not forever criss-crossed by foreign armies. But it's that last point which is counter-intuitive: why should an island nation with a proud independent streak have such a porous language, whilst continental nations like Germany, whose very existence as a nation is still relatively recent, are so rigid by contrast? English has, after all, roots in the ancestral German tongue, but now boasts a vocabulary more than ten times the size.

Somewhat counter-intuitively, I believe it is our being an island nation, safely outside the milieu of cross-European wars, which provides the answer. Tokens of identity such as language matter vastly more when that identity is frequently at risk. So Germany, which was broadly outside the Roman Empire, resisted language concepts from the evolving Romance languages as solidly as it had resisted the legions in the Teutoburg Forest. Likewise, when Rome fell, the former nations of its Empire clung to their own tongue as a defining part of their civilised heritage. England, or rather Britain, was different: although also part of Rome's dominions, the conquest here was never quite as total and a strong sense of former culture and identity remained. Not for nothing did Tacitus put the words of the first great independence speech in the mouth of Caratacus. Even after the conquest, the British aristocratic classes were not mere Roman implants (Britain being an unpopular place to go when you're used to the warm climes of the Mediterranean) but Brits who romanised, following the fashions of Rome - including the language - but retaining their links with their origins.

When the Romans withdrew and were replaced by the Saxons, the British language was more or less extinguished from England, surviving in Wales and Cornwall, but the tendency to adopt new language was already ingrained. Latin remained part of the mix, despite Anglo-Saxon becoming the dominant tongue. In fact, the use of Saxon was confined to vernacular language, with official and religious documents remaining in Latin until hundreds of years later. This distinction between what was spoken and what was written was true across Europe, but in Germany Latin would have been an import, held apart and probably only spoken by an isolated religious immigrant class, whilst in countries like France the distinction between Latin and the developing vernacular would have been less pronounced - much as was the case in Rome itself. This psychological split - a country partially romanised but now invaded by a culture without a Latin tradition - meant the upper classes of English society were probably better linguists and less purist than their continental contemporaries.

Meanwhile, the process of the evolving vernacular continued. First, half of Britain was invaded by Vikings, a conquest eventually made briefly total under King Canute. They, in turn, were followed by the Normans. In both cases, the size of the invading class was relatively small, meaning that the fledgling English language was not swept aside, but rather grew, with countless new concepts jostling for space alongside words which would have started as synonyms but drifted apart over time. Whilst the Norman court initially resisted speaking the local tongue, aspiring Englishmen would have adopted the language of the invaders whilst retaining the tongue of their inferiors. The conceptual separation between the language of court and country would have led to melding, so that by the time the Angevin empire collapsed and left the ruling classes as purely English, the vernacular tongue was much more sophisticated than it had been only two centuries before. Monarchs from King John onward began to consider themselves patriotic Englishmen, using and promoting their language as part of their identity, although it would take until the sixteenth century, when a reluctant Henry VIII authorised an English-language Bible, for this transition to become complete.

So the Britain that emerged onto the world stage at the end of the Middle Ages was already a country with a rich language and a tradition of absorbing words and concepts. Writers like Shakespeare and Milton were also expanding the tongue with new coinages, often looking to classical languages like Latin and Greek for etymological inspiration. It was entirely expected that when Britain set out on its imperial expansion it would continue, magpie-like, to take on words from across the world, from shampoo and bungalow to veldt and typhoon. At the same time the imperialists spread their tongue to their possessions. Indians who wanted to get ahead in the British Empire learned English, much as the English had learned French under the Normans. But the richness of the English tongue meant that this time there was no separate language of court and country: English grew to dominate particular classes entirely, leaving it as the greatest legacy of the empire. The different approach of nations like France and Spain to empire meant that the same process failed to occur in their spheres of influence: the Spanish, like the Saxons, wiped out much of the culture of their territories; the French, like the early Normans, remained aloof from it. While the English writers of the Enlightenment were continuing to borrow from Greek and Latin to find new words for gases and processes, the French were relying on an official academy to regulate the language and protect it from foreign pollution.

And so, in our modern times, English has become the most popular language in the world. Endlessly inventive and continually adapting, it defies all attempts to replace it as the language of commerce and popular culture. It is true that America, and Hollywood in particular, is partially responsible, but that's not the whole picture. When Hong Kong was handed back to China in 1997, it retained much of its British identity and English remained the language of banking. Indeed, English is spreading throughout China as it opens up to the world, and that has little to do with the movies they consume.

The English can guarantee going almost anywhere and being understood at least partially by some of the people, something which can't be said of other European peoples. For the English, therefore, it is hard to know which other language would be as useful as their own. Certainly no others are as expressive or as flexible, meaning even if an Englishman were fluent in another language he would be robbed of endless idioms, expressions and words that he takes for granted. For the English, therefore, there is no contradiction between borrowing from other languages and yet preferring not to speak them exclusively. It's an expression of both our cosmopolitan outlook and our island identity.

August 3, 2013

Listen to the Band: The Story of Bandwagon

In the unlikely event there is ever a legend surrounding my life as an author, Bandwagon will loom large in it. This, after all, is the book where it finally came together, where I was first able to construct a tight, continuous narrative and sustain it for a decent number of words. This year being the tenth anniversary of the book, I've decided to make the edited 2007 "Digital Remaster" freely available - hopefully people will enjoy it, and perhaps it will also help build the audience for Erasmus.

So where did Bandwagon come from? What inspired me to write what is essentially a musical comedy with science-fiction trimmings? The trigger, as is often the case with me, was a thought arising whilst watching a film, in this instance "That Thing You Do". For those unfamiliar with it, this was a pet project of Tom Hanks, following the career of a fictional Sixties one-hit-wonder band aptly called The Wonders. Plucked from musical obscurity by an ambitious impresario (played by Hanks), they make it briefly big with the film's title track.

There's a key scene in the film where the band perform their song for an important audience. The drummer, nervous and fired with adrenalin, plays the opening drum riff at an increased tempo, thus speeding the song up and inadvertently propelling it to become a hit. In so doing, however, he creates tension with the band's songwriter and ultimately triggers the break-up of the band.

Two things struck me in that moment: the first was that it was an oblique reference to the Beatles' Please Please Me, a song which started out as a slow, Roy Orbison-style number, but which the band sped up when producer George Martin rejected their first version. The other element - and this was the key - was that the drummer's speeding up was something like a robot developing a fault. In that moment, both the referential and the science-fiction nature of Bandwagon were born. And, me being me, it had to be a comedy.

Obviously, parodies of bands have been done before. Terry Pratchett brought music to Discworld in Soul Music, and Christopher Guest has created memorable musicians in both Spinal Tap and A Mighty Wind; Bandwagon needed to be different - and this required something more than funny characters and a silly story. The inspiration for that missing element was provided by a magazine article about David Crosby. The story as I remember it goes that back in the 1960s Crosby was frequently high on something more than life. In fact, he could be said to have had something of a chemical dependency. One day, short of cash and desperate for a fix, he sold his car to a friend. The car was then stolen by joyriders, who dumped it, conveniently enough, outside Crosby's house. Not one to look a gift horse in the mouth, Crosby sold the car to someone else for another fix. Shortly afterwards, by a staggering coincidence, it returned again.

I don't recall whether Crosby sold the car a third time, but what I do remember is the thought that the story was just the kind of thing you wouldn't believe if it weren't in a reputable magazine like Mojo. I think the column was actually called "Would I Lie to You?" - a reference to a Eurythmics song. Further, I realised that music history is full of these things, from the fake Fleetwood Mac which briefly operated in parallel to the real band, to Becker and Fagen's onstage antics in Jay and the Americans. Since I was steeped in the lore from years of music biographies and documentaries, it seemed it would be relatively easy for me to reference these as part of a broader musical universe for the book, changing the names and cranking the insanity up to eleven for effect. It was also an ideal opportunity for me to write Douglas Adams-style narrative gags, something I'd avoided in my previous books for fear of direct comparison.

Tone determined, I set to work, crafting a distinctive history for my band in their crazy universe. As the book began to take shape, I found myself finding other ways to layer in reference. Parodying song titles was obvious and made for some easy jokes, but the next leap came from the memory of an interview with Paul Simon, recorded by his biographer Patrick Humphries. In the interview, Simon talked about the genesis of his breakthrough "The Sound of Silence" and the way in which the lyrics have subsequently gained a meaning significantly greater than the one he intended. This immediately gave me an idea for another silly song title and a joke, which I stored up ready for the appropriate scene.

Of course, if you leave an idea floating around in the brain of a creative person, it tends to mutate. By the time I came to write the chapter in question, various of my neurons had got together and were now petitioning me to make a little more use of the notion. Presented by my subconscious with some of the lyrics and the tune, I was forced to put aside the computer, pick up a guitar and compose the whole thing. The result, "Listening to Nothing," would become Blood and Oil's signature song. More than this, however, it would open up another interesting vein in the novel, as I described the band's performance with reference to the song's actual arrangement (although their version is more keyboard-oriented than mine, because Keys was already earmarked as the writer). This new kind of descriptive narrative - I still viewed the book very much as an exercise rather than a potential product - informed later scenes where I described the band playing onstage and referenced genuine songs in their performances. Finally, I wrote a number of set-piece scenes drawn from song lyrics - an early example of which takes place in the electrical store right at the beginning of the story. The result of all these different layers was a book which was rather more densely referential than is usual, so much so that some readers spotted jokes I never even wrote, relating elements of Blood and Oil's story to tales they'd heard about other musicians.

So far, so straightforward, but although I am a comic writer by virtue of an inability to take life too seriously, I am also a dramatic writer by virtue of an obsession with making things logically consistent and (in their own terms) real. Bandwagon is the book which taught me about blending these elements. During the writing I was occasionally concerned about the way a comic narrative gag would be followed by a tense or emotional scene with the band. I called this issue "density of humour" and at the time I saw it as a serious problem. However, I happened to catch an interview with Neil Gaiman in which he talked about BBC executives' confusion at the variation in tone in his seminal series Neverwhere, and I realised maybe this wasn't as much of a problem as I'd thought. Reading back some years later, I can now see it wasn't. Indeed, it was a vital step in my development, so that by the time I reached the second Erasmus book I had also mastered black humour - adding the dark matter to the joke itself.

Bandwagon was completed in early 2003, submitted to a grand total of six publishers and then - as my confidence waned - issued through a self-publishing service called Publish and be Damned. Self-publishing was pretty slow in those days, however, so by the time the book actually emerged the more accessible Erasmus Hobart and the Golden Arrow had already been accepted by its first publisher. Bandwagon was allowed to languish, unpromoted, on Amazon's then relatively obscure site. I suspect the only extant print copies are those I personally handed out to friends and family. The book sank into obscurity faster than an English entry to the Eurovision Song Contest.

Looking back from a decade on, I wouldn't say Bandwagon was perfect. There were definitely lessons learned in the writing and editing of Erasmus which advanced my craft further, and as a result, if I were writing Bandwagon now I'd do some things differently. But Bandwagon remains a book for which I have a soft spot. It's not just its place in my history; it's more that, like an errant but charming friend, you're bound to love it despite its faults. That's why I tweaked it just a little back in 2007 for the "Digital Remaster"; it's why it continues to provide inspiration for my songwriting and the sleeve notes for the albums I send friends and family each Christmas; it's why, when I was trying to rebuild my self-confidence as an author in 2012, it was a sequel to Bandwagon I chose to write. And it's why now, ten years after the original publication, I have chosen to fix a few typos in the Digital Remaster and release it gratis for a new audience. Take it, enjoy it, and pass it on - maybe if enough people like it I might get round to editing the sequel...

Bandwagon is available for free at Smashwords.

July 4, 2013

Independently Minded

Since today is Independence Day in the US, I thought it an interesting opportunity for a little historical musing.

A few years back, BBC Radio 4 used to broadcast a programme called "What If?" - a counter-factual look at the outcomes if history had taken a different track. It was a great idea, but unfortunately it was poorly executed, in that it focused far too much on trivial detail and not enough on the bigger picture. So, for example, had it asked what would have happened if Hitler had won the Second World War, it would have been more interested in discussing the look of a Nazi victory parade than in examining the historical outcomes of such a change.

Looking at this broader aspect of counter-factual history is something of a fascination for me, because so often you find tiny events acting like pivots on which countries turn. And not just countries either: the fate of the world can be directed by events in the right country at the right time. Think of Franz Ferdinand's assassination in Sarajevo.

But World War I was probably inevitable and Franz Ferdinand was merely the trigger, which makes that particular instance somewhat less interesting than others. For me, the most interesting is the English Civil War, an ideological conflict about the limits of monarchy which changed the world and without which Americans might not be celebrating today.

Far-fetched? Let me explain.

Part of the interest of the English Civil War for me is the melting pot of ideas it produced. Writers like Hobbes and Milton thrashed out views about the world as it was and how they thought it should be. Soldiers, taken far from home and placed in unfamiliar company, discussed political theory and early forms of socialism. Even Cromwell's Protectorate was a series of experiments in government. None of it worked - probably because it was too much of a shift to go from monarchy to democracy in one step - but it profoundly altered the relationship between Crown and State and set the path toward public participation that would be followed over the next two hundred years and more.

It is arguable that it also - perhaps ironically - preserved the British monarchy. Had Charles I triumphed over Parliament, the remainder of his reign would have been as an absolutist. His sons, likewise, were clearly cut from the same cloth. This means that over the next generation there would have been further uprisings, but there's little doubt these would have been brutally put down by a regime fearful of its own security. But repression only lasts so long: eventually one of these uprisings would have succeeded, and because of the tyranny that preceded it, this time - as in the French Revolution - efforts would have been made to ensure the displaced ruling class could not return to power. This time there would have been no King over the water to restore the government after an interregnum. It's a pattern that is repeated throughout history, from Ancient Rome to the Arab Spring.

But the common factor in most of these uprisings is that they are more about factionalism than ideological dispute. The ousting of another Egyptian president this week is not based on the desire for a different system of government, merely on the desire for a ruling class which represents a different portion of the electorate. Likewise, Rome's Emperors may not have borne the name of King, but to the masses they appeared much the same. Cromwell's regime was different in that it attempted to redraw the system of government. It may have failed in its own time, but its ideas lasted, such that it was not the Protectorate that was the temporary aberration, but the Restoration. After the Glorious Revolution of 1688, the Cromwellian settlement became a fact of life.

This had two effects. Firstly, it meant that the ideas of prominent republicans were not repressed and lost. Whilst those seen as tyrannicides were harshly dealt with, only the most dangerous radical thinkers were seen as a threat to order. Even some of these managed to cross the Atlantic to the colonies; there they found an audience already open to the republican ideas the wars had propagated. Secondly, the settlement gave people the impression - if not the fact - of representative government. Nobody in 1642 was calling for "no taxation without representation"; they were simply calling for Parliament to be the arbiter of taxation policy. It was Parliament's need to establish its legitimacy as the representative of the people in war that gave the impression it had to listen to the people in peace.

Had Charles triumphed, then, what would have been the result? Some firebrands would no doubt still have fled to the colonies, but had they then operated openly as agitators, it's likely that loyalist repression would have destroyed many of them. There were, after all, many sympathisers for the old ways in America, and a paranoid state would have advanced these people to positions of power. And the idea that tax payments bought a voice in government would have rung hollow - after all, it wouldn't have been the case in England either. Had the Americans, for whatever reason, become antagonistic to tax, their protest would have taken a similar line to the original views in England - that the King was at fault and Parliament in the right - more or less the opposite of the position taken in the Eighteenth century. They would also have been in the same boat as the English, meaning the outcome of any uprising would have been less about independence and more about reform.

And it's far from certain that the dispute would even have arisen. The American issue came about through taxes on trade as well as issues of governance. England's success as a trading nation came directly from the Glorious Revolution: the alignment with the Dutch brought banking and a national debt, and the looser government allowed free enterprise to flourish. Other countries did subsequently try to follow the model, but where the role of the monarch was stronger this was invariably less successful. It was also the combination of British naval power - something that was in decline in the 1640s, but was boosted by Cromwell and then by trading companies - and free enterprise which allowed Britain to acquire the world's largest empire. None of this would have happened had Charles remained on the throne. Without it, there would have been no British tea ships in Boston; the very triggers of the dispute would have been absent.

The final consideration is our relationship with the French. 1688 put an end to any chance of England being a Catholic country. The Stuarts had been at least sympathetic to Rome, possibly even converts to the faith, and there's no doubt that had the Stuart line continued, our relationship with Catholic France would have been less aggressive. Our poorer success rate in colonising the rest of the world would also have put us less in competition with our neighbour, perhaps even resulting in an alliance against the Dutch. The result of this would be to redraw the battle-lines of the whole period, with no Battle of Blenheim, no seizure of Canada - and no French support for any uprising in the colonies. Without the French, any American uprising would have been much shorter-lived. And, significantly, without the French bankrolling a revolution in America, Louis XVI would not have gone bankrupt and the French Revolution might never have arisen. The Nineteenth and Twentieth centuries could have looked very different.

So, as you celebrate your independence today, perhaps it's an opportunity to reflect on the chains of consequence that have led to your present state. Although, of course, if this were an episode of "What If?" it would no doubt be far more interested in whether in an alternate history you'd still be letting off fireworks.

June 28, 2013

Slave to History

Recently I've been watching Downton Abbey. Somehow I'd managed not to catch any as it was broadcast, but a Blu-ray boxset received as a Christmas gift finally had me installed in front of the television to see exactly what it is that draws endless tourists to visit Highclere Castle in Hampshire in a way which television programmes rarely do.

The answer, of course, is that Downton is engaging, character-driven stuff, an Upstairs Downstairs for the 21st century. Indeed, its success prompted Auntie Beeb to attempt a rekindling of that venerable title, to markedly less effect.

Part of the formula which makes Downton a success, however, is also the element which - as a writer - I feel weakens it. That is the determination to thread the characters' stories through historical events - as many as possible. The second series, for example, starts at the outbreak of World War I and ends with the Spanish Flu outbreak five years later. Whilst this provides the drama of confining a key character temporarily to a wheelchair and the emotional hook of the woman he felt compelled to marry dying and thus leaving him free - after a suitable period of morbid self-examination - to pursue the woman of his dreams, the rapidity of the historical backdrop leads to some weaknesses in the other story threads. So, for example, one episode ends with the discovery that Bates' spiteful wife has died in mysterious circumstances; the next, set some months later, quickly glosses over this point by having a character mention it in exposition only to confirm it was suicide. By the end of the episode Bates is arrested for murder, but when it finished I half-expected the next episode to commence some months later with his either having been exonerated or being well on the road to it.

Another episode opened with a mysterious war veteran claiming to be the long-lost heir to the family, only for him to have fled the scene forty-five minutes later, presumably because the author knew there was a big time-jump in the offing. A potentially interesting development was thus dispensed with as if it were little more than a tick-box in a historical checklist. This approach robs the programme of dramatic tension, both through what the writers are forced to rush and through what they jump over. Imagine if Dallas had followed the Who Shot JR moment with the next episode set after JR was fully recovered and his would-be murderer already under arrest - it would rightly be regarded as a damp squib of the same order as when Bobby emerged from the shower and claimed the last few years had been only a dream. The third series of Downton has gone some way towards addressing this problem, with rather more continuity between episodes, but it seems to have done so by stripping out a great deal of the historical backdrop, leaving only a hint of the situation in Ireland.

So does this mean that you can only manage one or the other - historical or dramatic narrative - and that never the twain shall meet? Other shows have covered longer spans of history. Take, for example, The Tudors. To cover Henry VIII's reign from almost the coronation till his death in a little short of forty episodes clearly required a certain amount of compression, but it handled it without any sudden leaps or hanging plot threads. This is because the history itself is the heart of the story - all extraneous subplots have been subordinated to the history, meaning that the passage of time is sped up and slowed down to suit the narrative arcs chosen. But The Tudors, like The Borgias, has it easy in this regard: most of the characters and story-lines are real, meaning that narrative is served simply by choosing which to focus on at a given point in time.

Indeed, the show which best managed the marriage of fictional narrative and history is Rome. Telling the story first of the rise and fall of Julius Caesar, then of the rise of his adoptive heir Octavian (later Augustus Caesar) was always going to be rich in drama, but the threading through of the stories of Lucius Vorenus and Titus Pullo was what made the show great. Because of the show's early cancellation, Bruno Heller was compelled to accelerate the historical developments in the second season, but it is a testament to the skill of the writer that this was done without feeling disjointed. No episode opened with characters expositing about events left unfilmed since the previous one.

Rome has, of course, been criticised for not telling history exactly as it was. It wasn't like I Claudius, a dramatisation of history (or, more accurately, a dramatisation of Suetonius' sensationalised rendering of said history). But this wasn't because Heller didn't do his research properly - he made conscious decisions to tweak the history to suit his narrative. Historians may carp here and there, but the show's success spoke for itself and none of the tweaks were contentious outside of academic circles.

But maybe that's Downton's problem. Maybe, with the history being fresher, it is more publicly contentious to play with it. People would be far more likely to complain if you shortened the First World War than if you chopped a couple of months off the First Punic War - if only because they'd be more likely to realise that's what you'd done. Including a mobile phone in 1924 would invariably lead to complaints from viewers or readers, but including a fork in the thirteenth century wouldn't, despite being every bit as anachronistic. Is the secret to historical drama more about distance?

Speaking as a writer of comic historical fiction, I have to say much of this is academic. When you're writing comedy, you don't have to be entirely a slave to history, as long as you don't make any glaring errors which are clearly not deliberate. So, if I want to replace an English king with a French interloper for the sake of a laugh, I can. If I want to move the Southampton plot's conclusion from Portchester to Conisborough, people won't take it too seriously. If I want Queen Elizabeth to form a rock band on lead electric banjo... well, it's a matter of tone whether that would work (I may come to that in a future article).

The point is that the pliability of history is entirely a matter of context. You can play faster and looser with the things people are unlikely to know than with the things they will have been taught in school; you can have more fun when you're being silly and you can exercise vastly more latitude when the central characters are definitively fictional. On that basis, for now, I'll stick to my silly wanderings in ancient history and leave Julian Fellowes to the twentieth century - he seems to be doing fine with it.

May 22, 2013

Literary Supplement

A recent survey of teenage reading habits has suggested that young people are opting for easier reads, rather than "more challenging classics." Should we be concerned, or simply pleased that teenagers are reading at all?

There have probably been literary snobs for as long as there has been literature. Chaucer was no doubt denigrated for writing his crude poems in vernacular English rather than sticking to highbrow Latin; I suspect even the first scrapings in Coptic were denounced by some as dumbing down compared to the ideographic writings which went before. In more recent times, these attitudes have crystallised, upholding a selection of books to be venerated as classics, only occasionally allowing a new work to join the elite.

You might argue that books become classics because they stand the test of time, that they have some merit which keeps them relevant, but this would be fallacious. Older classics are regarded as classic simply because they were successful in their day or are all that survives of their era; more recent classics often gain their status because they were as unpopular and impenetrable when they were published as they seem now. What little value judgement is being made is generally based on pretension rather than insight.

But that's not to say classics should be disregarded, simply considered in their context. Consider two pre-twentieth century authors, Dickens and Scott. Both were hugely popular in their day, their works making their authors wealthy icons of their society, but they were successful for very different reasons. Dickens' books were, for the most part, deeply personal tales of the socially divided country he saw around him. Grinding poverty, injustice and death stalk his literary landscape like the ghost of London present. In many authors' hands these would have emerged as dank, depressing stories - like a nineteenth century Cathy Come Home. Dickens, however, was not a man given to dwell on the gloom. Instead, his books are shot through with passion and humour: larger than life Bumbles and Cheerybles, comic Fagins, charming Nancies and deeply satirical Circumlocution departments. His writing is filled not with despair but with joy. His heroes live in the hope that something will turn up. And frequently it does: villains meet with sticky ends; heroes win through - literary justice succeeds where the judiciary would fail. The optimism and the easy style made Dickens hugely popular. This, in turn, promoted the social issues which Dickens highlighted. Though many of the issues are no longer relevant to our age, Dickens' humanity ensures his books live on.

Scott, meanwhile, had an easier task. Rather than taking the grim slums of Scotland and producing a Rab C Nesbitt for the industrial age, he reached back into his nation's history, taking escapist tales of adventure and daring and packaging them for an audience who didn't want to reflect on the sorry state of the world around them. His heroes take the names and some of the actions of historical characters, but strip them of any moral ambiguity; his worlds are painted romantically rather than realistically.

It's an easy recipe for success. As with Harry Potter in our own era, escapism sells. As long as they aren't terribly written, such works succeed based on their novelty or the popularity of their subject matter. And to an industrial nation, tales of high adventure were both novel and popular. This meant that Scott didn't need to be as good a writer as Dickens, just not terrible. In comparison to Dickens' elegant and inventive prose, Scott's is functional - ironically coming across more like journalistic reportage than the work of Dickens, the one-time journalist. As a result, Scott's books have a far lower profile now than in his day, although the history he ransacked remains popular in films such as Rob Roy and Braveheart.

Why the comparison? Well, it matters because of the use of the phrase "more challenging classics." To what kind of challenge are these self-imposed guardians of literary quality referring? The challenge of extended vocabulary? Of contextual understanding? Of reading alternatives to received wisdom? Or simply of not putting down a bad book?

It is true that building vocabulary and understanding is a key point of reading for the young. It is equally true that many classics contain a broader vocabulary and intellectual foundation than, say, "Now We Are Ten." The trouble is that broader isn't necessarily better. After all, why would a classic have a more useful vocabulary than a contemporary adult book? The only way in which a classic would have more challenging vocabulary than, say, "A Brief History of Time" would be in terms of redundant terminology which has lapsed from the language. For anyone not pursuing a career in literature or history, there seems little advantage in this. And it's the same with the intellectual content: it may well be the case that reading Victor Hugo requires the reader to understand the relevance of tumbrils to Revolutionary France, but it's less clear why this acquisition of knowledge is necessarily significant to the development of a modern teenager.

When it comes to being challenged in terms of viewpoint, modern society does much to prevent a reader going against the grain. Here the guardians of literary quality will run up against the guardians of moral values, frowning on Galton's eugenics or Ayn Rand's capitalism and banning Hitler's anti-Semitism. Anything old enough to be a classic in this vein is likely to have become either received wisdom, proscribed fallacy or simple irrelevance. The idea that a modern reader would gain more by reading the arguments around the Corn Laws than by engaging with more modern issues like gun control seems naive to say the least.

Which leaves bad books and our comparison between Dickens and Scott. To give merit to a reader for having the sheer bloody-mindedness to plough through the more turgid and self-indulgent works of pre-modern fiction and stick at it seems to have little or nothing to do with literacy or the development of an enquiring mind, which is presumably the reason we want to encourage our teenagers to read.

So, does all this mean I think there's not a problem? Not at all. Clearly, if our literacy rates are falling there is some kind of issue, but without more detailed statistics we can only guess at its nature or extent.

Here's a theory: I would suggest that with the advent of the Internet, mobile phones and other modern distractions, fewer young people are inclined to read books. Our surveys are therefore likely to reflect this, so if we were to factor out the young people who read infrequently, what would that do to the literacy rate? Logically it would have to raise it, but it's far from clear whether the overall rate would then still show a decline in historic terms.

Then there's the question of how readers were surveyed. Were they asked about how often they read or simply what was the last book they read? If it was the last book a young person read, is it not possible that it demonstrated not that teenagers were reading below their mental age, but that they hadn't read for some time? If a child received a Noddy book for one birthday and an iPhone for the next, it might be that the Noddy book was the last thing they ever read.

There have, of course, always been distractions that keep some children from books. Whether it's been footballs, BMX bikes, games consoles or mobile phones, every generation has had a bugaboo that has caused despairing elders to decry the decline of civilization. What's different now is that the latest distractions are at least vaguely language-based. Whilst being out on a bike for all daylight hours hardly advances a child's reading age, reading sloppily-written Twitter feeds or the mangled English of an SMS is quite likely to retard it. Poor language and syntax become endemic, preventing children from learning the constructs of language by example. At the same time, the banal content of these electronic sound-bites constrains the ability to express more complex ideas. Schools have always taught English in a rather erratic manner, biasing toward literary appreciation or language more at the whim of the teacher than through any kind of system. This means that there is little guidance as to what constitutes good language or grammar, and those exposed to more electronic writing are bound to find themselves at some disadvantage. It's not exactly that their literary age is lower - I suspect few babies have LOL as their first "words" - but that it is built on the wrong foundations.

So what is to be done? The literary snobs would no doubt suggest that the banning of mobile phones or Twitter would provide the appropriate redress. I disagree. There's nothing wrong with children having their own "speak" as long as they realise that is exactly what it is. Teenagers of my generation managed to separate the hip-hop inspired language of the playground from the drier English of essays, so there's no reason why modern children couldn't do the same. The key is that they are exposed to enough well-written English that they can recognise it. This means that children need to be encouraged to find the time to read, which is partially about not drowning them in homework which serves no purpose other than to rob them of time, but mostly about getting them to enjoy reading. And the way you do that is to point them to books chosen not because the book police think they further some social end, but because they're engaging. The Harry Potter phenomenon proved that children don't just see books as a form of environmental vandalism committed by old people, so it's not an impossibility.

Which brings us back to our classics. If you're trying to encourage a life-long love of books in a young person, the principal selection criterion cannot be the alleged laudability of the work. Yes, Tolstoy may have much to tell us about Imperial Russia, but to a teenager struggling with his prose it seems more like time spent in Stalin's gulags. I remember a very good English teacher when I was at school: he introduced us to H G Wells and Edgar Allan Poe - works which excited teenage boys in their own day and still have that power now. The language was penetrable and relevant (apart from the odd puerile chuckle at the use of ejaculate for exclaim), the stories were exciting and as a result we were a well-read class, happy to bury ourselves in books during library lessons. To encourage children to read, books must first pass the test of being enjoyable before we assess their other merits. By all means have a discussion about the religious imagery in CS Lewis, but leave the challenging works on the shelf. If a child decides they want to read them, they'll still be there when they get older.

April 20, 2013

Look Back And Wonder

In the wake of the death of Margaret Thatcher I read an interesting piece by historian Dominic Sandbrook. In it, he proposed the theory that whether or not Margaret Thatcher had been Prime Minister, the changes attributed to her would still have happened. She was a product of her times and therefore if she hadn't become PM, some like-minded soul would have done so and followed the same path. Now, I don't want specifically to talk about recent history, but the point itself struck me as interesting. Since Sandbrook also compared the former Prime Minister's divisive capacity to that of Oliver Cromwell, I'll take that as my cue and look back to another time when one man appeared to rewrite Britain's destiny.

Historians are often divided about the impact of the seventeenth century civil wars on our history. Some see them merely as the final act in the religious struggle that followed Henry VIII's break with Rome, others as a defining point in the story of our parliamentary democracy. The Commonwealth can likewise be seen either as a brief interregnum or the birth of constitutional monarchy depending on your point of view.

Obviously, if you take the view that the civil wars were a closing act or a brief moment of madness, Cromwell's role is of necessity unimportant. If nothing changed, it really doesn't matter how much Cromwell was responsible for what happened. The Sandbrook of early modern history would end his piece here. To me, this would be a mistake - and not only if the writer was paid by the word. Because I take the view of Simon Schama, who makes the argument that the Glorious Revolution of 1688/89 was not only made possible by the earlier wars but also led directly to a political settlement based on Cromwell's constitutional experiments of the 1650's. And there's little doubt it was that settlement which made the difference between Britain's restrained and retained monarchy and France's absolutist and abolished one.

So if the events matter, the question is one of whether Cromwell also mattered or whether history had any number of Cromwells ready to take on the reforming mantle. The popular myth is that Cromwell was a prominent parliamentarian with republican views who used the army in his quest to become essentially Britain's only president. If you watch Richard Harris' turn as the wart-faced puritan, you see him playing a key role in events from the very beginning, agitating against the King, almost praying for revolution. In that light it seems almost inevitable that the wars would happen and that Cromwell would end them as ruler of England.

The truth, of course, is somewhat different. Cromwell's own writings show him as a typical, if somewhat devout, country gent who turned to politics to make things better for his local area. In the days of the Long Parliament he barely features and it is rather Pym who leads the faction calling for a more accountable monarch. Charles resists any attempt to restrain his (as he sees it) god-given authority and when he eventually declares war by raising his standard at Nottingham, Cromwell duly sides with Parliament and raises a cavalry division from his native East Anglia. Subordinated to Sir Thomas Fairfax, he makes his mark in a string of victories which demonstrate a remarkable military mind. He is also instrumental in the professionalisation of the army, creating the basis for what becomes known as the New Model. By the time the first stage of the war is reaching its close, Cromwell has become such an asset to parliament's cause that when they pass the self-denying ordinance (by which terms no Member of Parliament can hold a commission in the army) special dispensation is given to ensure Cromwell retains his command. Throughout this period it is clear that Cromwell, as with most of the country, wants a settlement of the war which leaves the King on the throne. It is only when circumstances make Charles a prisoner of the army that Cromwell comes to realise that the King is duplicitous and impossible to trust. Even then, his first instinct is to believe the role of monarchy is more important than the role of an individual monarch and that the crown should go on. And it is still Fairfax more than Cromwell who instigates Pride's Purge and paves the way for the King's trial. It is only as Parliament struggles with the issues of a standing army, reluctant to disband without back pay and riddled with radical opinion, that Cromwell finally emerges as the key player. By the time the King's death warrant is signed, he is seen as a leading Parliamentarian. When Fairfax backs away from the conflict in Scotland due to his Presbyterian sympathies, Cromwell becomes finally the head of the army.

So many things in this chain of events could have been different. A less arrogant monarch would never have brought his country to a state of civil war; a more competent monarch would have won it. A less sensible parliament might have kept the army entirely under their control, or made the self-denying ordinance absolute. A more prosperous parliament might have been able to pay the soldiers' arrears and disband the army, leaving the country open to invasion from Scotland or - had the Thirty Years' War run a little shorter - from France. The King might have died in battle or never have been captured. And Cromwell (who also had a close call on the battlefield) might not have combined the mix of military and political skill which ensured both his victories and his rise to prominence. Even assuming that monarchy would inevitably have produced a tyrant sooner or later, it doesn't follow that Parliament would have ended that tyranny. Other countries have tried to topple autocratic regimes with hugely variable results. It's this roll of history's dice which makes the period so interesting.

Of course we know how the dice landed. The King was tried and executed and after Parliament tried to run the country in the interests of its members, Cromwell purged it and spent the best part of a decade trying various systems of government, putting down uprisings and entertaining a superstitious belief in the importance of September 3rd. When he died (on September 3rd), his son was unequal to the task of following in his father's footsteps and the monarchy was restored under Charles II.

What's interesting here is the flexibility of Cromwell's politics. Even before the execution of the King, he used the Putney Debates to allow the radical elements in the army to have a voice. That he ultimately steered clear of implementing the army's proposals was because of his small-c conservatism, a fear that to move too far from the status quo was to invite anarchy. Indeed, some of the proposals that emerged in those debates would still be regarded as radical now, including as they did such things as an end to land ownership. Cromwell's resistance was not, however, about preserving his own power: after the death of Charles he was content to allow the Rump Parliament to continue. It was only when it became clear that the Rump were using their position to line their own pockets that Cromwell seized power in what can only be described as a military coup. His first action was then to hand-pick an alternative parliament - the so-called parliament of saints - and only when that also proved less than saintly did he end up ruling personally. Even then, he saw this as merely a temporary expedient - he declined the offer to take the crown for himself. Such lack of ambition is not only out of keeping with the public perception of Cromwell, it is incredibly uncommon, a rare modern comparison being the 1948 uprising in Costa Rica, where the military assumed power and then abolished itself a year later. Cromwell did not, in the end, relinquish power, much less abolish the army, but this is because unlike Costa Rica he could not look to the rest of the world for a functioning model of democracy to copy. Cromwell was a pioneer in government and it can't be held against him that he failed. It certainly can't be said that Cromwell or no Cromwell, history would have been exactly the same - compare with the self-proclaimed Emperor Napoleon a century and a half later and you see what the combination of military success and revolution can produce.

So was Cromwell entirely out of his time? I think not and you only have to look across the Irish Sea to find a place where he was as much a slave to historical forces as anyone else of his day. Cromwell's actions in Ireland have earned him a particularly dark reputation amongst Irish Catholics, but to write him off as a religious bigot is to miss the point - this is the man who allowed the Jews back into England after four hundred years in exile. To understand Cromwell and the Catholics is to understand the bitterness of the religious divide in Christendom. There had, after all, been a hundred years of religious factionalism and fighting. Successive monarchs had taken one side or another in the schism, persecuting or even killing those who took the opposing view. As recently as 1641, the Irish Catholics had risen against the planted Ulster Scots and stories of atrocities had crossed the Irish Sea to England. No doubt there was much exaggeration in the telling, but stories of baby burning and of the rape and murder of Protestant women resonated with an already fixed view of the Catholic menace. At the same time, the ongoing fighting in England had made brutes of the army: in the early stages of the conflict, the civil war was largely civil - there are stories of gentlemen's agreements, of troops being allowed to leave sieges with their heads held high and weapons in hand. The continual resurgence of the conflict, however, had bred an aggressive war-weariness, with later battles such as Preston turning into slaughters in the hope such carnage would bring the war to an end. When Cromwell came to Ireland, therefore, there was little prospect of quarter being given: the nature of the enemy and the dehumanising effects of a long war made the outcome inevitable, regardless of who was leading the army. And quarter wasn't a requirement in the articles of war: whilst the modern reader may balk at the idea of slaughtering an entire garrison, what Cromwell did was accepted practice in its day. In fact, we lionize Wellington for his war against Napoleon, despite his pursuing a very similar policy against quarter in French-occupied Spain. In Ireland, Cromwell is a convenient bogeyman for a period of oppression which actually dated back to the reign of Elizabeth I. And this is because Cromwell is easy to caricature as a villain: imagine how people would feel if it were Gloriana herself whom the Irish vilified for their troubles. In times to come, I suspect Margaret Thatcher will also retain that villain's visage, for whilst a more nuanced history would suggest conflict with the unions was an ongoing issue and that the collapse of the coal and steel industries was inevitable, she remains a convenient scapegoat for one side in a particular societal division.

In the end, however, what both Cromwell and Thatcher demonstrate is that history isn't entirely inevitable. Sometimes parts of it hinge on the presence of the right person at the right time, whether it's a military genius with a lack of personal ambition, or a stubborn grocer's daughter with a belief in individual empowerment. Had Cromwell not been there in the seventeenth century it is likely that the outcome would have been a stronger, more absolute monarchy - much as was the case in contemporary France. Would that ultimately have fallen? Possibly, but it's hard to see what would have triggered it: the French revolution was sparked by the country's financial collapse after taking sides in the American War of Independence. This, in turn, was influenced by the revolutionary thinking which drove the English Civil War. Had the English Civil War not happened, or had it been a triumph for monarchy, it's hard to say what course history might have taken. And if Cromwell had been less small-c conservative? Would a more radical Britain have produced a stable democracy without a monarch?

For 1980's Britain, too, we can see that a weaker Prime Minister would have allowed the decade to continue much in the vein of the 1970's, with governments continually toppled by unions, the weight of wage demands driving inflation to ever-greater heights and Britain's export trade continuing its collapse. Like Greece, we might have fudged our way into the Euro to paper over the cracks, but like Greece we would have been crippled when the next financial storm hit. Because, deregulation or no deregulation, there would have been some kind of financial storm - there always is. Perhaps a less combative leader might have found another way through, but the unions were determined to finish the government from the day it took office, so this seems unlikely. Her predecessor Callaghan was of the left and the unions didn't do him any favours either.

Sometimes history really does hinge on a single character. They may be a product of their times, with their prejudices forged in the events of their day, but their unique personal characteristics mean they end up making a difference. It is for history - after sufficient time for the prejudices to die down - to judge whether that was the right difference.

April 14, 2013

Robin of Where, You Say?

A recent article on the BBC news site had the county of Kent laying claim to the Robin Hood legend. As a man of Kent myself (not a Kentish man - that's the other side of the Medway), should I be pleased with this? Or should I be worried that Erasmus Hobart and the Golden Arrow is on historically shaky foundations?

Of course, there has always been some to-ing and fro-ing over the genesis of Robin Hood. For years, Nottinghamshire's claim to the hero has been disputed by neighbouring South Yorkshire, with Barnsdale Forest near Doncaster taking the role of Sherwood. Why does it matter? To be honest, most of it is just friendly rivalry, but at its heart is a potential tourist goldmine. Sherwood Forest attracts thousands of visitors every year and the new Robin Hood attraction planned for the fields outside Edwinstowe will no doubt increase those numbers. If Yorkshire or Kent could make good their claims they could, they think, have those tourist pounds for themselves.

The trouble is the idea is fundamentally flawed - and I don't just mean the idea that Robin Hood came from Kent (which I'll come back to in a moment). When it comes to tourism, history is in the eye of the viewer. It's why Doune Castle attributes ninety percent of its visitor numbers to the fact it was used as Camelot (and nearly every other castle) in Monty Python and the Holy Grail. It's why, when Robin Hood: Prince of Thieves was a hit at the box office, Old Wardour Castle - the historically anachronistic home of Robin in the film - experienced a boom in visitor numbers.

But Robin Hood is more than just a film - it's a legend. So, whilst tourist numbers will ebb and flow as silver screen interpretations bring him in and out of focus, there is always a background level of visitors. Some of these are people who are making a once-in-a-lifetime trip to the UK and want to visit the outlaw's home as part of the tour; others are more regular visitors - American Robin Hood societies and suchlike. For these people what matters is the associations of the legend, the vein which runs through every interpretation of the story, from the big screen matinee idol versions of the golden age of cinema, through to popular television interpretations of more recent years. And Hollywood has long accepted the idea of Sherwood as Robin's home. Sometimes that Sherwood will be a back-lot in America, sometimes a sparse East European woodland, but it's always Sherwood. Which means that is where the tourists go. And unlike Arthur's Camelot, which has an uncertain location, Sherwood is a fact of geography. So when the tourists decide to visit Sherwood, they know - more or less - exactly where it is. They might occasionally end up at Sherwood Pines, no doubt wondering which of Robin's men inspired the Go Ape treetop obstacle course, but they will at least reach Nottinghamshire.

Could an alternative truth take hold? Could a popular film which had Robin of Aylesford pitched against the evil Sheriff of Maidstone change perceptions enough to rewrite the legend and reroute the tourists? Let's consider the lemming. Up until relatively recently, most people had never heard of lemmings. Then, in 1958, Disney made a documentary called "White Wilderness" in which lemmings were seen to throw themselves from cliffs. It was faked - the filmmakers actually threw the lemmings off - but the discovery of this didn't change the perception of suicidal lemmings any more than it changed the idea that Disney were a family friendly company. After all, which family hasn't used a turntable to throw helpless rodents to their deaths for fun? The point is that ideas have a certain traction: once they find a place in the public consciousness they tend to hang on there, regardless of what comes after. The Robin Hood legend has evolved over the years (the connection with the crusades being a relatively new addition, for example) but anything which actually flies in the face of the kernel of the legend is seen as counter-factual. So, for example, a recent series of books which reset the stories as part of the fallout from the Norman Conquest was accepted as interesting rather than as overturning what went before. It's conceivable that if nobody told a Robin Hood story for a couple of centuries you could effectively reset the legend, but that's about as likely as two decades going by without someone doing another series of Batman films. Popular characters are good box office and it's when they remain true to the key points of their precursors that they do best.

So what of the veracity of the claims themselves? If we can agree they won't make a blind bit of difference to the public perception, could they at least be historically valid? Traditionally, the Robin Hood legend has been set during the tail end of King Richard's reign, at the close of the twelfth century. At this time, Richard's brother John was in effective charge of the country. Whilst Richard's wars in the Holy Land were draining the country's coffers, much of the blame for this fell on John, who was widely seen to be using the situation to line his own pockets, employing place-men like the Sheriff of Nottingham to enforce his tyranny. The Kent claim places him slightly later, when John had assumed Richard's throne and the country was invaded by the French Prince Louis. The theory is that Robin was a resistance fighter against Louis and would therefore have been seen as an outlaw to the invader.

This is, of course, somewhat tenuous, not least because it would place Robin on the side of King John. Louis' invasion is a matter of record - in fact some historians believe there is a credible argument that Louis was, for six months, King Louis I of England - but the Robin Hood of the original tales was fighting against an oppressive state, not against an invading army. For an invader to become an effective oppressor in six months would, for the thirteenth century, be something of a record. In fact, it would require there to be little or no resistance to the invasion amongst the ruling classes, which can only happen if - like William the Conqueror - Louis replaced the aristocracy, or if - like William of Orange - he was actually used by the aristocracy to stage a coup. Either case would have resulted in a successful invasion and a longer reign - which isn't what happened. A Robin Hood who was fighting against the local Sheriff would therefore not have seen Louis as the cause of his problems - he might even have seen an invader as a solution to them.

And there's a lot of difference between an outlaw and what in later years would be called a partisan. Outlaws weren't uncommon in Mediaeval England: noblemen traditionally passed their entire estates to their eldest sons and sent their second sons to the priesthood. This left any further offspring disinherited and landless, which led many of them to live outside the law, robbing and killing to support themselves. Some of these would have become mercenary soldiers, but this again would have been for profit rather than from some sense of patriotism. Some versions of the Robin Hood legend explicitly refer to him as the son of a nobleman, which would certainly be in keeping with the historical outlaw.

That's not to say the original legend is without its own problems, of course. Robin Hood is said to have robbed from the rich to give to the poor. This robbery almost always takes the form of currency. But prior to the Black Death, currency was very limited: serfdom meant people worked the land of their masters in return for a plot of land they could work for themselves. They weren't paid at all, which means that oppression could only take the form of having their own food confiscated (which sometimes happened in times of war) or of being expected to work longer on the master's land to the neglect of their own. Serfs weren't cash poor as we'd understand it - they had little or no money at all. They certainly weren't subject to taxation. There were rent-paying freemen, who would have been taxed on the value of their land, but these were in the minority (estimated at less than 10% of the population). A serf turning up with a handful of money to spend would, therefore, have been highly suspicious. They would probably have been arrested - automatically assumed to be the recipient of the proceeds of crime.

The Black Death changed all this. Suddenly, some feudal lords found themselves without peasants to work their land. Offering serfs from other parts of the country a promotion to freeman was a way of trying to address this shortfall. At the same time, the tax take fell, and when the administration of Richard II attempted to raise poll taxes and restrict labour rights to maintain its income, it triggered the Peasants' Revolt of 1381, in which a man of Kent, Wat Tyler, played a prominent role. This makes the fourteenth century a much more credible setting for a Robin Hood character - in Nottingham or Kent.

But that's not where the legend originates: according to Wikipedia, the first references to Robin and Marion are dated to about 1280 and are found, not in Kent, but in France. The story doesn't mention any robbing of the rich to give to the poor, but is a simple chivalric tale of a noble man rescuing his lady. Whether the story originated in France or crossed the Channel from England in the same way that Arthurian tales did we don't know - literary tradition at the time was full of people stealing stories and characters from each other - but Robin endured and the character eventually became rooted in thirteenth century Nottingham. No doubt, as the character became popular, his legend grew - minstrels taking other stories and attributing them to Robin to please their audiences. Some might also have grafted a little contemporary politics into the tales - criticising the tax policies of their own times by telling stories of a man who dared to stand up against arbitrary government in the past. By setting the stories in the reign of an earlier king already judged by history to be a tyrant, the canny minstrel would have been securing themselves against charges of treason or sedition. Shakespeare took out a similar insurance policy in writing a study of tyranny using the fictional King Lear.

As the legend grew, Robin changed. He went from being a romantic hero to the archetypal folk hero. Chauceresque satires of authority accreted round the central story, giving it a richness which made it a credible shorthand for Merrie England and an attractive target for novelists and film-makers right up to the present day.

So was there a Robin Hood? Personally, I don't know but I doubt it. There may have been someone called Robin Hood from whom the original romance arose, but it is more likely that the name was whipped up by an author when they wanted to write a thirteenth-century bodkin ripper. Certainly the character is a composite, which means that any historical original would bear no resemblance to our perception of him. For the purposes of Erasmus Hobart and the Golden Arrow, however, there had to be a Robin and he had to be where the legends placed him. A book where Erasmus went back and didn't find Robin would have been at best misleading and at worst incredibly dull. As a writer of comic fiction, therefore, I feel entirely justified in writing something which may not be historically accurate. I certainly don't feel any need to reset the story in Kent (although it's amusing to note that Erasmus' school is actually a Kentish one reset in Nottinghamshire).

As to the security of the legend itself... Well, Hollywood has already made its decision. They might not know where Nottingham is, how its locals speak or what thirteenth century architecture actually looked like, but that doesn't matter. Like horns on Viking helmets, once something becomes the accepted filmic vision, that is what tends to inform the wider world. That is why, despite the fact there hasn't been a Robin Hood with a green feathered cap since Disney's version, they still sell them to the tourists at the Sherwood Forest visitor centre. And that's why foreign tourists will continue to come to Nottinghamshire whatever the BBC claims.
