Rick Wayne's Blog, page 97

July 7, 2016

There’s No Writer Better Than You

There’s no objective ranking of writers, even though it subjectively feels as if there is. I’ve probably had three conversations about this in the last nine months or so. I have strong feelings about it, to the point that I’ve probably come across way too strong. But it’s true.


We all like what we like, of course, and don’t like what we don’t, and sometimes we even have reasons for that. I often pick on Neal Stephenson, not that he would give a shit, because I really don’t enjoy his writing and the people who do REALLY do, and I enjoy poking them. He lumbers. There’s action in his books, of course, but reading his plots is like watching a slow-motion recording of a baseball pitcher in his windup. It’s boring. Plus, it makes me think he’s the kind of guy who’s so proud of the delicious strike he’s about to deliver that he wants to make sure you see, in laborious detail, just how amazing he is. Blech.


Those readers blown away by the end result of his pitches will feel that four hundred pages of windup was totally worth it. Me, I got better things to do and plenty of other things to read, and at this point there’s pretty much nothing anyone can say that would make me want to try another of his lumbering tomes.


So what, you ask? Just because I don’t like his stories, you say, doesn’t mean I can’t appreciate his writing ability. And that’s true. I can, especially versus someone like Stephenie Meyer. But then, as popular as Stephenson is, the Twilight series has pretty much blown him away, commercially. It even spawned a whole second series that ALSO blew him away — 50 Shades started its career as Twilight fan fiction.


I know people who enjoyed both, and none of them ever said the writing was good. But the books delivered something folks enjoyed all the same. LOTS of folks. Millions and millions. So who’s the “better” writer? Any criteria we choose will inevitably be a reflection of our own tastes. People who enjoy Stephenson’s bloated tomes will say they’re clearly better than Meyer’s ramshackle ones, and vice versa.


The truth is, we don’t expect the same things from different books. What I want when I open a pulpy thriller is different than when I open a gay erotic romance, or the latest Hugo Award winner. Fiction, as a mode of expression, is as diverse as the people who make it.


I’m actually not saying everything is relative, although I’m sure it sounds like it. I’m saying that what we mean when we call someone “better” than someone else, the measure we’re ranking by, isn’t consistent. It’s a mental hodgepodge of factors. And even if we were to construct an average of everyone’s hodgepodge and use that to rank writers, it would still be practically meaningless because no one is the average case. The resulting ranking would differ from yours, certainly raising some folks you hated and lowering others you loved, and so would have no practical benefit to anyone, except as a show in itself, a drama: reality TV, clickbait, one big beauty pageant for bored suburbanites to argue about. Double blech.


But on top of its practical uselessness, such a ranking would be objectively bullshit as well because it would omit most of the fiction written. This point is a little harder to grasp because it requires a little knowledge of stats, so I’ll start with an example. Imagine a diagnostic test for a disease with an incidence in the population of 1/10,000. This test is 99% accurate, and you get a positive result. What is the likelihood you actually have the disease?


About 1%. Even though the test is 99% accurate, the low base incidence means relatively few people have the disease, and many more do NOT. A 1% error rate, then — which seems really good — still means there are, in absolute terms, vastly more false positives than true positives, and so the odds you are a false positive are still pretty darn high. (Which is why you should always go to the doctor.)
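If you want to check the arithmetic, here’s the whole calculation in a few lines of Python. One assumption made explicit: “99% accurate” is taken to mean both the true-positive rate and the true-negative rate are 99%, which the example doesn’t spell out:

```python
# Base-rate illustration: a 99%-accurate test for a 1-in-10,000 disease.
incidence = 1 / 10_000   # P(disease)
accuracy = 0.99          # P(positive | disease) and P(negative | no disease)

p_true_positive = accuracy * incidence              # sick AND flagged
p_false_positive = (1 - accuracy) * (1 - incidence) # healthy AND flagged

# Bayes' theorem: P(disease | positive result)
p_disease_given_positive = p_true_positive / (p_true_positive + p_false_positive)

print(f"{p_disease_given_positive:.3f}")  # roughly 0.01, i.e. about 1%
```

The healthy population is so enormous that even a 1% slice of it swamps the tiny sick population, which is the whole trick of base rates.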


The fiction market is not a random sample of all fiction, or all writers. It both directs and is directed by publishers, readers, and the market. Tastes fluctuate and only ever exist in individual subjective units called people. To illustrate this, I often use the example of Frank Herbert’s Dune, widely regarded by those subjective units as one of the best science fiction novels of the 20th century. Nevertheless, it was rejected 20-some times before it was published. Harry Potter was rejected a few times as well. Most people want to believe that given enough trials, the “objective quality” of a manuscript, like those, will eventually be recognized: by the agent, by the publisher, by the retailers, by book reviewers and thought-leaders in the industry, and finally by the reading public, who exist at the tail end of that chain.


But that just isn’t the case. Even if each of those actors had a very modest “error rate,” the cumulative effect would be that false negatives (good books wrongly rejected) would, in absolute number, vastly outnumber true positives (good books that survive the whole chain). In other words, awesome books have been written in the last century that you’ve simply never seen — so many, in fact, that they could fill your reading hours for the rest of your life, if we could only unearth them. The situation has improved somewhat since Amazon created a market for independent authors, but that’s simply revealed a signal-to-noise problem (which was always there but which the publishers, as gatekeepers, artificially obscured with their subjective selection process).
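To make the compounding concrete, here’s a toy version of that funnel in Python. The per-gatekeeper error rate and the list of gatekeepers are made-up assumptions for illustration, not measured figures:

```python
# Toy model of the publishing funnel: each gatekeeper independently has some
# chance of wrongly rejecting a genuinely good manuscript.
error_rate = 0.15  # hypothetical chance each stage misjudges a good book
gatekeepers = ["agent", "publisher", "retailers", "reviewers", "readers"]

# Probability a good manuscript survives every stage of the chain
p_survives = (1 - error_rate) ** len(gatekeepers)

print(f"{p_survives:.2f}")  # with these numbers, under half make it through
```

Even a modest 15% miss rate per stage leaves a good manuscript with worse than coin-flip odds of reaching readers; raise the rate or lengthen the chain and the odds collapse fast.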


The public beast is fickle, and a giant, and as it prances about, it raises one creator to the sky while trampling five more. Such is life — unfair, and uncaring in its unfairness. Objectively, there is no objective ranking of writers. None. Zero. And that means no one is “better” than you, although you will certainly (subjectively) experience that to be true, as I do. Unfortunately, it also means, objectively, you aren’t deserving of an audience — or indeed deserving of anything — no matter what you write.


And since there is no “proof” as powerful (or as meaningless) as a single salient example, I submit the case of Timothy Dexter. Born in 1748 to a poor working family in the British colony of Massachusetts, Dexter took to the fields around age 8 and so never received much of an education. By chance, he married a wealthy widow, whose friends liked to make fun of him for being a “plain-spoken man” — which is to say, a wide-mouthed asshole.


Tim had opinions on everything, usually bad, and the wealthy socialites he encountered through his marriage, being narrow-mouthed assholes, teased him with terrible business advice. They told him to ship coal to Newcastle, for example, which was a major coal producer. So he did. It arrived right about the time there was a coal miner’s strike, and he sold all his coal at a profit. They told him to ship winter gloves to the Caribbean islands. So he did, where he sold them at a profit to Asian sailors on their way to Siberia. He sold bed-warmers as molasses ladles, Bibles to missionaries, and invested foolishly in a near-worthless Continental currency — which expanded his fortune when, against everyone’s best predictions, the colonists won the Revolutionary War.


Dexter built a mansion, which later became a hotel, and filled the grounds with statues of great men of the age: George Washington, Napoleon Bonaparte, and himself. But try as he might, he could never win the respect of the upper class, who found his arrogance and vulgar opinions distasteful. At one point, Dexter faked his own death just to see who would show up at his funeral, and when he noticed his wife wasn’t crying, he revealed the hoax and took a cane to her.


We care because at the age of 50, after amassing a fortune, Timothy Dexter decided he would write a book. About himself.


A Pickle for the Knowing Ones or Plain Truth in a Homespun Dress ran to a mere 8,000-some words, a short story by modern standards, and was nought but one long gripe: about the politicians, about the clergy, and about his wife (without whose fortune he would have stayed a very poor man). But the little book’s most notable quality, that for which it is remembered, was that through all 33,864 letters, there was not a single punctuation mark. No periods. No commas. Nada. And only irregular capitalization.


At first, Dexter paid for printing and handed the book out for free, just like any indie author today. But following the same ridiculous luck he’d experienced his entire life, the damned thing took off, and Timothy Dexter’s tiny monstrosity ran through eight separate printings, despite widespread criticism from the literati that the lack of punctuation was a farce. Dexter responded in the second edition by including an extra page of nothing but punctuation — 13 lines of it, in fact — along with instructions that his readers should “peper and solt” the text as they pleased.


In all the world, my friends, there is no writer “better” than you.



June 22, 2016

From the Author of Dreadnought & Shuttle: Writer, Know Thyself

Today I am happy to host a guest piece by a friend and colleague, LJ Cohen, whose Young Adult space adventures have earned her quite a bit of attention, including a review in Publishers Weekly and more recently a membership in the Science Fiction Writers of America. We’re both wrestling with growth and change lately, but I’ll let her get to that…


—————————


A few weeks ago, Rick posted an essay talking about his own strengths and weaknesses as a writer. It’s gotten me thinking more systematically about my own. Now that I have a body of work that includes six completed novels, three partial novels, more than a thousand blog posts, hundreds of poems, and dozens of short stories, it’s something I can critically examine.


Strengths


Feedback:


There is some common belief that one’s first million words are for practice. And there is something to skill building through repetition and practice. Certainly, I am a better writer today than I was a year ago, or ten years before that. But it’s not simply practice that makes perfect. After all, it’s possible to practice one’s own errors until they become dangerously habitual.


But if one ties that practice into feedback – especially thoughtful and focused feedback – then practice can turn into skill.


I definitely think one of my strengths as a writer is my openness to feedback. Taken to an extreme, that can lead someone to attempt to ‘write by committee’, which is yet another way to fail. But learning how to hear, critically assess, and integrate feedback is an essential part of the life of the artist.


Discipline:


When I was a child, my mother would get frustrated with my apparent lack of discipline. It was true that I had very little in the way of study skills or time management until I was well into my college years and perhaps even partway into graduate school. She always wanted me to work harder. Maybe she was right, but I also know that even then, I could throw myself into a task I wanted to understand and become fully absorbed.


Now that I am a writer, I either finish what I start, or I make a conscious decision that a project isn’t worth pursuing and stop. I think both require a certain amount of discipline. Certainly as I grow in my craft, I know far sooner if a project has ‘legs’. And once I do commit to one, I work on it diligently until it’s complete. I can work from initial idea to complete first draft of a full-length novel within six months. It’s faster than some, not as fast as others, but I know what it takes to get from one end of a project to the other reliably.


Character and dialogue:


I’m listing them together here because I think they work together in my novels. Particularly in my ensemble cast novels, I have been told that each of my characters has a distinct voice, that readers don’t get confused when I switch point of view, and that they particularly enjoy how I show and expand characterization through multiple points of view.


What I have learned is that dialogue and interaction in general is critical to establishing character, creating tension and conflict, and advancing plot. These things don’t easily happen in a vacuum and reading page after page after page of exposition or internal thoughts can lead to stagnation in a story.


I love to throw characters into situations, and at one another, and see what happens.


Weaknesses


Distractibility:


I struggle with keeping my focus on one thing at a time. I know there are plenty of techniques to encourage single-tasking from meditation to locking out the internet router during writing times. Trust me, I have tried them all.


My own tendency to avoid is also related to this. If I hit a plot snag or the writing gets difficult, my natural inclination is to check my email, or spend time noodling on social media. Before I know it, I’ve been reading my Twitter stream for two hours and haven’t written a word.


It’s something that affects my life at a systems level, and I’m still working on solutions.


Extreme linearity:


I am a fairly linear writer. I start at the beginning and move from one scene to the next to the next pretty much in linear order. I’m not one of those writers who can easily jump forward and back through a manuscript to write scenes out of order. I’ve tried, but nothing I write this way actually makes sense by the time I get to that place in the novel. Once I have written a draft, I can move scenes and storylines if they need to be moved, but before it’s written, it has to be in chronological order.


What this can mean is weeks of staring at a blinking cursor because a current scene or storyline isn’t working and I don’t yet know why. But I can’t move past it until I do know. It’s not writer’s block – because that’s not a construct that has any utility for me – but story block.


It can also lead to a relative predictability of plot if I’m not careful. I’ve worked hard to make sure there are enough ‘breadcrumbs’ in the early part of the narrative to support whatever comes later. I can definitely add that layer of foreshadowing AFTER I’ve completed the linear narrative.


Floating Heads in Black Boxes:


Visual description was something I didn’t even know I struggled with until early readers pointed it out. I would get comments that I wasn’t ‘grounding’ readers in the narrative, that my dialogue read as if it were spoken by floating heads in black boxes. Over the years, I learned to add visual details to orient the reader.


It wasn’t until this year that I understood I have aphantasia. That is, I lack a functioning ‘mind’s eye.’ I actually don’t visualize at all. I never even realized that seeing pictures in one’s mind was something people did. I seriously thought the mind’s eye was metaphorical.


When I read books, I find myself skimming over all the deeply descriptive parts (sorry writers!) unless the description is integral to something else – character, action, emotion, etc. But once we’ve established where in a forest? I get it. Trees. Great. Move on.


So it’s no wonder that I didn’t add many visual cues in my work. Nor did my characters notice their visual environments very thoroughly.


Now that I know most folks DO like to know what’s happening visually, I can layer it in, but that kind of description will never come easily or naturally to me.


I’m sure there are other areas of relative strength and relative weakness in my writing, but these are the ones that feel most relevant right now. I’ll continue to stay aware of them as I move forward in my work. If you’ve read my work and have anything to add on either side of the equation, I’d love to hear it.



COMMENT TO WIN A FREE BOOK


Share your thoughts below and you will automatically be entered in a drawing to win your choice of any of LJ’s books.


Dreadnought And Shuttle is the latest in the Halcyone Space series:


When a materials science student gets kidnapped, she’s drawn into a conflict between the young crew of a sentient spaceship, a weapons smuggling ring, and a Commonwealth-wide conspiracy and must escape before her usefulness as a hostage expires.




About the Author:


LJ Cohen is a novelist, poet, blogger, ceramics artist, and relentless optimist. After almost twenty-five years as a physical therapist, LJ now uses her anatomical knowledge and myriad clinical skills to injure characters in her science fiction and fantasy novels. She lives in the Boston area with her family, two dogs, and the occasional international student. DREADNOUGHT AND SHUTTLE (book 3 of the SF/Space Opera series Halcyone Space), is her sixth novel. LJ is a member of SFWA, Broad Universe, and the Independent Publishers of New England.


LJ muses on life and writing on her blog, and can be found on Goodreads and at ljcohen.net.



June 14, 2016

Kick Ass Moms

There’s a scene in just about every old sheriff-vs-outlaw Western where the hero has to leave his wife and child — usually the town schoolmarm, respectable and nurturing, and a son, the next generation hero who must learn a valuable lesson from his father’s sacrifice — to go face certain death in the performance of his duty, the pacification of the American West as an allegory of 20th century expansionism.


Similarly, there’s a scene in just about every gritty cop movie where the noble loose cannon — you know, the “straight talker” who just can’t understand the difference between rules and red tape — must call his (now ex-) wife and ask her to tell his daughter how much he loves her, and that he’s going to face certain death finally doing the right thing, taking down the criminal conspiracy, or thwarting the plot to kill the President, so that she can live in a better world, and maybe learn to respect him again.


We know these characters. They fill books and movies the world over.


Now I want to try a thought experiment. Take those exact same roles, and all the others like them, only make it a woman. Keep everything else the same, including the children. In other words, make her a mother.


We’re seeing a wonderful surge of butt-kicking fictional ladies these days. Secret agents. Superheroes. Starship captains. But ask yourself, how many of them are also mothers? Not in a future timeline. Not in the past as part of their dramatic origin story, where perhaps they lost a child. But right now — kids at home while they’re out nearly dying in space battles with the alien invaders. Our reaction isn’t the same. It’s a bigger deal somehow, probably because women are still seen as primary (even necessary) caretakers, so their duties as parents take primacy.


Which makes fathers more or less expendable, the president pro tem of the family. We can easily root for a “rogue cop” with a drinking problem and a penchant for violence because at least someone responsible is looking after his kids. But a woman whose ex-husband took custody of their children because she’s too busy car-chasing drug dealers to remember to pick them up from school, because she curses and drinks and is excessively violent, because she dropped a suspect down an open elevator shaft, that elicits a different reaction. Once a woman reaches motherhood, our expectations change.


That’s not to say there aren’t plenty of mothers in fiction, or that they don’t ever find themselves in mortal danger. They do, it’s just unintentional — the plucky ER doctor/single mom gets pulled into a web of international intrigue after saving a mysterious (and handsome) secret agent. Unexpected danger isn’t bad. It’s good suspense! But once it’s over, she needs to go right back to those kids (preferably with the secret agent in tow).


Similarly, mom can be an interstellar smuggler, always on the run from gangsters and the Imperial police, as long as life is tough in that fictional universe and that’s the only real option she has of fulfilling her primary duty: providing for her children. That’s an extension of the threat-to-family trope, a danger by proxy.


Fictional fathers experience this as well, of course, as we saw in Marvel’s Ant-Man. It’s not that parenthood shouldn’t change anything. Of course it should! Children are a fantastic, terrible, glorious responsibility, fictional or not. Rather, it’s that men don’t seem to share the same limitations. If Rey from Star Wars: The Force Awakens left a child back on Jakku, we would feel differently about her running off to become a Jedi, but as soon as Scott Lang’s daughter is safe, he is free to join Captain America and (at least theoretically) risk his life being a hero in Civil War. Same for Hawkeye, who acknowledges his role as father by retiring (after almost dying), but who we expect to come out of retirement as soon as the action demands it. And while every other major character in the MCU, male and female alike, is portrayed as childless, Tony, Thor, and Steve are all free to change that at any time. We learned in Age of Ultron, however, that Black Widow — for the longest time, the lone woman — was forcibly sterilized precisely because that’s the only way we’ll accept a female character kicking butt forever.


This is why sequels, when they happen, almost always involve a revenge kidnapping of the kids or the husband or something. There has to be a legitimate reason for the main character, as a mother, to go risk her life a second time. In the case of butt-kicking Sarah Connor from the Terminator franchise, it’s because SkyNet has targeted her child, which was the whole reason Arnold was trying to kill her in the first place: because she was the mother of the real hero. (She even delivers a tirade on the mystical power of motherhood in Judgment Day.) Putting the family in peril in the sequel also serves the secondary purpose of demonstrating, both to the character and therefore to the audience, that being a full-time international super-spy, or pirate captain, or whatever, is just not a good job for a mother. So at the end of the story we’ll see her hug her kids and promise never to leave again.


I like to switch things up in my books, especially with gender. This doesn’t always work in my favor. My first novel, for example, played with the gender of the POV characters in deliberately provocative ways and earned me a few reprobations. But I believe art should challenge, at least a little, so I don’t plan on stopping. (But I learned you can’t get so far ahead of your audience that they can no longer even see you on the horizon.)


The character Xana from my superheroey sci-fi thriller THE MINUS FACTION is the mother of a small boy, and at the beginning of the story, that role is both central to her self-image and her primary motivation. But as I started writing the later episodes, I became increasingly uncomfortable leaving it there. Even if I could demonstrate emotional development in other ways, I didn’t want her to simply pull a Connor, or a Ripley: to kick ass — rather than retreat to safety, as any sensible person would as soon as circumstances allowed — because her child was being threatened. (In the case of Newt from Aliens, it’s symbolic. Ripley, hugging the child close in that iconic scene with the flame thrower, is clearly supposed to function as surrogate mother. Thus, to free her up to kick butt in Alien 3, the little girl has to die.) That’s easy. There’s no challenge there, either to me or the reader. After all, we expect — even demand — mothers do exactly that, a fact I acknowledge several times in the series when I describe Xana as going into “mother bear mode.”


I wanted the challenge case. I wanted to show a woman, a mother, legitimately make the choice the classic hero makes — the lone sheriff in the Western, the wounded officer in the war drama, the broken cop in the crime thriller, the space pirate in the sci-fi adventure, the superhero in the big name franchise — to go and fight, probably to die, because there was more at stake than just her family, because she felt it was the right thing to do.


The difficulty was how to pull this off in a way that didn’t make her seem either too callous or too weak. Stories can lead their audience, but they can’t get too far ahead! People still have different expectations of men and women, mothers and fathers, and I have to acknowledge that. If Xana is flippant with her decision (as a man might be), she’ll seem cold-hearted and readers will have difficulty connecting: a betrayal of her role as mother. But, if she wrings her hands too much, if she frets (or pouts), she’ll seem indecisive, passive, and weak: a betrayal of her role as hero. I have to stay between those two rails while at the same time answering for the reader the passive question that would otherwise arise: What happens to the kid if she dies? And has she given adequate thought to that?


My solution, inelegant as it is, will appear in Episode Six of THE MINUS FACTION, the conclusion of the series, out later this summer.





June 10, 2016

Author Strengths & Weaknesses Survey Results

[If you want to skip the discussion of survey mechanics and go right to the results, scroll down to the section with that label.]


In my corporate career, I was a survey professional designing auxiliary sample frames, and subsequently hybrid selection methods, to help compensate for differential non-response. I realize there’s a lot of jargon in that sentence, but that’s the fastest way to say it, and anyway it’s not the point. The point is, these results need major caveats. And I’m going to throw in a nice, juicy rant just for good measure.


First of all, it’s important to note that these numbers don’t MEAN anything in some absolute sense. Nothing. Zip. Zilch. This is what’s called an opt-in survey, which means the only people who filled it out are those who were interested in it — sort of like an ESPN poll or a Cosmo online question.


To understand why those are bunk, you have to understand a little about survey non-response. If you’re taking a survey of cancer patients — for example, to determine the efficacy of various treatments — and you send out a survey, the people who respond will disproportionately be those in better health. (If you’re in the hospital dying of cancer, you tend not to want to fill out mail surveys from strangers.) The results of your survey, then, will be biased. If you don’t compensate for that in your methodology — by weighting the responses, by oversampling the less responsive group, by using differential incentives, and so on — you’re going to draw erroneous conclusions about cancer patients with potentially disastrous results, especially if your client is a policymaker.
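For the curious, the simplest version of that weighting compensation, post-stratification, can be sketched in a few lines. All the shares below are invented for illustration:

```python
# Toy post-stratification weighting: suppose "healthier" patients are 50% of
# the population but 80% of your respondents. Weight each group by
# (population share / sample share) so the weighted sample matches the
# population. The shares here are made up for illustration.

population_share = {"healthier": 0.50, "sicker": 0.50}
sample_share = {"healthier": 0.80, "sicker": 0.20}

weights = {g: population_share[g] / sample_share[g] for g in population_share}

print(weights)  # healthier respondents count less, sicker ones count more
```

Each sicker respondent ends up standing in for several people like them who never answered, which is exactly the compensation an opt-in survey can’t do, because you don’t know who never answered.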


Non-response bias isn’t the only challenge facing survey research professionals — if your sampling frame (basically, the big list of names or addresses or whatever that you draw from) is missing a chunk of your target population, or contains numerous phantom duplicates (such as individuals with multiple personal phone numbers), you’ll get biased results from differential probability of selection — but non-response is the big one.


The problem in my case is that my target population is opaque and I couldn’t design a sample frame with good coverage even if it were clear. This is because my target population is something ephemeral like “All English-speaking genre-fiction readers aged 18+ who have a reasonable probability of enjoying books similar to mine.” (I’m not writing for everyone. No one is. That’s stupid.)


This is not a unique challenge of course. In fact, most marketers have to resort to various proxies — some combination of known characteristics: age, gender, location, education, income, race/ethnicity, language spoken in the home, number of televisions, presence of pets, etc. — to get at the bulk of people likely to buy their product. But the real world is messy, and sometimes there isn’t great overlap between the whole of your customer base and the proxies available to you.


This occasionally gets marketers into hot water when, for example, they market their product to one gender at the expense of the other and the torch-and-pitchfork crowd objects. People, especially those on the internet, like to think marketers target specific demographics because marketers are evil and sexist. Thing is, I’m sure some are, but usually the reason is because they know something you don’t: the sales figures. If 85% of your sales come from white college-educated males aged 18-54, and you have a limited budget, then regardless of how you personally would like the world to be, who are you realistically gonna target with those limited dollars?


It’s like asking a fisherman to spend his day covering the entire lake equally, to be fair, even though he knows for a fact that most of the fish are in one nook on the north side, and even then only in the mornings, and oh by the way, there are almost none in the middle, and so covering the whole lake, or even any part but the north nook in the mornings, is a giant waste of time and money. Reasonable people fish when and where the fish are. That doesn’t necessarily make them evil — although, again, some of them might be (for other reasons).


Recall in my example I said the marketers know that 85% of their customer base shares certain characteristics, and that the real world is messy. That last 15% — obviously these are just hypothetical numbers — will contain a hodgepodge of different groups, none of whom were specifically marketed to. If you are in one of those groups, you might assume there are a whole lot of people out there like you and that therefore those marketers are just being silly, or willfully ignorant, or sexist, or whatever. And again, they might be. Such things happen. My only point is that it’s a logical fallacy to extrapolate from your experience — a sample size of one — to the world at large, and that marketers are generally greedy people with lots of data, and if the data suggested there was a large untapped market — i.e., money to be made — they would usually go after it.


But then, it’s usually not clear what people will buy, and so sometimes they play it unnecessarily safe. After all, if there’s one thing you learn in the survey business, and especially in the smaller industry of public opinion polling, it’s that there’s a BIG difference between what people say they want and what they actually DO. People like there to be fair and equal consumer product options for all races and genders because that’s nice and it makes the world seem fair and such things are free to them. (They’re someone else’s problem to create and fund.) But consumers only see what is or isn’t on the shelf whereas marketers know what we actually and regularly support with dollars.


You see this behavior everywhere. In politics, for example, citizens report time and time again — going all the way back to the advent of scientific polling in the 1930s — that they want to “throw the bastards out.” And yet, in practice they overwhelmingly vote for the local incumbent (or they don’t vote). Every. Damn. Election. They like their elected official just fine. It’s all those other voters who are the problem!


This is not just me bitching by the way. This is an actual, measurable fact of the world. AND it’s me bitching.


Side Note: What people ESPECIALLY don’t like, more even than the status quo, is when you point out their occult hypocrisies — such as how, at any point that actually matters, they tend to actively support the status quo. That knocks fact and self-image out of alignment, which in turn creates cognitive dissonance, or at least it would if they didn’t compensate by attacking the messenger (they just don’t understand how the world works, you see) rather than addressing the discrepancy. But that’s another story.


To date, I haven’t been able to identify which granular proxies best fit my target audience — What corollary media do they consume, such as what types of music? What websites do they frequent? Are they gamblers? What products do they buy in addition to my books, such as lube and adult diapers? — but I know the global ones. They’re English-speakers, of course, with slightly more women than men, I’d guess, since women make up something like 65-70% of the fiction market. I don’t write Young Adult, so they’re mostly going to be persons 18+ — not that some teenagers wouldn’t like my books, but again, given limited resources, you gotta fish where you reasonably expect the bulk of the fish to be. Most of them probably own some kind of e-reading device since those folks tend to be heavier readers. Those folks also tend to care less whether a book is traditionally or indie-published. And so on and so forth.


But even after all that, I’m still left with a big — and therefore expensive — market to target, and so for this study I kept to my little pond and ignored the big lake, let alone the giant ocean that is the paid survey market.


Discussion of Results

Survey Results 2016.06


An N of 33 is about the bare minimum for drawing any conclusions, no matter how slight. Samples of 30-35 are about where your margin of error slips into the acceptable range, assuming “average” variance. (A highly variable study population requires a larger sample to achieve the same level of confidence.) In short, there were enough responses here to validate a top-line analysis, but ONLY a top-line analysis.
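For the curious, the arithmetic behind that rule of thumb can be sketched in a few lines. The standard deviation of 1.0 on a 1-5 rating scale is a hypothetical stand-in for “average” variance, not a number from this survey:

```python
import math

def margin_of_error(n, sd, z=1.96):
    """95% margin of error for a sample mean (normal approximation)."""
    return z * sd / math.sqrt(n)

# Hypothetical SD of 1.0 on a 1-5 rating scale ("average" variance).
for n in (10, 33, 100):
    print(n, round(margin_of_error(n, 1.0), 2))
```

At N = 33 the margin is roughly a third of a rating point, which is tolerable for a top-line read; it shrinks only with the square root of the sample, which is why small gains in precision get expensive fast.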


The actual ratings mean nothing because, following my example above, I’m missing 100% of the “less healthy cancer patients” — those who don’t know or are on the fence about my books. The people who responded to this are those who already like and engage with my work. Thus, that the results are generally positive is both predictable and meaningless. It’s like saying “the people who like my books like my books.” Well, duh.


As I mentioned, the value of this survey is not the absolute numbers but their comparison, first between the categories measured and second over time. I can, for example, repeat this exact survey at a later date and see if there is any significant change as my audience (hopefully) grows — what we call a longitudinal measure. I could (and probably will) add demographic markers in the future so that I can track responses by gender or age or average length of dildos purchased. But for now, I’m limited to top-line comparisons.
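As a minimal sketch of what that longitudinal comparison could look like, here is one common approach, Welch’s t-statistic on two survey waves. The two waves of ratings below are invented for illustration; a real repeat survey would supply them:

```python
import math
import statistics as stats

def welch_t(wave_a, wave_b):
    """Welch's t-statistic comparing mean ratings across two survey waves."""
    var_a, var_b = stats.variance(wave_a), stats.variance(wave_b)
    se = math.sqrt(var_a / len(wave_a) + var_b / len(wave_b))
    return (stats.mean(wave_b) - stats.mean(wave_a)) / se

# Invented 1-5 ratings for one category, measured in two waves.
wave_1 = [4, 4, 5, 3, 4, 5, 4, 3, 4, 5]
wave_2 = [5, 4, 5, 4, 5, 5, 4, 4, 5, 5]
t = welch_t(wave_1, wave_2)  # positive means ratings moved up
```

A t value well above ~2 would suggest a real shift rather than noise, though with samples this small the comparison stays top-line at best.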


Respondents rated me highest on Originality followed by Action and then Plot. Premise, Characterization, and Pace were all basically tied for fourth place. Several of those are related, which shouldn’t be surprising (that’s collinearity). All six of those together pretty much cover everything I emphasize in my writing since they are also what I enjoy reading. So I appear to be hitting what I’m aiming at.


At the other end, respondents rated me lowest in Emotional Development (my ex would agree!) followed by Descriptions and Setting. The latter was the subject of a blog post earlier this year after I had a small epiphany about setting, so all other things being equal, I would expect that to improve over time based on some recent changes I introduced.


As for the other two, my general emphasis on action, plot, and faster pace limits how much time I can realistically devote to expanded descriptions and characters’ deep emotional development, a point noted by several respondents in the open-ended comments section. I am not writing literary fiction, nor do I want to. However, that doesn’t mean those aren’t growth areas for me. Of course they are.


What I found most interesting, though, was not that those two categories were lowest — I expected that — but that the relative difference between them and my strengths was comparatively slight, and less than I expected: a low-to-high range of 3.82-4.48.


Recall, because of the nature of this survey, those numbers don’t mean anything in the absolute (even though I framed the question as if they did). Also, since all the respondents are already engaged with my work, the high-to-low range is artificially compressed. But still, what this says to me is that “the people who presently like my books don’t see a huge gap between my strengths and my weaknesses.”


It very well could be that, if more of the “less healthy cancer patients” responded, that gap would grow. That seems likely. (However, it’s also possible the gap would remain narrow but the whole range would slide down.) But if we assume a basic honesty among the respondents, then at the very least I probably don’t have a single catastrophic problem, which is significant — because it was always possible I could have.


That conclusion is not proved, given the limits of this survey, but it seems likely, especially since it’s completely corroborated by the Overall measure, where every respondent, despite whatever else they flagged as needing improvement, still put their total reader experience exclusively in the top two categories, suggesting none of my weaknesses are so bad as to inhibit their enjoyment of the books. Great!


A quick note about Grammar/Spelling/Mechanics: That’s less a measure of me than of Karen, my editor. I included it because it’s part of the reader experience — a big part — and because I can control the outcome — by getting a new editor, for example. Not surprisingly, she earns high marks, so no action needed. (I would have been shocked if the results were different.)


Final Summary: Nothing proved — the response pattern and sample size limit all but the highest-level suppositions. However, there were no surprises. I have room to grow in those areas outside my traditional emphasis, one of which was already being addressed, but there is probably no catastrophic deficit. Both time and further analysis will tell. Overall, a passable first measure.


That’s it! And by the way, if anyone needs help designing their survey, I’m happy to answer questions.


 •  0 comments  •  flag
Share on Twitter
Published on June 10, 2016 08:00

May 30, 2016

Captain America & the Privilege of Compromise

The movieplex near my house employs a “voice of the theater,” who gives a series of announcements (in a pleasing tenor) before each feature. It being a holiday weekend, the Voice played a segment of the Gettysburg Address as I sat with my family waiting for Captain America: Civil War to start. I’ll admit, I felt a little stirring in the breast. And why shouldn’t I? President Lincoln’s words were both moving and timeless.


This speech in the picture, given by Steve to Spider-man in the comic but placed in Agent Carter’s mouth (by proxy) in the film, falls far short of that memorable eulogy, in my opinion. It does, however, have quite the following in some comic circles. Lately, I’ve been seeing a pretty strong reaction against it, as happens anytime anything becomes popular. It seems we hate nothing so much as what others love.


Cap’s stance is, we’re told, dogmatically intransigent. Unrealistic. Self-righteous. Even draconian. The thing is, it certainly can be.


But tell that to Rosa Parks.


Tell that to the unnamed Chinese martyr who stopped a column of tanks in Tiananmen Square by — yes — simply refusing to move.


Cap’s speech IS intransigent. Decidedly so. And most of the time, most of us should heed no such advice. Compromise is often necessary, particularly in a multicultural society. It is not a dirty word.


But compromise is also not a virtue. It’s a privilege. Or rather it’s born of privilege. The powerless cannot compromise; they have no stake with which to barter. Princes compromise. Burghers compromise. If you’re an educated, well-employed white person living in the democratic West, you’re probably so steeped in “compromise” that your skin wilts like leaves left in the teapot overnight! It’s what my grandfather called eating crow, and sometimes there’s so much of it that it’s easy to think the world is after YOU.


I hope it continues for you just so. I hope you never have to face a situation, as we did in the war, where there was no compromise. For that is when your privilege has vanished and your real virtues are tested.


Published on May 30, 2016 07:00

May 24, 2016

Your Brain is Not a Computer

This is a wonderful, lucid, and short essay on the fundamental flaw of contemporary cognitive science written by a preeminent psychologist.


The empty brain: Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer

Philosophy of mind, particularly human judgment and decision-making, is a big interest of mine. I’ve said repeatedly, as the author does, that there’s a reason 60 years of computational theory has produced no significant advances. It’s bunk, a textbook case of the trap of paradigmatic thinking.


I’d just as soon you go read the article, but if you’re short on time, here are a couple excerpts specifically relevant to my comments:


“By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.


Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer…


…Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the Information Processing metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.”


Before Newton, otherwise very intelligent people literally couldn’t imagine planetary motion without physical spheres. That was the only example they had. And it allowed them to do things like build functional models, orreries, that gave them the subjective sense they were on the right track, that they were making progress accounting for planetary motion — by adding spheres onto spheres, for example. In such an environment, it was easy to think they were only a few small revelations away from accounting for everything, when in fact they were almost completely wrong and the real explanation was much simpler.


The computational theory of mind is similarly pervasive, so much so that, just as early astronomers were mired in the Ptolemaic system, so too it is often difficult to have intelligent conversations with otherwise intelligent computer scientists about consciousness and cognition. It’s often said, for example, that we’re on the verge of artificial intelligence and that soon “computers will be as powerful as the human mind.”


But as quantum physicist David Deutsch (and many others) recently argued, consciousness is not a matter of computational power. At all. If we had had a functioning theory of mind, we could have created artificial intelligence in 1960, when the most advanced, room-sized machines had less computing power than what you carry in your pocket. It just would have taken a long time to produce a response.


But it’s not like we wouldn’t have waited. Happily.


The problem is not lack of computing power but rather that we don’t have a working theory. The so-called Turing “test” only illustrates this. It punts on the issue completely. It treats the mind as an impenetrable black box — just as creationists treat the evolution of species — which is the antithesis of science, where we seek explanations for things: good, falsifiable explanations. Turing, certainly a genius, couldn’t describe consciousness with the computational theory, despite personally recapitulating the entire history of logic, because the brain is simply not an information processor.


What we are on the verge of now — if anything — is a brute-force mimicking of intelligence analogous to the anthropomorphic, piano-playing automata that were popular with the aristocrats in the late 1700s. Unlike true consciousness, brute force DOES require power. But what will be produced will not be conscious, just as Google’s AlphaGo machine, which recently shellacked the world Go champion, was not conscious, was not being creative. The humans who made it were conscious (and creative). They encoded their creativity in the program and coupled it with a non-conscious algorithm that simply sorted through millions of moves to find the optimal one.


That is not consciousness — or intelligence or self-awareness or whatever you want to call it. And that is not at all what your brain does. (If you want to have a better sense of why that is, read the article.)


Part of the problem is that cognitive science often jumbles two distinct aims. The first is to understand the general phenomenon of consciousness. Towards that end, I suspect computers will ultimately be very useful — as they are now with all kinds of study. What’s more, machines may certainly be self-aware one day. Corvids (ravens) and cephalopods (octopuses) seem to have evolved proto-intelligence completely separately from mammals, which suggests there’s any number of ways to get there.


In other words, I’m not saying we won’t make AI. Rather, I’m saying that, given the difficulty of the problem, it seems silly to try to develop a theory of consciousness from scratch when we have a valid, verifiable, real-world case to study — our brains.


And that brings us to the second aim — so often lost in the first but which I believe is ultimately more rewarding — which is to understand our particular kind of intelligence, the specifically human phenomenon mediated by our brains; to build a deeper understanding of ourselves; and to discover paths to increased self-awareness, fulfillment, and happiness, both individually and for the species as a whole.


I’m skeptical we’ll discover that first in a machine.


 


Published on May 24, 2016 11:25

May 9, 2016

Can We Escape Genre?

Like branding, genre is an extended phenotype of our genetics. Humans need to be able to make sense of the world, so the brain developed a small armamentarium of “noise-reduction” shortcuts, almost none of which are aiming at what is really out there since “what is really out there” includes the rigidly uncertain and indeterminably cross-categorizable.


At some point in our evolution, the brain hit the law of diminishing returns and said “while a mere two units of effort can make sense of 50% of what I’m likely to encounter in the world, and seven units of effort can make sense of 80%, it takes a whopping twenty units to hit 90% and 95% is basically impossible. Therefore the most economical solution is to be satisfied with an incomplete setup, to stop at seven units of effort and so to fit 100% of the messy world into a scheme that actually only makes sense of 80% of it and call it a day.”


Genre is a stable system. Certainly it persists, which means a whole lot of people must feel served by it. And that’s okay. There’s nothing wrong with genre. Who doesn’t like a good detective story? The problem is those who turn back at the border. (Hint: I don’t like them.)


After all, stable can also mean rigid. Indeed, despite that genre systems are fluid and do change over time, at any given moment they are also highly conservative if only because it’s almost impossible to mix “framing cues” — the subtle textual signals that instruct you how to read a work — across divergent genres. These are the clues that tell the reader that, for example, when a character gets hit in the head with a blunt object and falls into a river, it’s meant to be funny (“This is a comedy, so you can laugh”) rather than tragic or suspenseful (“This is a crime story, so that is shocking”).


Texts that intentionally mix these cues are either tongue-in-cheek, like Abraham Lincoln: Vampire Hunter, or inescapably surreal, like the recent Adult Swim short film Too Many Cooks, which necessarily hovers in a disembodied space because, like a ghost, it participates incompletely in multiple genre “worlds” simultaneously. Texts that want to mix genre and still tell an engaging story — one an audience can get lost in rather than experience as a spectacle from the other side of the glass — have to assume a primary mode. They can interject. The inclusion of humor at key points can actually heighten the fright of a horror story, for example. But to be experienced as horror, and not farce, the text has to assume that genre’s cues as its dominant mode. It has to set itself up as a horror, and to do that it must use signs horror readers already recognize and understand. (Enter the trope.)


I know for a fact that most people who read my stuff do not contemplate much of it as particularly meaningful. I suspect that’s because I don’t adopt a mode that cues the reader to ruminate, to dissect symbols, or indeed even to much notice that they’re there! This is not a bad thing. I want my peeps to be entertained — so much so that they want to come back again — and that means writing in a mode that allows first for the spontaneous experience of fun. Full stop.


But my shit does have some depth to it — not like you’d find in the literature section, of course, but more than you might expect given the near-absence of textural cues signifying “This is symbolic” or “Read this as philosophy.” There’s a reason Xana has her heart removed, though, and why John is physically burned. There’s a reason each course of THE HERETIC ARCANUM is labeled with the color it is. Etude’s prophecy at the end of A Symphony in Green, written last year, basically predicts the Trump phenomenon. Malcolm McDoom’s short speech in Episode Two is a very reasoned, literary indictment of superhero fiction. The whole of THE MINUS FACTION, in fact, is a rumination on “suprapower,” and the coup that takes place at the start of the final episode isn’t (just) an entertaining turn. It’s a statement about ideology and the fate of top-down revolutions.


It’s not that I want people to interrupt the narrative to engage with those things. You can enjoy a glass of wine without knowing a damned thing about tannins or terroir. You don’t need to know the recipe to appreciate a delicious meal. I’m not trying to revolutionize molecular gastronomy (Haruki Murakami) or produce a rare vintage (Anthony Doerr). But then, I’m not trying to be the reliable chain (Lee Child) or trendy midtown bistro (G.R.R. Martin) either. I’m offering a diverse, satisfying meal, which includes both the familiar and the unexpected, the old stalwart and the remix, some art, some science, and a whole lot of flavor, both subtle and profound. I’m offering “experience dining,” a place you’d want to go with your friends so you could awkwardlessly sample each other’s selections.


Or, that’s the goal anyway.


So, wait, what genre is that?


Published on May 09, 2016 08:39

May 2, 2016

What People Are Saying

about the superpowered serial sci-thriller THE MINUS FACTION:


“Gripping, immersive, entertaining. These are not your typical heroes.”


“A unique brand of justice, morality, and heroism. Taut, thrilling entertainment.”


“So easy to read and follow along. The author pulls you right in.”


“Fun, fun, fun!”


“The story of unlikely hero[s]. Crackles with energy and promise.”

-Daniel Swensen, author of ORISON


“Keeps you guessing, keeps you surprised.”


“Rare that I finish [a book] in one sitting. It’s what I’d call neopulp and what most bookstores would shelf among the sf/f books, but it’s also a superheroic book without capes or masks.”


“More than a simple thriller-meets-comic, Wayne gives his characters realism, depth, and heart.”

-Andy Goldman, author of THE ONLY CITY LEFT


“Not what you expect and you WILL NOT be disappointed.”


“Unusual and distinctly clever.”


“What I loved the most is the sense of restraint; turning every page was fraught with the feeling that everything might go very bad, very soon. Excellent characters and an intriguing story.”


“I love that the characters themselves seem to be the driving force behind the story, rather than mere accessories.”


“Grabs you by the throat and keeps you on your toes.”


THE MINUS FACTION “features singular characters, strong voices, believable moral dilemmas, adventure, brutality, beauty, and action. And did I mention the kick-ass writing?”

-LJ Cohen, author of DERELICT


***


The first installment is just a hundred pages and is available free on most online retailers.


#1 Bestseller in Action & Adventure/Sci-fi

#1 Bestseller in Fantasy/Superhero

Top 5 in Thrillers


Episode Five: Aftershock, the penultimate chapter, was just released, and here’s what some of the early readers had to say:


“F**k you, you evil bastard. Also, well done.”


“Wow I did not see the end coming.”


“Fast and furious, with just enough slower scenes to let you go before the next!”


“I hate you so much right now.”


“I was totally blown away by the big reveal.”


“[The story is] astounding and devastating.”


“I feel so much for Xana. There’s no one like her in anything you’ve read.”


“I’m drooling.”


“Holy s**t. Actually started it and couldn’t stop reading until I was done. Intense. What a freaking ride.”


“Why isn’t everyone reading this series??”


“I am totally recommending people buy every single book in this series. I loved it that much.”




Published on May 02, 2016 08:02

April 20, 2016

The Anatomy of Excellence in Art + Fiction

Hugo Froelich created the following diagram, The Stages of Conventionalization, in 1905 for Keramic Studio Magazine.


Cicada, Stages of Conventionalization, Hugo Froelich, Keramic Studio Magazine, 1905


It proposes a hierarchy of representation, what’s sometimes called a mode of genre. Impressionism, for example, employs a decorative representation, whereas the fractured faces of Picasso are symbolic. (They are, after all, still recognizable as faces.)


This descriptive hierarchy applies to books as well as painting. Indeed, it applies to any medium, including film, sculpture, poetry, and the rest. Even blog posts! The novels of Tolstoy, for example, are iconically realistic. Several of the works of Philip K. Dick, on the other hand, are notably symbolic. (Note, this is different than what we normally call symbolism in prose. Just about every text has symbolism. Not many are symbolic in the way Froelich intends.)


In most cases — the “realist novel” being the exception — the mode and the genre will not be the same. Yet, any genre, be it in painting or prose, will have a mode (or more likely range of modes). Your standard romance novel, for example, is typically decorative because that’s what best accents the tension of the genre: the flutters of love.


But one could, theoretically, write romance by scientific report. In fact, many of the tropes of the genre include situations that would reasonably involve the police, or a reporter, and so you could follow the foibles of a love triangle as told entirely in news articles, police reports, eyewitness accounts, and the like. You could describe the rise and fall of an affair in a series of scientific monographs.


Such gymnastics of form are, to my mind, the lowest category of innovation: the novelty. It IS innovation, certainly, and if someone wanted to write the kind of thing I just described, there would be value in it. But it would likely function first as an oddity — a puzzle box in prose — rather than as high romance, and we would appreciate it first as such, not least because it is difficult for us as readers to experience the flutters of love when shoved into a mode that is distant from the characters.


Artists of the Western Canon believed, as they moved progressively down Froelich’s diagram, that they were tapping the vein of the universe, that each move away from the local and contingent was a move toward the essential and universal, such that by the early 20th century, the best artists in the West were producing highly nuanced works with strong tribal or “primitive” overtones — effectively eliminating so much art history, as a seamstress nips a stray loop of thread — and within 50 years reached the epitome of the formative and abstract.




(Rothko at MoMA)


They believed that, as we escaped the parochial, we could shed the burdens of our culture and discover, or at least participate, in something superlative, timeless.


In order to understand that, you have to appreciate the existence of the burden. Take this work by contemporary artist Sean Norvet. Aliens visiting earth for the first time would have no idea what to make of it. If they have mastered interstellar travel, I suspect they’d be able to figure it out, but at first it would strike them as a painting of a pile of trash.


Sean Norvet


Think about how much you have to bring with you to unpack this image. You have to know what an aluminum can is, and a milk carton, and salmon steak, and a plaster cast, and a bucket of chicken, and a cigarette, and a lawn chair. But more than that, you also have to know what everything means: that people recline at the beach, for example, usually on vacation, and so this is a person of some wealth; that juveniles inscribe messages on casts of broken bones and these can include logos of bands, like Metallica, and that such bands are associated with a white, middle class, suburban socio-ethnic status; that we call idiots blockheads; that fried chicken is unhealthy; that Jesus was pierced through his hands and feet and thence revered as a savior; that a long cigarette with a dangling ash connotes a casual aloofness, like Hunter Thompson in Las Vegas; that women are objectified for round breasts and full hips; and on and on and on.


More than that, you have to know how each of these things operates at a meta-level, and how the medium of painting is supposed to work, that it’s considered very different from cartoon, for example, whose style Norvet adds (in the drooling eyes, for example, and the spittle below) to ironic effect.


This picture is an indictment of Western consumerism, perhaps even of the culture as a whole, but it only resonates — if at all — if you are already familiar, first, with all of the elements of the composition (those mentioned and many more), but also with the meta-historical-cultural milieu which idolizes the philosophy of decline.


Aliens wouldn’t get that, I suspect, without some explanation. Nor would anyone from, say, a thousand years ago. This is because both groups, looking at the culture that produced this, would struggle to find decline. On just about any measure, our present society surpasses everything that has come before: sanitation, literacy, education, morbidity and mortality, access to medical care, amount of free time, potential for self-actualization, number of active human connections, job mobility, entertainment options, diversity of nutrients and foodstuffs, status of women and minorities, penetration of space, volume of literature, breadth of scientific knowledge, styles of art and music, and so on.


In short, you have to already believe in the myth of decline — unknowingly, and as exactly that: a myth, an unquestioned explanation of the world — for Norvet’s painting to make any sense, and you have to be aware that, in the present social circumstances, it’s considered de rigueur to be pessimistic about history, that any optimism, such as that we could find a solution to the environmental crisis, or the obesity epidemic, is both utopian and decadent, and that the cool kids all think shit’s going down the toilet, man. And fast.


By contrast, consider the following piece by Robert McCall, who had a long career painting for NASA, among others.


Robert McCall A Moment in Time (1991)


I’ll let you dwell for a moment.


I’m not saying that one is right or better, or even more representative. Neither is ever likely to be considered museum quality. But both were produced by professional artists — that is, our society afforded them a living for their work, so they can surely be “read” as contrasting texts of their parent culture.


There is something that leads us to identify with the message of decline in the Norvet piece, despite the objective facts, and to see the work by McCall as perhaps a bit too martial and utopian, even childish, despite its subject matter. THAT KIND OF implicit understanding is what you come with, fully laden, to every novel, every work of art, and that “laden-ness” is (part of) what modern artists have been progressively trying both to illuminate and ultimately transcend — to find the universal, perhaps even the divine of the modern age.


It’s stupid. Really.


It’s stupid because excellence, an emergent quality, doesn’t reside in primary characteristics. It can’t. What is an excellence of blue? Or of round? Nonsense, that’s what.


Now, that’s not to say there’s no value in a Rothko or a Pollock. Of course there is. Some of the work from that era is sublime. But if so, it’s because of the elements that don’t participate in the universal: the careful yet unstructured repetition in the Pollock, not unlike the jockeying of taxis on the street, and Rothko’s calculated fringes, like the fraying of art itself. And none of that is any more essential or sublime than Caravaggio’s mystical use of light, or the peaceful grandeur of the daibutsu Buddha.


Excellence emerges. It arises, like cicadas from the ground, and it need be neither elementary nor abstract. More to the point, the belief that there is ascendancy in abstraction requires a prior commitment to the very ideals that 20th century artists were trying to escape: that there is a place art can occupy where it is free and unencumbered by the circumstances of its manufacture.


Contemporary art, recognizing this fallacy, sort of collapsed in on itself from on high. The art world is still sorting through the rubble. I’ll let you know if anything turns up.


The novel fared somewhat better, but then, being tethered to narrative, there was only so far we could go down Froelich’s diagram. There have been abstract and gibberish novels produced, and a rare few — such as the nonsense of Lewis Carroll or the Codex Seraphinianus — are excellent, but for the most part, what we want when we experience a book, even a literary or “high art” one, is some kind of story.


Note, the order of the telling is unimportant because we always reconstruct an ordered flow in our head. That is, even if you are clever and tell your story in tangents, or in backwards order, our brain rearranges the pieces into a pro-temporal narrative, which is why so many of those tricks of form, such as Christopher Nolan’s 2000 film Memento, can only ever fall into the category novelty — no subsequent experience can recreate the first.


This year, across much of the Midwest and Mid-Atlantic of North America, cicada Brood V will emerge in fantastic numbers after 17 years underground. By means we don’t yet understand, they will coordinate their emergence for the exact opposite reason that the artist paints or the writer writes: so as not to be noticed. By appearing in the millions at the same time, the cicadas saturate their predators. All the animals that eat them will quickly have their fill, leaving the vast majority of insects safely anonymous even as they chatter loudly for a mate.


It is crucial to note that this emergent act, this orgy of fertility, also marks their death. After 17 years as a grub, their eruption, transformation, and reproduction is the very last thing they do.


This year also, something like one-and-a-half million new books will be produced. Like a brood of cicadas, each will chatter for attention in a deafening crowd. But where the red-eyed insects want to avoid being eaten, the red-eyed writer does not. We WANT people to consume our work. We WANT to be found. And we’d rather it not be on the last day of our lives.


The best way to emerge, I believe, is not through a novelty of form or a transcending abstraction or any conscious participation in the “literary” or “artistic,” but rather through an uncommon excellence in the common mode of your time and place. And you can do that anywhere in any genre. Write something that rewards, in multiples, the time it takes to read it.


I have a new book coming out next week. It is, as I’ve said each of the last few times, among my best to date. The first in the series is permanently free (in electronic format) either through Amazon or my website. The last in the series comes out this summer. Each installment is short — less than 200 pages — and not like anything you’ve read before.



Alt Episode 1
Alt Episode 2
Alt Episode 3
Alt Episode 4
Alt Episode 5
Published on April 20, 2016 05:26

April 14, 2016

In which I ramble about art & criticism

I feel like it should be kind of taboo for a writer to talk about the beta reading experience, except in very general ways. I mean, beta readers are giving up their free time — which I am certainly stingy with myself — to offer free criticism on a manuscript. That criticism is absolutely vital, especially for people serious about achieving excellence, and given how reluctant almost everyone is to give critical feedback, I wouldn’t want to do anything that jeopardizes engagement.


All that to say, I apologize if this is a little vague.



But beta feedback is really difficult to sort through. In the first place, everyone’s experience of the text is valid. And I don’t mean that in a New Age-y, touchy-feely way. With books, even more than with movies or comics (although those too), the experience is necessarily personal. So much of what film presents passively must, in a book, be “unpacked” by the reader. Yet there are only so many things an author can say about the setting, for example, before the description starts to get tedious. Good writers must be both parsimonious and artful, which is what Hemingway was getting at when he said you didn’t need to tell people about the boat or the fisherman or the water. Just tell them about the glint of dew on the fishing line, and they would bring all the rest with them for “free.”


A book is not a passive medium, like a TV. It has to be read, to be stretched upon the canvas of the mind, and the narrative is built out of that based on the reader’s own experiences and expectations. It’s akin to a reader staging their own play with their own sets and cast. It’s its own creative act, and so it is necessarily valid. (Or at least it is not _in_valid.)


But then, if each reader’s experience is valid, it means no one reader’s experience is canon, and so writers must become critical readers of their own critical feedback. Just as each beta reader unpacks the text in their head, so each writer must do the same with criticism. They have to infer and construct the reader’s critical milieu in their heads based on cues and experience.


For example, I am usually able to tell who didn’t engage with the story. I can’t describe this feeling in a few sentences, but it’s usually quite clear. There is a kind of detached language coupled with an odd preoccupation with minute detail. This person did not unpack the narrative in their mind; they studied it at arm’s length.


I usually discount such feedback. As much as I appreciate the time and effort — truly, per my comments above — it’s just not very useful to me. Not every reader is going to be a reader of mine, and I am not trying to make a book everyone will enjoy, not least because no such thing is possible. I’m trying to write a book that “readers of mine” would enjoy, where that class is both fluid and fuzzy at the edges.


Now, I said “usually” because it does happen that people who would otherwise be “readers of mine,” based on earlier experience, don’t engage with the story. That’s what happened with the first draft of Episode Three, and it’s why I took the time to completely rewrite it.


And then there’s the fact that if you ask people for criticism, they’re going to find things to criticize. It’s a lesson I learned in my corporate career working with lawyers and auditors, whose job it is to pick at things. A healthy business can become sickly merely by hiring a lawyer. (I once asked a VP at Ernst & Young if, in 35 years, he had ever produced a clean audit — that is, NOT found any issue — and he just smiled and said “Well, that’s not what you do, is it?”)


That being said, as a writer you have asked for feedback, so it makes no sense to simply turn around and ignore it once it comes. It’s just that deciding what to do with it isn’t always easy.


Neil Gaiman once said that when people tell you something doesn’t work, they are usually right, but when they tell you how to fix it, they are usually wrong. I think that’s true, but only in the aggregate. “Doesn’t work” is defined in the plural. Multiple people need to flag a problem for it not to be idiosyncratic.


Thus, regardless of what appears in my inbox — and for those who’ve never solicited beta readers, responses range from “I really liked this story and I feel bad because I don’t know what else to say” to “Here is an itemized list of everything wrong with the book complete with page numbers for reference and suggestions for how to fix it!” — beta feedback is not any of that. It is NOT what beta readers have typed. It is, like the experience of a novel in the reader’s mind, a virtual entity constructed in my head from my readings of the complete volume of feedback, and it may include all or none of what any individual reported.


This is, like anything else, something one can be either good or bad at, and I suspect those who are better at it, naturally, are the ones producing better books. And so, as arrogant as it may sound, I must conclude that the improvement grows from the talent of the author, not the quality of the feedback.


That is, critics don’t make artists better. Better artists grow out of criticism.




Published on April 14, 2016 22:40