Lily Salter's Blog, page 978

October 18, 2015

This is the one change by millennials that will change absolutely everything

Let’s take a look at the era that began in 2001, when the first Millennials graduated college, got jobs, and started families. Eight years later, in 2009, Millennials drove 23 percent fewer miles on average than their same-age predecessors did in 2001. That is, their average mileage—VMT, or vehicle miles traveled—plummeted from 10,300 miles a year to 7,900, a difference of 2,400 miles a year, or 46 fewer miles a week. It’s not that they stopped traveling. While Millennials made 15 percent fewer trips by car, they took 16 percent more bike trips than their same-age predecessors did in 2001, and their public-transit passenger miles increased by a whopping 40 percent. That’s 117 more miles annually biking, walking, or taking public transit than their same-age predecessors used in 2001.

When a cohort of the size of the Millennial generation changes behavior that radically, it’s a little like what happens when a third of the people on board a ferry decide to move from starboard to port: the entire boat starts to list. Which is what is happening to the United States. In every five-year period from 1945 to 2004, Americans had driven more miles than they did the half-decade before. In 2004, the average American drove 85 percent more than in 1970. But by 2011, the average American was driving 6 percent fewer miles than in 2004. Baby Boomers and Gen Xers were a small part of the reason—they drove somewhat less in 2009 than in 2001—but the big cause was the Millennials. What makes this even more dramatic is that, by 2009, only half the Millennial generation was even out of high school. If all eighty million Millennials retain their current driving habits for the next twenty-five years, the US population will increase by 21 percent, but total VMT will be even less than it is today, and per capita VMT—the vehicle miles traveled per person—will fall off the table.

Some of the consequences of what happened are unambiguously positive. Americans spent 421 million fewer hours stuck in traffic in 2011 than they did in 2005. For the first time, the number of cars being “retired” is actually greater than the number of new cars being sold. Other results are more complicated. When the irresistible force of the Millennials hits the immovable object of America’s car-centric transportation infrastructure, there are going to be a lot of very interesting side-effects.

Gas consumption in 2014 was at a ten-year low, which is definitely a good thing for anyone who thinks that US foreign policy ought not to be driven by the need to secure sources of petroleum in dangerous parts of the world. But it’s also the reason the Highway Trust Fund was on the brink of bankruptcy in 2014: less gas purchased means fewer gas tax dollars for roadways. In the same way, Millennial housing choices are revitalizing thousands of neighborhoods that were built before the convenience of automobile drivers became paramount, but are leaving a lot of suburban housing stock behind. In 2006—before the crash of 2008—urban planner Arthur C. Nelson wrote an article in the Journal of the American Planning Association that estimated that, by 2025, the United States will have 22 million unwanted large-lot suburban homes.

Housing values. Energy policy. Health costs. Taxes. The future of the car business. It’s probably not possible to list all the implications of the Millennial turnaround on cars and driving. But there’s one big question that can’t be avoided: Why? Why, for the first time since the Model T, are Americans less interested in driving?
There are dozens of answers to those questions in wide circulation among policy wonks, urban historians, and transportation engineers—some of them better than others. One not-so-persuasive reason that I hear a lot is the economic one. In this version, the reason for the dramatic drop-off in driving among Millennials is the recession of 2008, which was not only the worst financial crisis since the Great Depression but hit the Millennials especially hard. If you finished college in 2008 or 2009, you (1) were almost certainly a Millennial and (2) had a really hard time finding a decent job. Meanwhile, it was true that the price of a new or used car held pretty steady during the years after 2008, and, because interest rates declined even faster than per capita VMT, the real cost of buying a car actually declined, at least for buyers who could get a car loan. However, the price of gasoline increased substantially. Between 2001 and 2010, in fact, the average American’s bill for filling up the tank increased from $1,100 to $2,300 (in 2011 dollars). In this scenario, driving less was just a rational, and temporary, expedient.

It makes sense. Except that the decline in VMT among Millennials—and everyone, really—began in 2004. And it has continued through 2014, long after the worst effects of the Great Recession have passed. It’s not that economic downturns don’t affect driving behavior; it’s that once the downturn is over, Americans have always returned to their cars. But not this time. It’s not the economy, stupid.

Nor can the Millennials’ choices be explained away by college debt. Though recent college graduates are likely to have borrowed more money than previous generations to pay for their diplomas (and the amounts in question are larger than ever), there is no data showing a correlation between the amount of debt owed and the debtor’s VMT. Nor is environmentalism the cause. In a 2011 poll, only 16 percent of Millennials strongly agreed with the statement, “I want to protect the environment, so I drive less.” So if it isn’t the recession, or debt, or environmentalism, then what has completely transformed the minds of a significant portion of a very large generation?

A more plausible reason for the sea change in Millennial behavior is that they are the first generation that started driving in the age of graduated driver licensing statutes. In 1996, the year the first Millennials were turning fifteen and sixteen, Florida enacted America’s first comprehensive GDL program, which broke the process of getting a driver’s license into stages. At the first stage, a learner’s permit was granted upon passing a written driving exam, and the licensee was required to take a state-sanctioned driving course, frequently one that cost $500 or more. The second stage offered the new driver, after completion of a road test, an intermediate license that restricted driving in substantial ways. Only after completing the first two stages was a full license available. The GDL laws decreased new drivers’ mobility in order to increase their safety, and they worked so well—fatal crashes involving sixteen-year-old drivers dropped by a quarter between 1995 and 2005—that every state now has its own version. And GDL programs don’t just delay driving; in many cases they reduce it permanently, since history shows that, if drivers haven’t gotten licensed by the time they’re twenty, they’re unlikely ever to do so.
Anything that slows down the process of licensing between the ages of sixteen and twenty, or raises its costs, can have a very long tail of consequences. This one sure does; a study from the AAA Foundation for Traffic Safety revealed that only 44 percent of teenagers obtain a driver’s license within a year of becoming eligible for one. According to the Federal Highway Administration, only 46.3 percent have a license by the time they turn nineteen. In 1998, the number was 64.4 percent.

An even bigger reason for the decline in VMT among Millennials isn’t economic, or even statutory. It’s digital. The Internet, and the spectrum of technologies that have been developed to exploit it commercially, has changed everything from the way we buy groceries to the way we find romantic partners. It is no big surprise, then, to find that it’s changed the way we get from place to place, too. One way it’s changed our mobility patterns—and by “our” I mean anyone who has ever bought anything from Amazon, eBay, or WalMart.com—is by changing the way we shop. By the time you read these words, online shopping will account for at least 9 percent of all retail sales in the United States: more than $300 billion in 2014, up from “only” $134 billion in 2007. Not only are 190 million Americans hitting “buy” buttons on a regular basis, but they’re spending more and more every year, for an average of more than $1,700 annually.

That huge diversion of consumer buying dollars from in-store to in-home has implications for the average American’s VMT, but their magnitude is a little unclear. One of the best studies estimates that every one hundred minutes spent shopping from home is associated with five fewer minutes in shopping travel time and a one-mile reduction in distance traveled. Every six hours spent shopping online substitutes for one entire shopping trip. However, while the growth of online shopping is clearly reducing shopping travel overall, it’s not so obvious that Millennials are more affected by the phenomenon than their Baby Boomer and Gen X parents. The real impact of the Internet on Millennial transportation choices is someplace else.

One of those “someplace else” possibilities is that the digital revolution affects travel for socializing as much as, or more than, it does travel for shopping. A 2011 survey by KRC Research asked different age groups whether they “sometimes choose to spend time with friends [via social media] instead of driving to see them.” Only 18 percent of Baby Boomers answered “yes.” Millennials? Fifty-four percent. The number one transportation trend identified by Millennials in a 2014 survey was “socializing while traveling.”

So the big impact of the Internet might not be that it makes driving less essential, but that it makes other transportation options, particularly transit, more appealing. Millions of people of all ages have grown to rely on 24/7 access to the Internet, whether they’re looking up a movie on IMDB while simultaneously watching it, or following a baseball game in real time on ESPN, or obsessively checking for Facebook updates, e-mails, and texts. But no one depends on that kind of access more than Millennials, or is more likely to feel unsettled when they can’t have it. You can text on the bus or the train, but—hopefully—not while driving. Even better, you can do nearly everything on a hands-free transit option that you can do at home, including checking out the transit options themselves.
That’s because the characteristic that really distinguishes public transit from the automobile is that transit delivers service according to regular schedules. One thing the Internet does unambiguously well is to make information that used to be expensive and scarce now cheap and abundant. You don’t have to spend ten years learning the commuting ropes to know whether the train or bus you’re on is an express or a local, or even when it’s going to show up. You just need a smartphone. Smartphones are also all that’s needed to take advantage of other revolutionary new transportation options: ride-sharing services like Via, car-sharing like Zipcar, and—especially—dispatchable taxi services like Uber and Lyft. These and other cool new businesses didn’t create Millennial distaste for driving. But they did exploit it.

Excerpted from "Street Smart: The Rise of Cities and the Fall of Cars" by Sam Schwartz. Reprinted with permission from PublicAffairs. Copyright © 2015 by Sam Schwartz. All rights reserved.

Published on October 18, 2015 11:00

Ted Cruz has lost his mind: “Republican leadership are the most effective Democrat leaders we’ve ever seen”

This is the way reality looks to Ted Cruz: The deeply conservative leadership of the Republican Party just isn't deeply conservative enough. On "Meet the Press" this morning, Cruz actually tried to make this case. Yes, with a straight face. Republican majorities in Congress, he suggested, have actually partnered and compromised and cooperated with President Obama and his agenda too much.
In fact, what the Republican majorities have done, we came back right after the last election, passed a trillion dollars cromnibus bill, filled with corporate welfare reform.  Then Republican leadership and-- and leadership joined with Harry Reid and the Democrats to do that. Then leadership voted to fund Obamacare.  Then they voted to fund amnesty.  Then they voted to fund Planned Parenthood.  And then Republican leadership took the lead confirming Loretta Lynch as Attorney General.  Now Chuck, which one of those decisions is one iota different than what would happened under Harry Reid and the Democrats?  The truth of the matter is Republican leadership are the most effective Democrat leaders we've ever seen.  They've passed more Democratic priorities than Harry Reid ever could.
Cruz also argued that Donald Trump has been "immensely beneficial for our campaign. ... (T)he reason is he's framed the central issue of this Republican primary as who will stand up to Washington? Well, the natural follow up if that's the question is who actually has stood up to Washington? Who has stood up to both Democrats and to leaders in their own party?"

Published on October 18, 2015 10:43

Journalism is too hard for Hollywood: Dan Rather, George W. Bush & the misunderstood media scandal of the century

It's no surprise that "Truth," the new movie about the journalistic firestorm that took down Dan Rather, has caused so much controversy. In its retelling of the scandal that engulfed Rather, his producer Mary Mapes and CBS News after their botched story about George W. Bush's time in the Texas Air National Guard blew up in their faces, "Truth" touches on one of the most politically and personally charged moments in recent media history. CBS is incensed enough about the film that it has refused to air any ads for it on its network. The people behind "Truth" have defended it as a movie that seeks to go beyond the particulars of the saga and raise broader questions about reporting and corporate America.

The reality is that neither side has covered itself in glory. CBS is guilty of, at the very least, massive corporate overreach, but "Truth" is so conspicuously one-sided that it was bound to provoke a heated reaction.

The outline of the story "Truth" tells is familiar enough to anybody who was around in 2004. Mapes, a star producer for Rather at CBS, got her hands on memos that seemingly confirmed rumors about the special treatment Bush had received during his military days, and built a "60 Minutes" report for Rather around them. The memos, however, were torn to shreds almost instantly, with both conservative bloggers and experts disputing their authenticity. CBS ultimately retracted the story, apologized, fired Mapes, forced three other producers to resign, and forced Rather out of his anchor chair at the "CBS Evening News."

Ever since then, the toxicity of the scandal has lingered. Rather and Mapes insist to this day that the central contention of their story--that Bush shirked his military duties and got away with it because of his connections--is true, and that they were railroaded by a panicked team of corporate suits. Mapes recently said the errors fell within the range of "normal journalistic bungle."

That, of course, is a quite generous reading of events. When you mess up on a highly damaging story about the president of the United States two months before an election, you're out of the realm of normalcy, no matter how much you did or didn't bungle. But that's the story "Truth," which was based on Mapes's memoir, wants to tell. And that's the biggest flaw in the movie.

Delve even slightly into the "Rathergate" scandal and you will tumble down several simultaneous rabbit holes. The story of Bush's military service is decidedly murky. There's a boatload of evidence to suggest that he received preferential treatment for years, but that evidence is tied up with so much history, hearsay and rumor that it's a far cry from the tidy report that "60 Minutes" presented to the world.

"Truth" does not exactly shy away from detailing the mistakes Mapes and her team made in pursuing the Bush story, but it definitely soft-pedals them. There's a central problem that, despite the film's best efforts, it can't overcome: The "60 Minutes" report was partially centered around documents that the producers couldn't reasonably authenticate. It's all well and good for Rather and Mapes to complain, as they have done for over a decade, that the focus on the memos obliterated any consideration of the rest of their story, which included on-camera interviews with people who said they had intervened to help Bush out during his time in the military. But that is a problem entirely of their own making. All these years later, it remains a wonder that so much caution was abandoned on such a sensitive story.

The report that CBS commissioned after the scandal has itself proven contentious, but it makes clear that the producing team barreled past a series of red flags about the documents in its rush to get the story on air. "Truth" somewhat acknowledges this, but moves past it in its effort to cast Mapes and Rather as noble victims of a corporate purge. In doing so, it weakens its own cause.

It's hard for us to take seriously the very pertinent questions the film raises about the connections between CBS and the Bush administration--as well as its broader points about the sanitizing of TV news--when the nagging problem of its hagiographic storytelling keeps intervening.

In a way, the biggest letdown of "Truth" is that it fails to grapple with some of the more mundane issues that the Rathergate mess illuminates. Apart from anything else, the scandal should remind us of the inherent limitations of broadcast news. Television demands dramatic revelations and firm conclusions. A news interview has to point conclusively in one direction. It is not enough to merely raise questions. Mapes and her team did not just err because their journalistic eyes were too big for their stomachs. They were also trying to stuff an unwieldy, muddy story into a neat 13-minute package, because that's exactly what "60 Minutes" is supposed to do.

Content aside, "Truth" holds few shockers as a piece of filmmaking. It's an almost old-fashioned piece, sturdy, formally conservative and unsubtle. It's never boring—though it's about 25 minutes too long—but it never reaches past any of the familiar tropes of this kind of movie. Writer/director James Vanderbilt lets his actors, especially Cate Blanchett, who plays Mapes, and Robert Redford, who plays Rather, shoulder the burden of the work. The results are a mixed bag. As Mapes, Blanchett is typically electric, a wounded bird of prey whose life spirals out of control as her report unravels. Blanchett is never the most relaxed of performers, and her intensity is a natural fit for the hard-charging Mapes.

Redford is, well, Redford, so Rather comes off as the saintliest of saints—an almost amusingly worshipful take on one of the more controversial and psychologically complex icons of journalism. The film only hints at Rather's almost total lack of involvement in the architecture of the doomed report, turning him into a reassuring father figure who honorably goes down with the ship. It's no wonder Rather and Redford have been doing interviews together.

Despite its title, "Truth" does little to get at the truth of what happened at CBS News—or, for that matter, during George W. Bush's wayward youth. It was probably impossible for anything to really do that, of course. There are too many disputed stories, too much bad blood for that. But it's quite disappointing that something better wasn't made out of such a compelling story.

Published on October 18, 2015 09:00

America enabled radical Islam: How the CIA, George W. Bush and many others helped create ISIS

Since 1980, the United States has intervened in the affairs of fourteen Muslim countries, at worst invading or bombing them. They are (in chronological order) Iran, Libya, Lebanon, Kuwait, Iraq, Somalia, Bosnia, Saudi Arabia, Afghanistan, Sudan, Kosovo, Yemen, Pakistan, and now Syria. Latterly these efforts have been in the name of the War on Terror and the attempt to curb Islamic extremism. Yet for centuries Western countries have sought to harness the power of radical Islam to serve the interests of their own foreign policy. In the case of Britain, this dates back to the days of the Ottoman Empire; in more recent times, the US/UK alliance first courted, then turned against, Islamists in Afghanistan, Iraq, Libya, and Syria. In my view, the policies of the United States and Britain—which see them supporting and arming a variety of groups for short-term military, political, or diplomatic advantage—have directly contributed to the rise of IS. Supporting the Caliphate The Turkish Ottoman Empire was, for centuries, the largest Muslim political entity the world has ever known, encompassing much of North Africa, southeastern Europe, and the Middle East. From the sixteenth century onwards, Britain not only championed the Ottoman Empire but also supported and endorsed the institution of the caliphate and the Sultan’s claim to be the caliph and leader of the ummah (the Muslim world). Britain’s support for the Ottoman Caliph—a policy known as the Eastern Question—was entirely motivated by self-interest. Initially this was so the Ottoman lands would act as a buffer against its regional imperial rivals, France and Russia; subsequently, following the colonization of India, the Ottoman territories acted to protect Britain’s eastward trade routes. This support was not merely diplomatic; it translated into military action. In the Crimean War (1854–56), Britain fought with the Ottoman Empire against Russia and won. It was only with the onset of the First World War in 1914 that this 400-year-old regional paradigm unraveled. When Mehmed V sided with the Germans, Britain was reluctantly excluded from dealing with the caliphate’s catchment of over 15 million Muslims, reasoning that “whoever controlled the person of the Caliph, controlled Sunni Islam.” London decided that an Arab uprising to unseat Mehmed would enable them to reassign the role of caliph to a trusted and more malleable ally: Hussein bin Ali Hussein, the sherif of Mecca and a direct descendant, it is claimed, of the Prophet Muhammad. The British employed racism to garner support for the uprising, appealing to the Arabs’ sense of ownership over Islam, which had originated in Mecca and Medina, not among the Turks of Constantinople. A 1914 British proclamation declared, “There is no nation among the Muslims which is now capable of upholding the Islamic Caliphate except the Arab nation.” A letter was dispatched to Sherif Hussein, fomenting his ambition and suggesting, “It may be that an Arab of true race will assume the Caliphate at Mecca or Medina” (Medina being the seat of the first caliphate after the death of the Prophet). Again, the British were prepared to defend the caliphate with the sword, promising to “guarantee the Holy Places against all external aggression.” It is a strange thought that, just 100 years ago, the prosecutors of today’s War on Terror were promising to restore the Islamic caliphate to the Arab world and defend it militarily. 
The Arab Revolt against the Ottoman Empire, fomented by the British, got underway in 1916, the same year that the infamous Sykes-Picot Agreement was made in secret, carving up between the British and French the very lands Sherif Hussein had been promised. Betrayal, manipulation, and self-interest were, and remain, the name of the game when it comes to Western meddling in the Middle East. The revolt would last two years and was a major factor in the fall of the Ottoman Empire. At the same time, the British Army and allied forces, including the Arab Irregulars, were fighting the Ottomans on the battlefields of the First World War. A key figure in these battles was T. E. Lawrence, who became known as Lawrence of Arabia because of the loyalty he engendered in the hearts of Sherif Hussein and his son, Emir Faisal. He was given the status of honorary son by the former, and he fought under the command of the latter in many battles, later becoming Faisal’s advisor. When the Ottomans put a £15,000 reward on Lawrence’s head, no Arab was tempted to betray him. Sadly this honorable behavior and respect were not reciprocated. In a memo to British intelligence in 1916, Lawrence described the hidden agenda behind the Arab uprising: “The Arabs are even less stable than the Turks. If properly handled they would remain in a state of political mosaic, a tissue of small jealous principalities, incapable of cohesion . . . incapable of co-ordinated action against us.” In a subsequent missive he explained, “When war broke out, an urgent need to divide Islam was added. . . . Hussein was ultimately chosen because of the rift he would create in Islam. In other words, divide and rule.” Oil Security and Western Foreign Policy Let us fast-forward to the 1950s and ’60s, by which time oil had become a major factor in the West’s foreign policy agenda. Again, the principle of “divide and rule” was put to work: a 1958 British cabinet memo noted, “Our interest lies . . . in keeping the four principal oil-producing areas [Saudi Arabia, Kuwait, Iran, and Iraq] under separate political control.” The results of this policy saw the West arming both sides in the Iran-Iraq war—which brought both powers to the brink of total destruction in the 1980s—and then intervening militarily with a force of almost 700,000 men in the First Gulf War (to prevent Iraq annexing Kuwait) in 1990–91. The United States, UK, and European powers were also deeply troubled by the cohesive potential of Arab Nationalism, a hugely popular movement led by Egypt’s Gamal Abdel Nasser and his (at that time) mighty allies in Iraq and Syria. The idea of these three huge, left-leaning regional powers becoming politically and militarily united was unacceptable in the Cold War context and remained so after the fall of the Soviet Empire because of the regional threat to Israel. To counteract the rise of pan-Arabism, the West began to support Islamist tendencies within each country—mostly branches of the Muslim Brotherhood—and also worked hard in the diplomatic field to create strong and binding relationships with Islamic, pro-Western monarchies in Saudi Arabia, the Gulf States, and Jordan. These relationships endure to this day. The most extreme manifestation of radical Sunni Islam was Saudi Arabia’s Wahhabism, which it had started to disseminate via a string of international organizations and its self-designated Global Islamic Mission. 
In 1962, Saudi Arabia oversaw the establishment of The Muslim World League, which was largely staffed by exiled members of the Egyptian Muslim Brotherhood. The Muslim Brotherhood’s relationship with the West (and with the Gulf monarchies) has always been inconsistent and entirely selfish. In the run-up to, during, and after the 2011 “Arab Spring” revolution against Hosni Mubarak, the United States and UK were actively supporting the Muslim Brotherhood as the most credible (or only) experienced political entity. In 2014, both countries came under pressure from the Saudis to declare the Muslim Brotherhood a terror group: though neither has yet gone that far, the UK duly launched an official investigation into the group, headed by UK Ambassador to Saudi Arabia, Sir John Jenkins, while in the United States a bill was introduced in Congress, the Muslim Brotherhood Terrorist Designation Act of 2014.

The House of Saud itself feared an “Arab Spring” revolution and encouraged and applauded the June 2013 coup that deposed the Brotherhood’s legitimately elected President Morsi; Saudi King Abdullah phoned coup leader al-Sisi (now the Egyptian president) within hours to congratulate him on his success. Egypt under al-Sisi would prove a better friend to Israel and, like Saudi Arabia, would brutally extinguish any new uprisings, giving the kingdom moral support in its own battle for survival. Saudi political pragmatism (or, as some might frame it, hypocrisy) has been progressively informed by its close relationship with the United States and UK—and is now one of the most significant drivers of the Middle East’s present chaos, including the emergence of ISIS.

Communism: The First Public Enemy Number One

From the 1950s on, the Muslim Brotherhood was supported and funded by the CIA. When Nasser decided to stamp out the movement in Egypt, the CIA helped its leaders migrate to Saudi Arabia, where they were assimilated into the Wahhabi kingdom’s own particular brand of fundamentalism, many rising to positions of great influence. While Saudi Arabia actively prevented the formation of a home-grown branch of the Muslim Brotherhood, it encouraged and financed the movement abroad in other Arab countries. One of the most prominent leaders of the Western-backed Afghan Jihad (1979–89) was a Cairo-educated Muslim Brotherhood member: Burhanuddin Rabbani, head of Jamaat-i-Islami (JI).

America and, to a lesser extent, Britain fretted about the rise of communism, which was perceived and portrayed as the “enemy of freedom”—a term that would later be applied to the Islamic extremists. In geopolitical terms, by the end of the Second World War, the Soviet Union comprised one-sixth of the world’s land mass and was a superpower capable of mounting a devastating challenge to the United States. The White House was also concerned about the future alignment of China, where the Chinese Communist Party had seized power in 1949. Communism was enthusiastically embraced by millions of idealistic post-war Americans and Europeans, posing a perceived domestic political threat. Meanwhile the West observed with horror the increasing popularity of communism and socialism in the Middle East; revolutionary, pro-Soviet, Arab regimes would create an enormous strategic disadvantage and threaten oil security. For the West, radical Islam represented the best way to counter the encroachment of Arab nationalism and communism.
Following the Six-Day War in 1967, US and UK governmental planners noted with satisfaction that Arab unity and sense of a shared cause were finding expression in a revival of Islamic fundamentalism and widespread calls for the implementation of Sharia law. This revival continued through the 1970s and, by the end of the decade, produced the pan-Arab mujahideen that would battle the Soviet armies in Afghanistan for the next ten years. As in Syria and Iraq, the Sunni jihadists were not alone in the insurgency. There were seven major Sunni groups, armed and funded (to the tune of $6 billion) by the United States and Saudi Arabia, as well as the UK, Pakistan, and China. Abdullah Azzam’s Maktab al-Khidamat (the Services Office), which included bin Laden and from which al Qaeda would emerge, was at this point only a sub-group of one of these, the Gulbuddin faction (founded in 1977 by Gulbuddin Hekmatyar). Often overlooked in retelling the story of this particular Afghan war is the fact that the insurgency was pan-Islamic: there were eight Shi‘i groups, trained and funded by Iran. Of the Sunni entities it was backing, the CIA preferred the Afghan-Arabs (as the foreign fighters from Arab countries came to be known) because they found them “easier to read” than their indigenous counterparts. In 2003, Australian-British journalist John Pilger conducted research and concluded, “More than 100,000 Islamic militants were trained in Pakistan between 1986 and 1992, in camps overseen by the CIA and MI6, with the SAS training future al Qaeda and Taliban fighters in bomb-making and other black arts. Their leaders were trained at a CIA camp in Virginia.” That Western interference in Afghanistan actually precedes the Soviet invasion by several months is rarely acknowledged. In the context of this book it is worth tracing the motives and methods employed by foreign powers to further their own ends in that territory, as these have been repeated and modified in Iraq and Syria. Afghanistan’s location and long borders with Iran and Pakistan make it a strategic prize, and rival powers have often fought to control it. A coup in 1978 (the third in five years) brought the pro-Soviet Muhammad Taraki to power, setting off alarm bells in Islamabad, Washington, London, and Riyadh. The Pakistani ISI first tried to foment an Islamist uprising, but this failed owing to lack of popular support. Next, five months before the Soviet invasion, President Jimmy Carter sent covert aid to Islamist opposition groups with the help of Pakistan and Saudi Arabia. Carter’s National Security Advisor, Zbigniew Brzezinski, wrote in a memo to his boss that if the Islamists rose up it would “induce a Soviet military intervention, likely to fail, and give the USSR its own Vietnam.” Another coup in September 1979 brought Deputy Prime Minister Hafizullah Amin to power; Moscow invaded in December, killing Amin and replacing him with its own man, Babrak Karmal. Brzezinski then sent Carter a memo outlining his advised strategy: “We should concert with Islamic countries both a propaganda campaign and a covert action campaign to help the rebels.” On December 18, 1979, British Prime Minister Margaret Thatcher enthusiastically endorsed Washington’s approach at a meeting of the Foreign Policy Association in New York, even praising the Iranian Revolution and concluding, “The Middle East is an area where we have much at stake. . . . It is in our own interest that they build on their own deep, religious traditions. 
We do not wish to see them succumb to the fraudulent appeal of imported Marxism.” Just as IS is a product of Western interference in Iraq and Syria, al Qaeda grew out of the Afghan jihad; none of the powers that backed the Afghan mujahideen anticipated the emergence of al Qaeda, with its vehemently anti-Western agenda and ambition to re-establish the caliphate. Pakistan’s President Pervez Musharraf wrote in his autobiography, “Neither Pakistan nor the US realized what Osama bin Laden would do with the organization we had all allowed him to establish.”

Defining Extremism: The Western Dilemma

In the course of the 1990s, radical political Islam became more extremist—a shift that was encouraged and funded by Saudi Arabia. The star of the Muslim Brotherhood began to wane as its leaders were castigated for being too “moderate” and for participating in the democratic process in Egypt; standing as “independents” (since the Muslim Brotherhood was banned), its candidates fared well, becoming the main opposition force to President Hosni Mubarak. There was another reason for the Muslim Brotherhood falling out of favor with Riyadh—it had supported Saddam Hussein’s 1990 invasion of Kuwait. The House of Saud now linked its survival with the rise of the Salafi-jihadist tendency, which was consistent with its own custom-fit Wahhabi ideology. The West viewed this shift into a more radical gear with some alarm as the Salafists’ battle became international: Arab jihadists traveled to Eastern Europe to fight with the Bosnian Muslims from 1992; New York’s World Trade Center was first bombed by radical Islamists in 1993; and in 1995, North African jihadists from the al Qaeda–linked GIA (Armed Islamic Group, Algeria) planted bombs on the Paris Metro, killing 8 and injuring more than 100.

The United States and UK adopted a remarkably laid-back approach to this new wave of radical Islam. The UK government and security services did not consider that the extremists presented a real danger, allowing the establishment of what the media labeled “Londonistan” through the 1990s. It could be argued that this was a successful arrangement in that, in return for being allowed to live in the British capital and go about their business in peace, the jihadists did not commit any act of violence on British streets. The Syrian jihadist Abu Musab al-Suri (aka Setmariam Nasar) was a leading light among the Londonistan jihadist community, which also included Osama bin Laden’s so-called ambassador to London, Khalid al-Fawwaz. Al-Suri confirmed to me that a tacit covenant was in place between MI6 and the extremists.

Saudi entities and individuals funded al Qaeda and other violent Salafist groups to the tune of $300 million through the 1990s, and the United States and UK remained stalwartly supportive. A year after Margaret Thatcher left parliament for good, she told a 1993 meeting of the Chatham House international affairs think tank, “The Kingdom of Saudi Arabia is a strong force for moderation and stability on the world stage.” When challenged on Riyadh’s appalling human rights record—which included (and still includes) public executions, floggings, stonings, oppression of women, the incarceration of peaceful dissidents, and violent dispersal of any kind of demonstration—she retorted, “I have no intention of meddling in its internal affairs.” Later, Tony Blair would talk of the Middle East’s Axis of Moderation, meaning Saudi Arabia, the Gulf States, Turkey, the Palestinian Authority, and Israel. The First Gulf War brought two changes into play.
The first was that Saudi Arabia now became completely dependent, militarily, on the United States for its survival. The second was that, in an attempt to weaken Saddam Hussein, the CIA encouraged Shi‘i groups in southern Iraq to rebel, resulting in thousands of Shi‘a being slaughtered by regime helicopter fire. George H. W. Bush spent $40 million on clandestine operations in Iraq, flying Shi‘i and Kurdish leaders to Saudi Arabia for training, and creating and funding two opposition groups: the Iraqi National Accord, led by Iyad Alawi (who would collaborate in a failed coup plotted by the CIA’s Iraq Operations Group in 1996) and the Iraqi National Congress, led by Ahmad Chalabi (who was close to Dick Cheney when he was Defense Secretary). And yet, for the next twelve years, Saddam Hussein remained in power despite the punitive sanctions regime. Washington and London continued to believe that an alliance with “moderate” Islam was key to defeating the extremists. A 2004 Whitehall paper by former UK Ambassador to Damascus Basil Eastwood and Richard Murphy, who had been assistant secretary of state under Reagan, noted: “In the Arab Middle East, the awkward truth is that the most significant movements which enjoy popular support are those associated with political Islam.” For the first time, they identified two distinct groups within the political Islamists: those “who seek change but do not advocate violence to overthrow regimes, and the Jihadists . . . who do.” This new paradigm gained traction. In 2006, Tony Blair made it clear that the coming fight in the Middle East would be between the moderate Islamists and the extremists. The West, he told an audience in the World Affairs Council in Los Angeles, should seek to “empower” the moderates. “We want moderate, mainstream Islam to triumph over reactionary Islam.” Blair enlarged on the economic benefits this would accrue to the large transnational enterprises and organizations he championed: “A victory for the moderates means an Islam that is open: open to globalization.” The West continues to behave as if Saudi Arabia can deliver the world from the menace of extremism. Yet the kingdom has spent $50 billion promoting Wahhabism around the world, and most of the funding for al Qaeda—amounting to billions of dollars—still comes from private individuals and organizations in Saudi Arabia. The Sinjar Records (documents captured in Iraq by coalition forces in 2007) provided a clear picture of where foreign jihadists were coming from: Saudi nationals accounted for 45 percent of foreign fighters in Iraq. They swell the ranks of IS today. The Arab revolutions muddied the waters even more, particularly in Libya and Syria, making it almost impossible to distinguish between moderates and extremists. In Libya the West’s intervention strengthened the radicals and liberated stockpiles of Gaddafi’s sophisticated weapons, which were immediately spirited away by the truckload to jihadist strongholds. In the light of that error, President Obama dithered in Syria, much to the fury of his Saudi allies, allowing the most radical of the extremists to prevail: Islamic State. Excerpted from "Islamic State: The Digital Caliphate" by Abdel Bari Atwan. Published by the University of California Press. Copyright © 2015 by Abdel Bari Atwan. Reprinted with permission of the publisher. All rights reserved.

Published on October 18, 2015 08:59

Bernie Sanders to Larry David: Come join me on the campaign trail!

Bernie Sanders has a pretty good sense of humor. He responded to Larry David's "Saturday Night Live" impression of him by telling George Stephanopoulos on "This Week" that he'd like to take David out campaigning with him. "I think we'll use Larry at our next rally. He does better than I do," Sanders said. "He seems to have nailed you!" Stephanopoulos marveled.

Published on October 18, 2015 08:04

October 17, 2015

Margaret Atwood on our real-life dystopia: “What really worries me is creeping dictatorship”

The down-on-their-luck protagonists of Margaret Atwood’s new novel "The Heart Goes Last" become fed up with living out of their car, so they move to a for-profit prison. It’s the near future, shortly after a new financial collapse, and Positron/Consilience — a gated community and a jail all in one — offers Charmaine and Stan the security of a comfortable middle-class existence, every other month; the inhabitants take turns being jailers living in houses and prisoners in cells. This being a Margaret Atwood novel, things don’t work out quite the way poor Stan and Charmaine hope, but the author of the dystopias "The Handmaid’s Tale" and the Maddaddam trilogy ("Oryx and Crake," "The Year of the Flood" and 2013’s "Maddaddam") insists she has just tweaked what’s already happening in the world, including forced labor in prisons and the erosion of civil liberties — what she sees as the “creeping dictatorship” at home in Canada under Prime Minister Stephen Harper, who’s up for election on Monday. On the phone from a Brooklyn hotel, the ever-outspoken Atwood spoke with Salon about dystopias, robot sex and beer — in fiction and reality.

You’re on the road at the moment — I guess there are limits to the use of the LongPen.

I’ve hardly touched ground in Toronto for a month. The LongPen [a remote autograph device which Atwood invented in 2004, in part to relieve authors from heavy touring] has gone over into business and banking … There was a period when people were saying everything was going to be digital, but apparently it isn’t.

Another of your ventures, Noobroo, the fundraising beer inspired by the brewing in Maddaddam, is on shelves now in Ontario. Were you involved in the beer’s creative process?

Yeah, [the brewers, Beau’s] came over to our house with little bottles of powdered stuff, and then they had made a tea out of each of them, and then a blend. One of those ingredients tasted like old running shoes, but funnily enough, the blend that they made tasted better with the old running shoe thing than without it — it gave it a solidity. In the ‘70s when we [Atwood and her partner, Graeme Gibson] lived on a farm, we made all different kinds of wine; we made beer. I think the biggest failure was the ginger beer. We let it go a little too long, and then we took the top off; the entire contents shot out like fireworks!

Having written dystopias before, it seems you’ve outdone yourself with "The Heart Goes Last": It features a dystopia within a dystopia.

It’s true — the outer dystopia is the thugs and living in your car; the inner dystopia is the Consilience/Positron project, so there are layers of utopianism and dystopianism, sort of like an Easter egg.

Despite the darkness, there’s a lot of humour in the book.

It is one of those kinds of literary constructions like "A Midsummer Night’s Dream" in which it’s funny for those watching but not for those to whom it’s actually happening, and indeed when you come to think of comedy itself, that is very often true.

We laugh at others’ misfortunes.

We do to a certain extent. If a person slips on a banana peel, it’s funny. If they slip on the banana peel and break their neck, then it isn’t.

In the Maddaddam books, a pandemic wipes out so much of humanity; you carefully set out the details, whereas in "The Heart Goes Last," the reason for society’s collapse is rather vague.

I think we pretty much do know what it was — it’s the same thing that happened in 2008, so it’s a financial collapse rather than a physical [one]. People did end up on their front lawns and living in their cars, and that is apparently ongoing.

Do you see Positron/Consilience as a logical extension of current for-profit prisons?

The problem with for-profit prisons is that you need an endless supply of prisoners to make it profitable, so there’s no incentive to make it such that criminality is actually reduced. Ultimately you want more criminality; at the very least, you want to be able to define criminality in such a way that enough people get put in prison so you can make a profit out of them. There’s also a clause in the U.S. constitution that says you can’t use slave labor — except when convicted criminals are involved. So all of that is going on right now; [the book offers] just a little twist on it.

You write about people who have the power to change others’ lives by wiping out and changing data, which seems a relatively new development in literature —

Forgery is very old. Think of it as a new form of forgery. What you’re doing is altering perceived reality and substituting a false version of it, and that can have good uses. For instance, a lot of people wouldn’t have escaped Nazi Germany if there hadn’t been good forgers. So a world in which nothing like this could ever happen would be really claustrophobic. It’s like any human tool: there’s potential good uses and bad uses. It depends on who’s got the power and what you think of those people.

Recently, a political column you wrote for the National Post disappeared from the website and then reappeared, minus some cutting comments about Stephen Harper. What happened?

Beats me! [laughs.] I can only suppose that someone at a higher level wanted it gone, and they were very foolish because they evidently didn’t understand how the internet works and how quickly people would notice [the disappearance]. And then, of course, people started parsing the missing pieces. The very things that they had wanted to conceal got a lot more attention than the other ones would have done. If they’d left [the piece] alone, it would have just blown by on the breeze. It wasn’t exactly a deep, dark, heavy, important piece of commentary. It was about [politicians’] hair. [Laughs.] On one hand, I felt that I was in an Orwell novel, but on the other hand, I felt I was in some kind of zany comedy.

On Twitter, you’ve been promoting electing “Anyone but Harper” in the Canadian election. At this point, do you feel any degree of confidence?

I don’t know. … Having been born in ’39, what really worries me is creeping dictatorship. Bill C-51 [the Antiterrorism Act, which legitimizes some forms of torture] + C-24 [which allows for the revocation of Canadian citizenship] is just a recipe for that.

Harper has been speaking out against the wearing of the niqab —

We’re about to be inundated by a horde of face-covering, dangerous females. Like, as if.

And he’s positioning himself as the one candidate who can stop this.

[laughs.] Yeah, well, if people are stupid enough to let themselves be manipulated that way, then they deserve everything they get. … Harper kicked off years ago on the Sikh turban issue: he was all upset about Mounties wearing turbans. What is it about stuff on people’s heads? Turbans, niqabs, [Harper’s political opponent] Justin Trudeau’s hair …

It sounds as if you view another Harper win as a dystopia waiting to happen.

Well you know, once you start releasing big buckets of hate on one group, it means open season on anyone, really, so the question to ask is, who’s next?
At one point in "The Heart Goes Last," Stan works at a warehouse that makes sex robots. Did you write about robots out of an interest in the technology, or as a means to explore the concept of free will?
I think they’re connected. If you follow the artificial intelligence debate, and in fact dating back to the original "R.U.R." or Isaac Asimov’s "I, Robot," it always comes into the picture: At what point does an artificially created thing have free will? Or to flip it around, at what edge does our own free will cease to be plausible? I don’t know whether you’ve caught up with the Pepper robot: Pepper has apparently got the ability to read your emotions. Do we want that? I’m not sure. They’ve deployed Pepper in the East as a greeter, so he greets you as you come into the bank or whatever. And then they put some Peppers on sale for private use, and they put on a notification saying, “You shouldn’t have sex with Pepper.”
Perhaps there could be a dangerous malfunction during sex, such as you describe in the novel —
Well, if you look at Pepper, you’ll see it’s practically impossible: Pepper looks like a chess pawn on a rolling stand.
It might depend on the dimensions of the chess pawn.
No, the chess pawn is quite large.
One of the epigraphs to the book is a Gizmodo review of sex toys.
Well, it was a sofa.
Who got that into their heads that would be a good idea?
It was part of the research [for the book]. I was a bit surprised — I wasn’t expecting to find a sofa.
So many modern-day literary dystopias, such as "The Hunger Games," offer us protagonists who are heroic, or at least represent humanity’s resourceful potential to overcome tyranny. "The Heart Goes Last" doesn’t do this …
They’re romances. ["The Heart Goes Last"] is a comedy. In a romance, good is good; bad is bad. In comedy, people make mistakes, but it comes out OK anyway. In tragedy, people make mistakes, and it doesn’t. … In Shakespeare’s late plays, he takes all of the motifs he was using in tragedy earlier and makes them come out all right.
You are in fact rewriting "The Tempest," are you not?
Yes, that’s up next. That is one of those very plays in which he does that. It’s interesting how that island is a paradise for some and a hellhole to others. I’m not setting that in the future. Breathe a sigh of relief.

Published on October 17, 2015 16:00

Since when is Ronda Rousey a role model?

“Rowdy” Ronda Rousey is an incredible fighter, an Olympian, a movie star, maybe even a burgeoning fashionista and — if you go by countless Internet headlines about the UFC women’s bantamweight champion — an icon and a role model. Rousey started fighting in 2011 and quickly captured the attention of MMA fans. She joined the UFC in 2013, but didn’t ascend to international superstardom until recently, in part thanks to roles in "The Expendables 3," "Entourage" and "Furious 7." When she finally did become a crossover hit, it was massive. Not many MMA fighters receive coverage in The New Yorker and The New York Times, nor do they get shoutouts from celebrities like Beyoncé, Chris Pratt, Shaq and Dwayne “The Rock” Johnson. And the people of Brazil don’t start crying at the sight of other celebrities and sports stars the way they did with Rousey.
While Rousey is everything people say she is (and more) athletically, outside of her martial feats her role model status is questionable. First, Rousey has a history of highly transphobic remarks and flat-out ignorance when it comes to trans athletes. In 2013, when UFC fighter Matt Mitrione made offensive comments about transgender MMA fighter Fallon Fox, calling her a “lying, sick, sociopathic, disgusting freak,” the New York Post asked Rousey for her thoughts on Mitrione’s words. Rousey said he expressed himself “extremely poorly” and that she could “understand the UFC doesn’t want to be associated with views like that.” But in that same interview, Rousey didn’t exactly demonstrate a predilection for sensitivity. “She can try hormones, chop her pecker off, but it’s still the same bone structure a man has,” Rousey said about Fox. “What if she became UFC champion and we had a transgender women’s champion? It’s a very socially difficult situation.” It’s only a “socially difficult” situation thanks to people like Rousey.
Earlier this year, Rousey tried softening her words — though not changing her opinion — in an interview with the Huffington Post. “From what I’ve read, it seems like if you’ve already gone through puberty as a man, even if you really want to rid yourself of those physical advantages, I just don’t think science is there yet.” Rousey studied the wrong “science,” because claims of supposed transgender superiority in athletics are categorically false and have zero scientific backing. But her insensitive musings go beyond transmisogyny and transphobia. This summer, Rousey decried what she called a “do-nothing bitch”:
"I have this one term for the kind of woman that my mother raised me to not be and I call it a 'do-nothing bitch.' The kind of chick that just, like, tries to be pretty and be taken care of by somebody else. That's why I think it's hilarious, like, that people like say that my body looks masculine or something like that. I’m just like, listen, just because my body was developed for a purpose other than fucking millionaires doesn’t mean it’s masculine. I think it’s femininely badass as fuck. Because there’s not a single muscle on my body that isn’t for a purpose. Because I’m not a do-nothing bitch."
Being proud of your body is great. Burying other women because they have a different body than yours is not. As Alanna Vagianos noted in the Huffington Post, Rousey’s DNB speech is “certainly not empowering, and it certainly does nothing to combat the large issues that create a society where athletic bodies like Rousey’s are judged as less than.” Despite such tone-deaf commentary, Rousey became the darling of the Internet media after the ESPYs. She won the “Best Fighter of the Year” award and called out Floyd Mayweather for his history of domestic violence. “I wonder how Floyd feels being beat by a woman for once,” she said. Rousey didn’t let up on the verbal blitz, claiming she made more money per second than Mayweather. The feud spawned billions of “who wins in a fight between Ronda Rousey and Floyd Mayweather???” clickbait takes.
Domestic violence is abhorrent. Yet Rousey is currently dating UFC fighter Travis Browne — a man accused of domestic abuse in his previous relationship. In August, his wife, Jenna Renee Webb, said she’d be pressing charges against Browne, though a third-party investigation done at the behest of the UFC found “inconclusive evidence” of these claims. It gets worse. In her book “My Fight/Your Fight,” Rousey wrote that her then-boyfriend (she used the pseudonym “Snappers McCreepy”) was taking nude pictures of her without her permission. Her reaction follows:
"I deleted the photos. Then I erased the hard drive. Then I waited for Snappers McCreepy to come home from work. I stood frozen like a statue in his kitchen, getting angrier and angrier. I started cracking my knuckles and clenched my teeth. The longer I waited, the madder I got. Forty-five minutes later, he walked in the door. He saw my face and froze. He asked what was wrong and when I didn’t say anything, he started to cry. I slapped him across the face so hard my hand hurt."
Rousey then wrote that “Snappers McCreepy” begged her to let him explain. She refused. He was blocking the door and wouldn’t move out of the way. So she “punched him in the face with a straight right, then a left hook.” She slapped him again. He still didn’t move, so she “grabbed him by the neck of his hoodie, kneed him in the face, and tossed him aside on the kitchen floor.” Rousey had a right to make Mr. McCreepy move out of her way, and what he did to her was extremely wrong, but her response seems a bit excessive nonetheless.
Personal interactions aside, after the Newtown massacre, Rousey raised eyebrows on Twitter when she retweeted a Sandy Hook truther video. When criticized, she tweeted: “asking questions is more patriotic than blindly accepting what you’re told.” Her manager issued a pathetic non-apology, and UFC president Dana White said there was no issue and that the real problem was “people are fucking pussies.”
Why do we tolerate this from Rousey when other celebrities were sacrificed on the altar of public outrage for much less? For the MMA media, it’s an easy answer: they’re largely subservient to the UFC, since the UFC issues press credentials. Write the wrong article and you’ll find your website on the outside looking in. The dissident MMA site CagePotato (disclosure: I worked for CagePotato in the past) ran an entire series on the undue influence the UFC has over the MMA media. Two years ago, Deadspin published a leaked memo from Bleacher Report (I worked for Bleacher Report in the past, too) detailing “things you don’t do” when writing about the UFC. This summer, Dana White suggested on Twitter that he paid USA Today for coverage.
But that’s the MMA media — a horde of fanboys and UFC bootlickers. What about the mainstream entertainment media? What’s their excuse? Maybe the answer is that MMA — a sport rife with misogyny and other twisted views — expects less-than-perfect behavior from its stars; supremacy inside the cage and sordidness outside of it are accepted. And perhaps the mainstream media finds the narrative of an unstoppable, badass warrior woman destroying everything in her path irresistible at a time when feminism is at the forefront of the cultural zeitgeist (a woman who also happens to be conventionally attractive and white; if she weren’t, they’d probably be writing horrific, offensive articles about her).

There’s “problematic fave” and then there’s flat-out bad person. The Mary Sue’s Teresa Jusino nailed it when she wrote “Rousey is a hypocrite who flouts gender norms when it suits her, but throws women under the bus when it doesn’t.”

She’s also a transphobe, a body-shamer, and a Sandy Hook truther. A winning combination inside the Octagon, perhaps, but certainly not outside of it. Ronda Rousey isn’t a hero. Ronda Rousey isn’t a role model. Ronda Rousey beats people up in a cage. Let’s not pretend she does more than that.

Published on October 17, 2015 14:00