Atlantic Monthly Contributors’ Blog

December 9, 2015

This Is Why You Don’t Sell the One Copy of Your Album for $2 Million

The news that Wu-Tang Clan had recorded an album of which there would only be one copy, sold for millions of dollars at auction, with the stipulation that it couldn’t be resold, has inspired debate for more than a year. Some people, like RZA, have argued that the plan is a radical statement on behalf of the value of music and the album format. Others have argued that it’s an insult to fans, a capitulation to traditional ideas about exclusivity and power that rap once railed against, and a demonstration of how capitalism can hurt art.

The debate is now settled. Wu-Tang has made a horrible mistake.

Once Upon a Time in Shaolin has been sold to Martin Shkreli, the 32-year-old pharmaceutical executive who triggered outrage worldwide earlier this year when his company increased the price of a drug used to treat some AIDS sufferers by more than 5,000 percent—from $13.50 to $750 a tablet. He recently said he wished he’d raised it more. He appears to have bought this album in hopes of scoring dates, and for now, he does not seem interested in letting the public hear it.

The Bloomberg Businessweek article that broke the news of Shkreli’s winning bid quotes him as saying he has not yet listened to the album, even though the deal—for a rumored $2 million—closed months ago. He did delegate to an employee the task of confirming that all the songs were there. So why buy it? He said he made his final decision once the auction-house representative told him that doing so would give him “the opportunity to rub shoulders with celebrities and rappers who would want to hear it.”

Remedying loneliness with property is an ongoing part of his narrative. When the public anger toward “Pharma Bro” erupted in September, a certain amount of attention went to his dating profiles; our James Hamblin offered a close reading of his OK Cupid page, where Shkreli listed his income at more than a million dollars. The Bloomberg story mentions that on Twitter, he’d joked about buying Katy Perry’s guitar to get a date with her. And it quotes him as saying he could be convinced to listen to Shaolin “if Taylor Swift wants to hear it or something like that.” Barring that, he’s saving it “for a rainy day.”

Maybe he’s just trolling. After the news broke he’d bought the album, he tweeted out a YouTube link: “Live streaming. Talking music, drugs and stuff. May play something special.” In the time I’ve been tuning in, he’s just played emo metal on Spotify while looking into the camera with an expression that could be used to illustrate the word “smug.” Some of the commenters appear to be Wu-Tang fans, begging him to leak Once Upon a Time in Shaolin or at least play a song. “Album is in a vault,” he replied. “I probably won't listen to it for years.” On Twitter, he also wrote, “If there is a curious gap in your favorite artist’s discography, well, now you know why.”

Breaking: On the livestream, he’s started a list of bands he’d hire to make “a record that I would pay the artist to release that would be just for me.”

It doesn’t seem that this was the ideal outcome, at least from Wu-Tang Clan’s perspective. “The sale of Once Upon a Time in Shaolin was agreed upon in May, well before Martin Skhreli’s [sic] business practices came to light,” RZA said in an email to Businessweek. “We decided to give a significant portion of the proceeds to charity.”











Published on December 09, 2015 08:10

Is the Mystery Behind Bitcoin’s Creator Finally Over?

On Tuesday, Wired and Gizmodo both named a man they said could be one of the people behind Satoshi Nakamoto, the mysterious creator of the virtual currency Bitcoin. On Wednesday, Australian Federal Police raided the home of a Sydney-based technology entrepreneur named in both those stories.

The Australian Broadcasting Corp. reported police raided the home of Craig Steven Wright, 44, on a warrant issued by the Australian Taxation Office. The raid, the broadcaster said, is not believed to be connected to the reports in Wired and Gizmodo that linked Wright to Bitcoin.

The publications reported that Wright developed Bitcoin along with Dave Kleiman, an American computer-forensics expert who died in 2013. Both relied on documents, leaked emails, interviews, and a transcript of a meeting reportedly between Wright and Australian tax officials to make the connection.

In the transcript cited by Wired, Wright says: “I did my best to try and hide the fact that I’ve been running bitcoin since 2009. By the end of this I think half the world is going to bloody know.”

The appeal of Bitcoin is that it’s not regulated by a central bank. The currency, which at present trades at about $417 per coin, is managed by a network of computers running peer-to-peer software. For a more detailed explanation, go here.
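
For readers who want a sense of what that peer-to-peer software actually does, here is a minimal, illustrative sketch of a proof-of-work puzzle in Python. It is a toy, not Bitcoin’s real protocol: actual nodes hash binary block headers with double SHA-256 against a far harder, self-adjusting difficulty target. But the core idea is the same: a valid block is expensive to find and cheap for any peer to verify, which is what lets a network of strangers maintain a shared ledger without a central bank.

import hashlib

def mine(block_data, difficulty=4):
    # Toy proof-of-work: find a nonce whose SHA-256 hex digest starts with
    # `difficulty` zeros. (Bitcoin itself uses double SHA-256 over binary
    # block headers and a much harder, periodically adjusted target.)
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block of transactions")
print(nonce, digest)  # any peer can re-check the claim with a single hash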

Satoshi Nakamoto’s identity has been an Internet obsession ever since Bitcoin came into existence in 2009. There have been several theories about its creator, most recently last year when Newsweek reported—erroneously—that Nakamoto was, in fact, Dorian Satoshi Nakamoto, a physicist living in California.

Wired acknowledged that it too could be wrong about Wright’s link to the currency, but added: “If Wright is seeking to fake his Nakamoto connection, his hoax would be practically as ambitious as Bitcoin itself.”











Published on December 09, 2015 06:19

December 8, 2015

Smoke 'Em If You Got 'Em and If You're 21

For as long as the uniform drinking age in the United States has been 21—since 1984, by most accounts—it has been easy to rail against the dynamic in which one can be adult enough to go to war, but not to have a drink.

Outlasting that has been the effort to connect the acknowledged dangers of cigarettes with a policy that would keep them out of the hands of younger people. Writing 50 years ago in The Atlantic, Elizabeth Drew noted the extraordinary efforts required to secure a label on packs of cigarettes that simply read Caution: Cigarette Smoking May Be Hazardous to Your Health. That measure came at the expense of more aggressive action planned by the FTC.

But quietly, even as smoking rates have recently declined in the United States, albeit more slowly among lower-income Americans, a movement to raise the legal age to buy cigarettes seems to have caught fire.

On Monday evening, the city council in Cleveland, Ohio, voted to ban the sale of cigarettes (including e-cigarettes) to anyone under 21. The “Metropolis of the Western Reserve” now joins New York City; Paterson, New Jersey; and Hawaii County as places with that age requirement. (A law making 21 the age of sale in all of Hawaii goes into effect next month.)

Cleveland’s new law seems to place the onus on the vendors while other measures focus on both smokers and vendors. Elsewhere, statewide regulations have made Alaska, Alabama, Utah, and New Jersey the few places where the legal smoking age is already 19, though only New Jersey’s law was passed in the last decade.

The more recent campaigns have been buttressed by the CDC, which offers age-based statistics (“Nearly 9 out of 10 cigarette smokers first tried smoking by age 18”) and more specific indictments of tobacco, like its status as the leading preventable cause of premature death in the United States. As German Lopez at Vox noted, an Institute of Medicine study earlier this year found that “raising the smoking age to 21 could prevent approximately 223,000 premature deaths among Americans born between 2000 and 2019.”

The legislative machinations of a few cities and states may constitute a slow burn, but lawmakers in California and Washington are also pushing for a minimum age increase; Utah and Colorado may join them. In October, Senate Democrats introduced a bill that would lift the age of sale to 21 on the federal level.

Their efforts were echoed this week in Russia, of all places, where a senator introduced a bill that would raise the national smoking age to 21. According to the World Health Organization, the United States is No. 51 on the list of most cigarettes smoked per capita in the world; Russia is No. 4 on that list.











Published on December 08, 2015 12:45

A Home-Shopping Network for Guns

Last week’s shootings in San Bernardino have left many Americans afraid, numb, upset, and angry. But also: eager as ever to buy guns.

While there’s no data to speak to San Bernardino’s nationwide effect, gun retailers across the country are reporting brisk sales. Mass shootings (as well as some elections) have consistently led to spikes in the number of background checks, which are used as a proxy for gun purchases. These spikes arise both because, in the wake of a shooting, people may feel the need to protect themselves and because they worry that a shooting will prompt gun-control legislation. So: Buy now.

What better way to capitalize on this surging demand than with a 24-hour shopping channel that sells guns?

This question may not have occurred to most people, but it did occur to Valerie Castle and Doug Bornstein, who have made their careers in the home-shopping industry. Their new venture, GunTV (tagline: “Live Shopping. Fully Loaded.”), will debut in January and also sell accessories such as ammo and holsters. Their Palm Springs studio, as The New York Times notes, is only about 50 miles from San Bernardino.

Ordering guns from a TV salesperson will not be quite as seamless as getting a cubic-zirconia engagement ring shipped to one’s doorstep. Instead, once an order is placed on GunTV, the product will get sent to a nearby gun store and can only be retrieved once a background check has come back clear. Additionally, GunTV currently plans to air gun-safety PSAs for eight minutes out of every hour of programming, so viewers will have to sit through those.











Published on December 08, 2015 10:35

A Frontrunner Republicans Will Denounce but Not Reject

The Gordian Knot that Donald Trump has tied around the Republican Party tightened considerably when the presidential frontrunner issued his call for a ban on Muslims entering the United States.

If there was any doubt, that bind became clear within the span of about 10 minutes during a press conference Speaker Paul Ryan held on Tuesday morning with the House Republican leadership in the Capitol. As the GOP’s top elected official in Washington, Ryan has pledged to stay publicly neutral in the party’s primary race. He ended his introductory remarks, however, with a strong denunciation of Trump’s proposal—delivered entirely without uttering the candidate’s name.

“Normally, I do not comment on what’s going on in the presidential election,” Ryan began. “I will take an exception today.”

This is not conservatism. What was proposed yesterday is not what this party stands for, and more importantly, it’s not what this country stands for. Not only are there many Muslims serving in our armed forces, dying for this country, there are Muslims serving right here in the House, working every day to uphold and to defend the Constitution.

Some of our best and biggest allies in this struggle and fight against radical Islamic terror are Muslims—the vast, vast, vast majority of whom are peaceful, who believe in pluralism, freedom, democracy, individual rights. I told our members this morning to always strive to live up to our highest ideals, those principles in the Constitution on which we swear every two years that we will defend.

Ryan also reminded reporters that when the House voted last month to suspend the Syrian refugee program, he made clear that there should be a security test—not “a religious test”—on people entering the U.S. Freedom of religion, he said, was one of the nation’s “founding constitutional principles.” His aides let it be known that Ryan had told Republican lawmakers in private that Trump’s proposal violated two different constitutional amendments: the First and the Fourteenth.

Yet when it came time for questions, Ryan retreated to boilerplate. A reporter asked the obvious: Would Ryan support Trump if he became the Republican nominee for president?

“I’m going to support whoever the Republican nominee is,” the speaker replied, “and I’m going to stand up for what I believe in as I do that.”

In other words, Ryan may believe that Donald Trump is proposing a policy that would violate the Constitution and his oath of office, and would amount to an impeachable offense if he put it into effect, but he’d still support him over (presumably) Hillary Clinton if it came to it. As political logic goes, that’s pretty close to an untenable position, and it’s one that Trump is increasingly forcing both the party establishment and his rivals for the nomination to confront. If he’s so bad, how could you in good conscience support him?

My colleague Peter Beinart noted even before Trump’s outlandish call to ban Muslim entry to the U.S. that Republican candidates like Jeb Bush and John Kasich had struggled to answer the same question. Kasich, in particular, has criticized Trump with as much passion as anyone; his supporters have compared Trump to Hitler, and in response to the proposal yesterday, Kasich said Trump was “entirely unsuited” to be president. Bush tweeted that Trump was “unhinged.” If Trump is so bad—unhinged and dangerous—how could you in good conscience support him? The candidates have mostly dodged the question by predicting that Trump won’t win, but the longer he stays atop the polls, the weaker that answer will become. It will likely be a key part of next week’s GOP debate.

Trump’s trump card, of course, is the possibility that he’ll run as a third-party candidate if the GOP treats him unfairly. And in that sense, he’s holding the party hostage. For now, he seems to control a bloc of voters the Republicans need to defeat Clinton, and an independent run could mean a nightmarish repeat of the election of the first President Clinton in 1992, in which another billionaire—Ross Perot—siphoned support and helped Bill win the presidency despite capturing just 43 percent of the popular vote.

For now, Ryan and his colleagues are still trying to counter Trump without pushing him away entirely. But as he dabbles in ever-darker areas of American politics, the tightrope they’re trying to walk may disappear entirely.











Published on December 08, 2015 09:47

The 2016 Oscar Race Is Wide Open

Every year, a handful of smaller, impressive films are released well before awards season, rolling out slowly across the country and finding mild success at the box office. And nearly every year, the multitudes of critics’ groups around the country congregate around one or two of them, consciously or not, and push them to more mainstream success—think Boyhood last year, or Zero Dark Thirty in 2012. But this year is different. Unlike past awards seasons, critics have been refreshingly open, embracing blockbusters and arty dramas alike and celebrating the most diverse batch of winners in decades.

A caveat: By “diverse,” I mean only that the three major critics’ groups named different winners in each of the major categories this year—something that hasn’t happened since 1988. Judging by the major contenders campaigning for Oscars, this will be another depressingly white year for the Academy, though its president, Cheryl Boone Isaacs, is making an honest push to improve opportunities for people of color in the industry and to add more minority voters to the Academy’s rolls. But what 2015’s awards do underline is a great sense of self-awareness among critics regarding their role in the Oscars process, and as a result they’re trying to draw attention to a wide swathe of cinema rather than congregating around an anointed favorite.

The National Board of Review kicked things off by naming Mad Max: Fury Road its Best Picture and giving acting awards to Matt Damon (The Martian), Brie Larson (Room), Sylvester Stallone (Creed), and Jennifer Jason Leigh (The Hateful Eight). The New York Film Critics Circle countered with Carol, Michael Keaton (Spotlight), Saoirse Ronan (Brooklyn), Mark Rylance (Bridge of Spies), and Kristen Stewart (Clouds of Sils Maria). Last weekend, the Los Angeles Film Critics Association picked Spotlight as its Best Picture, and recognized the performances of Michael Fassbender (Steve Jobs), Charlotte Rampling (45 Years), Michael Shannon (99 Homes), and Alicia Vikander (Ex Machina). Best Director went to Ridley Scott for The Martian, Todd Haynes for Carol, and George Miller for Mad Max, respectively.

In Mad Max: Fury Road, critics are speaking up for a popular summer hit and big-budget franchise reboot that received uncommon praise for its artistry and craft. The Martian is another blockbuster drawing plaudits for elaborating upon a blueprint audiences have been handed a dozen times before. Films like Carol, Room, and Brooklyn are small end-of-year dramas that are more typical fare for critics’ groups. Rylance and Fassbender were honored for their work in films that underwhelmed on opening but are now gathering steam, and Los Angeles critics’ awarding Alicia Vikander for the sci-fi thriller Ex Machina served as a mild rebuke to her more conventionally awards-friendly work in The Danish Girl, in which she plays a tearful wife (which, unfortunately, is practically the definition of Best Supporting Actress at the Oscars).

Then there are the most idiosyncratic choices. By honoring tiny indie flicks like Clouds of Sils Maria, 99 Homes, and 45 Years, these groups are seeking to revive chatter about worthy but basically forgotten films just in time for Oscar voting, which is the entire point of being a film critic. Rather than just reinforcing tired narratives—there was one year when every group gave half of its major awards to Sideways, in case people didn’t get that it was well-liked by critics—they’re making an effort to highlight choices that can start conversations. They’ve even managed at the same time to acknowledge audience favorites like Stallone, throwing some weight behind a candidate who might otherwise seem like a sentimental choice at best.

So what of the Oscar race? Things will be somewhat illuminated by the Golden Globe nominations, which are announced Thursday. But that group’s penchant for picking Hollywood superstars, and its division of film nominations into “comedies” and “dramas,” will only reinforce the wide-open nature of the race. Some have pointed out that the “wide-open” narrative crops up on Oscar prognostication websites every year, a tired cliché journalists lean on until the Globes and various Guild awards bring things into tighter focus. But this year it really is the case: The “frontrunner” narrative has so far failed to take shape, with a relatively even balance of arguments for and against the potential winners.

Steven Spielberg’s Bridge of Spies is well-liked by audiences and making decent money at the box office, but was greeted by many critics as a gloomier, minor-key work. The Martian is a genuine crowd-pleasing hit, but it lacks a weightier theme. Carol got strong reviews but has had the polarizing effect the director Todd Haynes usually achieves. Alejandro González Iñárritu’s The Revenant and David O. Russell’s Joy are yet to be released, but both have gotten a mixed advance reception from critics. This is why so many are touting Spotlight as the film that will sneak up and grab the big prize—though it’s a small, sober film lacking in visual flourishes or attention-grabbing performances, it’s the kind of work that a consensus can eventually form around. In another year, that consensus would have been decided well before February by the people who write about film, but 2015 has made for an exciting change of pace.











Published on December 08, 2015 09:08

Who Is Hiding Behind Mona Lisa?

There’s no rest for the world’s most famous painting. Songs, books, films, and pizza joints have all been named after the Mona Lisa, while conspiracy theories eternally swirl over her real identity, the meaning of her enigmatic smile, the position of her fingers, and even the absence of her eyebrows.

Writing in The Atlantic in 1988, Arianna Huffington outlines the little-known subplot involving Pablo Picasso and the 1911 theft of the Da Vinci masterpiece. Matt Yglesias later explains that the work’s purloiner was an Italian nationalist who was angry the painting lived in a French museum.

In the latest intrigue, a French scientist, who has studied the painting for a decade, has concluded that beneath the portrait that many believe to be Lisa Gherardini, the wife of a Florentine silk merchant, is a different woman entirely.

“The results shatter many myths and alter our vision of Leonardo’s masterpiece forever,” Pascal Cotte said in a statement. “When I finished the reconstruction of Lisa Gherardini, I was in front of the portrait and she is totally different to Mona Lisa today. This is not the same woman.”

Using reflective light technology, Cotte says he discovered a woman beneath the Mona Lisa who is looking off to one side instead of directly ahead, and who is not smiling. He also cites the presence of other figures with different physical features.

The Louvre, for its part, has declined comment. Other experts are skeptical and argue artists frequently paint over images on their canvases, particularly for commissions. In an interview with the BBC, Martin Kemp, a professor at Oxford, praised Cotte’s “ingenious” images for giving a sense of Da Vinci’s artistic process. “But,” he added, “the idea that there is that picture as it were hiding underneath the surface is untenable.”

So who’s right? The academic community or one renegade scientist? The answer may lie with Da Vinci himself, who once said, “The greatest deception men suffer is from their own opinions.”











Published on December 08, 2015 08:35

What the World Is Saying About Donald Trump’s Comments About Muslims

Updated on December 8 at 11:30 a.m.

Donald Trump, the Republican presidential candidate, doubled down Tuesday on his remarks calling for a “total and complete shutdown of Muslims entering the United States until our country’s representatives can figure out what is going on.”

Appearing on ABC’s Good Morning America, Trump compared his plan to the Japanese internment camps used by President Franklin D. Roosevelt during World War II.

“This is a president highly respected by all, he did the same thing,” Trump said. “If you look at what he was doing, it was far worse. I mean, he was talking about the Germans because we’re at war.

“We are now at war. We have a president that doesn’t want to say that, but we are now at war.”

But Trump rejected the idea that he was advocating the internment of Muslim Americans: “No, I’m not. No, I’m not. No, I’m not,” he said.

Trump’s remarks Monday were in apparent response to the massacre in San Bernardino, in which an Illinois-born man and his Pakistan-born wife killed 14 people and wounded 21 others. The FBI said the couple was radicalized.

Trump’s comments were nearly universally condemned—both in the U.S. and around the world. Here’s a roundup of global reaction:

U.K.

A spokeswoman for British Prime Minister David Cameron called the remarks “divisive, unhelpful and quite simply wrong.”

“The prime minister has been very clear that, as we look at how we tackle extremism and this poisonous ideology, what politicians need to do is look at ways they can bring communities together and make clear that these terrorists are not representative of Islam and indeed what they are doing is a perversion of Islam,” she said.

Egypt

A statement from Dar al-Ifta, the country’s official religious body, called Trump’s remarks “hate rhetoric.”

“Such hostile attitudes towards Islam and Muslims will increase tensions within the American society of which Muslims represent around 8 million peaceful and loyal American citizens,” the organization said.

France

Monsieur Trump, comme d'autres, entretient la haine et les amalgames : notre SEUL ennemi, c'est l'islamisme radical.

— Manuel Valls (@manuelvalls) December 8, 2015

That tweet by the French prime minister translates to: “Mr. Trump, like others, stokes hatred: our ONLY enemy is radical Islamism.”

Global Media

Writing in Haaretz, the left-leaning Israeli newspaper, columnist Chemi Shalev said Trump’s remarks “must have delighted the Caliph, Abu Bakr al-Baghdadi,” the leader of ISIS.

“For some Jews, the sight of thousands of supporters waving their fists in anger as Trump incited against Muslims and urged a blanket ban on their entry to the United States could have evoked associations with beer halls in Munich a century ago,” he wrote.

The “useful idiot” analogy was also used by Süddeutsche Zeitung, the German newspaper. The Telegraph, the right-leaning British newspaper, posted a quiz headlined, “Who said it: Donald Trump or Adolf Hitler?”

Support for Trump’s Remarks

Geert Wilders, the head of the Dutch Freedom Party, a far-right group that is represented in parliament, supported Trump’s call.

I hope @realDonaldTrump will be the next US President. Good for America, good for Europe. We need brave leaders. pic.twitter.com/FWJSaQdClM

— Geert Wilders (@geertwilderspvv) December 7, 2015

Other Reactions

In Pakistan, Asma Jahangir, the prominent human-rights activist, called Trump’s remarks “absurd.”

“This is the worst kind of bigotry mixed with ignorance,” she said. “I would imagine that someone who is hoping to become president of the U.S. doesn’t want to compete with an ignorant criminal-minded mullah of Pakistan who denounces people of other religions.”

Yenny Wahid, an Islamic activist and daughter of Abdurrahman Wahid, the former Indonesian president, told The Guardian: “I think the perspective of people here in Indonesia is that they see Donald Trump as a loser. We don’t really take his comments seriously.”

Trump’s remarks also prompted comparison on social media to Lord Voldemort, the villain in the Harry Potter series—a comparison that drew this response:

How horrible. Voldemort was nowhere near as bad. https://t.co/hFO0XmOpPH

— J.K. Rowling (@jk_rowling) December 8, 2015










Published on December 08, 2015 07:01

How Taxpayers Keep the NFL Rich

Over three marvelous spring days in 2015, the National Football League threw itself a party otherwise known as the annual draft. The NFL took over Chicago’s storied lakefront Grant Park, where, in November 2008, Barack Obama had addressed a vast throng to accept the presidency of the United States. Some 200,000 football fans, including parents with kids, milled around in Grant Park, temporarily renamed Draft Town. Gigantic letters spelling out “NFL” ran down the side of the world’s largest marble-clad structure, the 83-story Aon Center.

Both Obama’s 2008 acceptance speech and the 2015 NFL draft were huge successes for the civic spirit of the City of Big Shoulders. Both drew enormous crowds and were nationally televised: Both made Chicago seem an exciting, vital place to be. But there was one difference. Obama covered the cost of his event. The NFL was mooching off the taxpayer.

Obama’s campaign committee paid Chicago about $2 million as a rental fee to use Grant Park, and to cover police overtime costs for maintaining order. By the moment he stepped to the microphone on November 4, 2008, Obama was president-elect, and since Obama was appearing in Grant Park both as a hometown hero and as the first African American president, his acceptance speech engaged a clear public interest. But Obama took the high road, believing it would be unfair for city taxpayers to be saddled with the expense.

The NFL’s billionaire owners preferred the low road, arriving in Chicago with their hands out. While negotiating to hold the draft in Chicago, the NFL said it would come only if it were awarded free use of Grant Park. The city waived the $937,000 fee normally charged for large events there—the sole time, the Chicago Tribune reported, a for-profit enterprise received use of Grant Park without a fee. The Auditorium Theater was provided free as well, with taxpayers picking up utility costs for all the lighting and television-transmission facilities. The NFL provided no security deposits against damage to either venue. Had damage occurred, taxpayers would have been left on the hook as NFL owners boarded their private jets to depart.

The league’s agreement with Chicago specified the city would pay for police overtime, firefighter and ambulance calls, and for any “weather mitigation” necessary, while “the NFL will retain all revenue from tickets and advertising” sold. Chicago gave the NFL the right to close streets in the Loop and along the lakefront, and to remove any signs the league did not like: Essentially Chicago suspended the First Amendment, so protesters couldn’t raise banners that mentioned domestic violence, tax subsidies, or brain harm. As if the House of Romanov were touring to review the peasants, the NFL demanded free stopped-traffic police escorts “in and around the city” for “certain NFL dignitaries.” Certain NFL dignitaries.

The corporate welfare the NFL received in Chicago is just a small slice of a much larger problem with the league, which is highly subsidized by taxpayers and elaborately protected by government. Here, surely, is another way in which America’s biggest sport holds up a mirror to American society: The very rich receive too many publicly funded favors, while celebrities are treated as above the law. Taxpayer support for the NFL draft combined with police escorts for NFL “dignitaries”—dignitaries!—shows both problems.

The league’s primary subsidies flow to construction and operation of stadia. All are at least partially publicly funded: some, entirely so. Judith Grant Long, a professor of sports management at the University of Michigan, estimates that taxpayers provide about 70 percent of the cost of building and operating the fields where NFL teams play. Yet the NFL’s owners keep more than 90 percent of revenue generated at their subsidized facilities, while AT&T, CBS, Comcast/NBC, Disney/ESPN, Fox, Verizon, and Yahoo profit through transmission of the copyrighted NFL images produced in publicly subsidized stadia.

The NFL is on the dole in numerous other respects. Most of the league’s facilities either pay no property taxes (such as Texas’s AT&T Stadium, where the Cowboys perform) or are taxed at a far lower rate than comparable local businesses (such as New Jersey’s MetLife Stadium, where the Giants and Jets cavort). Stadium construction deals often involve significant gifts of land from the public for NFL use (such as Levi’s Stadium in Santa Clara, California, where the “San Francisco” 49ers play).

Hidden costs may include city or county government paying electricity, water, and sewer charges for a stadium (such as FirstEnergy Stadium in Cleveland, where the Browns perform), the city paying for a new electronic scoreboard out of “emergency” funds (ditto FirstEnergy), or the issuance of tax-free bonds that divert investors’ money away from school, road, and mass-transit infrastructure (Hamilton County, Ohio, issued tax-free bonds to fund the stadium where the Cincinnati Bengals play, and has chronic deficits for school and infrastructure needs as a result).

Wherever the NFL’s “dignitaries” tread, they expect taxpayer-funded special treatment. The league’s proposal to stage the 2018 Super Bowl at the new publicly funded Minnesota Vikings stadium in Minneapolis included a demand that NFL owners stay in free presidential suites at the best hotels, that the NFL keep all proceeds from ticket sales and that NFL owners and headquarters officials receive police escorts, at public expense, as they move around the city, including to parties. Supposedly the latter was for “security” against “terrorism.” But the owners and top executives of the NFL are private business people. If they desire security beyond what is normally accorded in public places by law enforcement, they could pay for it themselves.

The NFL even accepts subsidies for honoring the U.S. military. Games often are preceded by color guards, or the display of various military banners. This promotes the NFL, not the military, by suggesting professional football somehow is related to national security. The NFL stages an annual “Salute to Service” event during Veterans Day weekend, in which coaches dress up in fatigues as if they were military officers, again trying to create the impression the NFL has some relationship to defense of the nation.

At least the league is showing appreciation to service members, right? If only. In 2015, Senator John McCain of Arizona disclosed that the Pentagon pays the NFL about $2 million per year to stage what appear to be displays of patriotism. Included in 2014 was $675,000 to the New England Patriots to honor National Guard members at halftime: Most other NFL teams received payments for introducing color guards, and for similar bunting-dressed activities. As for that “Salute to Service,” in 2014 the NFL donated $412,500 to wounded-warrior projects, and was lavishly praised by partner networks for doing so. The amount is about one-20th of one percent of the league’s annual public subsidy.

In addition to subsidies to build and operate stadia, and to taxpayers’ money channeled through the defense budget, the NFL enjoys many special forms of government protection: prominently, a congressional waiver that exempts the league from antitrust laws. Basically the waiver allows the NFL to enforce monopoly pricing in television contracts. Without this special deal, the cost of pro-football broadcast rights would fall and cable-carrier fees charged to consumers would go down.

At this writing, competing projects in the near-Los Angeles cities of Carson and Inglewood are angling to land relocating NFL teams. One would be funded by $1.4 billion in public borrowing, with the NFL throwing in a mere $200 million. The other would use bookkeeping hocus-pocus to appear to be privately funded: Taxpayers would be on the hook to repay capital costs to the stadium developer down the road.

Documents supporting the Inglewood plan claim that a $1.9 billion NFL stadium, mostly funded by taxpayers, would cause $3.8 billion in local economic expansion. This “magic multiplier” fails the giggle test. Many studies have shown that for any dollar of civic investment, building roads, bridges, mass transit, and other infrastructure has far more multiplier effect than building NFL fields.

Baseball fields can pass a multiplier test, because they cost so much less than NFL stadia and are used so much more often. Professional football fields are a capital-investment double whammy—the dearest kind of sports facility to build, then used the least. Glendale, Arizona, where the most recent Super Bowl was played, funded most of the stadium in which the Arizona Cardinals perform, after receiving magic-multiplier promises. Today the city has trouble hiring police officers and EMTs because 40 percent of its budget goes to retiring stadium debt. The promised magical economic boom did not occur.

In a 2015 study, Ted Gayer and Alex Gold of the Brookings Institution concluded, “Despite the fact that new stadiums are thought to boost local economic growth and job creation, these benefits are often overstated. Academic studies typically find no discernible positive relationship between sports facility construction and economic development. Most evidence suggests sports subsidies cannot be justified on the grounds of local economic development, income growth, or job creation.”

In most instances of public subsidies for NFL stadia, state and local politicians are the bad guys. Mayors, governors, and county commissioners know that if they oppose giveaways to the NFL, they will be accused of being “against football”—while if they spend the public’s money lavishly, the painful invoices will not fall due until after they have left office.

The whole assumption that taxpayers should support stadia traces to earlier generations when the economics of sports were very different than today. California’s Rose Bowl and Los Angeles Memorial Coliseum were built in the 1920s, at a time when progressive-era politics contended that civic improvement would help bring the country together. Because there was no way to watch games on television, stadia of the period were huge compared to the contemporaneous population—when USC and UCLA faced off at the Los Angeles Memorial Coliseum in 1939 before 103,303 spectators, this was like playing before 1.1 million Californians today.

Football stadia of this era were conceptualized as being akin to libraries and parks: College teams would be performing for the public’s benefit, and tickets would be cheap so anyone could attend. Bearing in mind that all money figures in this book are stated in today’s dollars, 50-yard line seats at the 1939 USC-UCLA contest cost $38. Through the 1930s, football facilities such as Buffalo’s War Memorial Stadium were built in large part to create construction jobs to counter the Depression, and were priced as public goods. When I attended Bills games at War Memorial as a boy, the bleachers were $7 per head, and the best seats in the house were $45.

After the 1930s came World War II, with all economic activity diverted to military materiel. During the 1950s, the primary concern of nearly all levels of government was building housing and highway infrastructure for returning veterans; the secondary concern, expansion of state university systems.

By the 1960s, interest was growing in a new wave of stadia for professional football. There still was relatively little television money in NFL coffers: Most owners could not have afforded to pay for a field on their own, and lacked the clout to arrange financing. Government help was required. This situation—the NFL couldn’t afford to build a new stadium on its own—persisted through the 1970s or into the 1980s, depending on the franchise in question.

By the current generation, every NFL owner could pay for his or her new stadium, and now capital markets can arrange financing unassisted. Yet the many-decades-old assumption that taxpayers should support football stadium construction continues. Owners take advantage of the belief, on the part of taxpayers, that public money ought to be employed to build or upgrade NFL stadia. In the 21st century, this belief is archaic nonsense. But so long as politicians act as if the assumption were still valid, why should NFL owners dissent? Local pols and civic leaders are more to blame for the situation than NFL owners.

Taxpayer subsidies for NFL fields are offensive both because the league can now afford to pay its own way, and because ordinary people are paying to build facilities so nice that they’re priced out of ever using them. In September 2014, when Fox staged its initial 49ers telecast from the gleaming new Levi’s Stadium, the play-by-play man Joe Buck marveled, “They spared no expense on this stadium.” They being California taxpayers who were watching on TV because they couldn’t afford to come in, with 2014 season tickets selling for $2,850 to $14,000 per seat.

In 2014, the average NFL ticket cost $85, twice the inflation-adjusted cost of a generation ago. Parking at $25, beer at $12, and soft drinks at $9 are common, with nearly all concession revenue kept by NFL owners. Team Marketing Report, a Chicago consultancy, calculated that in 2014 a family of four spent $635 to attend a Dallas Cowboys home game sitting in regular seats, not in a suite or on a premium concourse; $625 for the same family to attend a Patriots home game; $600 for a Washington home game. That typical people are taxed to fund NFL facilities, yet only the expense-account set can afford to enter, ought to be a source of populist uproar.  

There’s no law of nature that says the NFL, or any professional sport, must be publicly subsidized: No law of nature that says every pro-sports franchise owner must be a billionaire. If the NFL paid its own way, income to owners, and salaries for players and coaches, would decline. Owners would still be rich, just not super-rich at public expense. Players and coaches would still be highly compensated, just not instant millionaires. Taxpayers would no longer be fleeced. Less-dazzling amounts of money for a privileged few, combined with a better deal for taxpayers and average fans, would be good for the long-term prospects of the National Football League. (The same reasoning applies to the NBA and MLB.)

Having the NFL pay its own way would lead to a healthier relationship between Americans and their favorite sport. Fans might find they are prouder of their favorite NFL teams than they are today.

This article has been adapted from Gregg Easterbrook’s book, The Game’s Not Over: In Defense of Football.









Published on December 08, 2015 06:50

Why Fallout 4’s 1950s Satire Falls Flat

Fallout 4 takes players back. Back to the beginning. Back before the bombs fell, and before the world of the Fallout series took on its mutated, feral, apocalyptic form. But what did that world look like?

The Fallout series has, since its inception, hinted at a world before nuclear annihilation that resembled the 1950s in its culture and its design, rather than the 2070s, which is the decade in which Fallout’s “Great War”—a two-hour series of nuclear blasts that decimated the planet—took place. But the series has only ever revealed this in the clues left behind in its various wastelands. That pre-war world, untouched by nuclear fallout, has never been shown, and because of this the series has always managed to uphold an ironic distance between the wastelands the player explores and the past these spaces gesture back to.

But Fallout 4 is different; it begins before the bombs. And allowing the player to be a part of this world, even briefly, breaks this distance down completely. Fallout 4 is the most nostalgic game in the series, pining for its lost world. You play as someone who lived and loved this old world, somebody who has an emotional attachment to it. But to get the player to feel the same is difficult, and the game doesn’t exactly reconcile this newfound sentiment with its irreverent tone. Fostering an emotional attachment to what has come before is not something that sits easily with the game’s satirical take on that previous world.

There’s a marked difference as soon as the game’s introductory film begins. Previous games in the Fallout series have begun with a maudlin tune from the ’30s, ’40s, or ’50s, wailing away over a series of grainy images of a destroyed world. After the song ends, the monologue begins, with the video-game-famous intonation, “War. War never changes.” Fallout 4 changes this. For the first time, there is no song, and the opening monologue is spoken by the game’s playable male character. This does two things: Firstly, it sidelines the female character, and presents the male as the default, intended protagonist of the game (an opening speech by both of them could have been interesting); and secondly, it creates a character before the player has had a chance to.

Character creation has always been a key element of Fallout, with the playable character often being a blank slate that players fill in for themselves. In the previous game, subtitled New Vegas, players are told the protagonist is a courier; apart from that, no other information is given. In Fallout 3, the game begins at the birth of the protagonist, and the player is present for every moment of that character’s life. This opening speech is much more revealing. The playable character is a former soldier. He speaks of his grandfather, of his wife, his child, of the shattering of the American dream. About the fear he feels. The game seems to be using this speech to pull players in emotionally, to feel what the character feels, so that when the bombs start to fall we understand what’s at stake for this family.

Then the film ends, and the game opens up, and we’re plunged into its pre-war world. On a sunny autumnal morning, somewhere in the suburbs surrounding Boston, the player is finally witness to the world before the bombs fell. The house we find ourselves inhabiting, in both its exterior design and its interior decoration, resembles a brand-new home of the suburban 1950s; shiny new cars lie dormant outside each bright new house. The opening film has made it clear that the male playable character loves his family very much; it would make sense if this was shown and explored in this scene. It would make sense too for the wife to be given some characterization, for their relationship to be given some depth.

But this doesn’t happen. The sequence gives the player no opportunity to engage in any meaningful way with either spouse or son. Instead, the game seems to utilize 1950s imagery as a visual shorthand; by presenting the traditional nuclear family in their comfortable suburban home, the game is telling the player to assume that they are happy and loving, rather than this being illustrated through in-game actions. This upends the stance previous games in the series took, in which the cultural mores of the 1950s, including the idea of the happy, suburban nuclear family, were plundered for satire. Being part of such a family, and feeling like an emotional attachment is meant to be made to them, sits uneasily with the game’s overall tone and sense of humor, and this is only heightened by the lack of effort the game makes in showing real relationships between these characters.

This raises the question, then: Why the ’50s? Outside of the irony we as players are meant to perceive and enjoy, there’s no reason why the culture of the Fallout world, in its music, its design, and its fashion, is permanently stuck in the mid-20th century. The inherent humor in the juxtaposition the game creates between its cozy ’50s aesthetic and its hyperviolent imagery is clear, but there’s nothing in the world itself that justifies this, or at least expands on it. The series’s retrofuturism is its longest-running joke, and by combining many different facets of ’50s American culture—from naive technological optimism to the focus on the nuclear family and the beginnings of consumer capitalism—and coupling this with nuclear fallout on a planetary scale, the Fallout games have always carried with them an ironic commentary on the so-called “Atomic Age,” in which the power of nuclear energy was viewed as something that could change the world.

In the United States, the rise of nuclear energy slowed in the 1970s and halted drastically after the Three Mile Island accident in 1979, when a partial meltdown took place in one of the plant’s two reactors in Dauphin County, Pennsylvania. After this, public support for nuclear power in the U.S. dropped significantly. Globally, the Chernobyl disaster of 1986 and the Fukushima Daiichi disaster of 2011 continue to fuel debate regarding the safety of nuclear power. But in the Fallout series, it’s the harnessing of nuclear power that is key, both to its society’s technological advances and to its eventual self-destruction. The historian William Knoblauch writes of Fallout 3 that “the game’s reliance on 1950s imagery suggests that nuclear war was only ever really possible during the early Cold War. Put simply, Fallout 3’s apocalypse is born of a distant, but culturally familiar, 1950s era.” Maybe the answer to “Why the ’50s?” is simply that without the ’50s, Fallout wouldn’t be Fallout. Perhaps there is no other decade in which the cultural and political climate could be as severely juxtaposed with nuclear annihilation as that of the 1950s.

In many ways then, Fallout 4, like the rest of the series, is a satire of the 1950s and that decade’s dream of a science-fiction utopia. The wastelands of the series have always been strewn with the burnt-out remains of these dreams, yet because of the distance in time and in culture between the remains of the old world and the reality of the new, the kitsch ’50s culture of the pre-war world always appeared ridiculous. By allowing the player to begin the game in that setting, and to play as someone who lived in that world, the game loses this distance from events, and therefore, so does the player. Being cast as somebody whose life is contained in the remains of the old world means that what before was considered ironic now has to be taken in with a very straight face.

How is it possible to roleplay as anything but a parent pining for their child and for their home? But it’s still Fallout, and so this is perfectly possible. After hours spent wandering the wasteland, the memory of that brief stint in a past life will vanish, and all the leftovers of the old world will revert back to what they were in every other Fallout game: stuff to pick up. But maybe this is where Fallout 4’s settlements, in which players can rebuild broken-down towns and homes with junk assembled along their journeys, will come in. Perhaps it can all be built back up, that image of the old world, and a son will be found, and that past life can begin again.

This post appears courtesy of Kill Screen.











Published on December 08, 2015 06:44
