Jeff Jarvis's Blog

August 18, 2019

Worries


Professor Rosen gave me homework. He told me he wanted me to prepare a list like his of the top problems I see in journalism. I do not take an assignment from my academic mentor lightly, and so it took me time to contemplate my greatest worries. When I did, I found links among them: Trump and all that comes with him, of course; race; the opportunity at last to listen to unheard voices; fear of criticism; fear of change — there’s a bit of all that in all of them. I second Jay’s current concerns and add my own:


The need to study our impact and consider our outcomes

Oh, I hear a lot of talk about impact in journalism, but it is reliably egocentric: ‘What did my story accomplish?’ Impact starts with journalists, not the public. And in discussion it is always positive. I rarely hear talk of our negative impact: how we in media polarize, fabricating sides and pitting them against each other, exploiting attention with appeals to base instincts.


Coming to a university I learned the need to begin curriculum with outcomes: What should students learn? I wonder about outcomes-based journalism, which would begin by asking not just what the public needs to know (our supposed mission) but how we can improve the quality of the public conversation, how we can bring out voices rarely heard, how we can build bridges among communities in conflict, how we can appeal to the better nature of our citizens, how we can help build a better society.


If we did that, our metrics of success would be entirely different — not audience, attention, pageviews, clicks, even subscriptions. Thus our business models must change; more on that below. We cannot begin this process until we respect the public’s voices and build means to better listen to them. We also need research to understand communities’ needs and our impact on them. This is not nearly so practical a worry as Jay’s are, but it’s my biggest concern.


The need for self-criticism in journalism

What troubled me most about New York Times Executive Editor Dean Baquet’s round of interviews after the Unity vs. Racism headline debacle is an apparent unwillingness to hear outside critics, even while arguing that the paper doesn’t need an ombudsman because it has outside critics. Baquet dismissed politicians — Beto, AOC, Castro — who had legitimate criticism of the paper, saying: “I don’t need the entire political field to tell me we wrote a bad headline.” When told that Twitterati were criticizing the headline, Baquet told his staff: “My reaction was to essentially say, ‘Fuck ’em, we’re already working on it.’” (Dismissing what citizens have to say on Twitter is a Times sport.) More worrisome to me from Slate’s transcript of the newsroom meeting was the evidence (as I said in a comment on Jay’s post) that Timespeople are scared of talking with each other. So one wonders how this family will ever work it all out. The most eloquent statement in the meeting came from a journalist who chose to remain anonymous in his own newsroom. Though I want to keep this short, I will quote it in full:



Saying something like divisive or racially charged is so euphemistic. Our stylebook would never allow it in other circumstances. I am concerned that the Times is failing to rise to the challenge of a historical moment. What I have heard from top leadership is a conservative approach that I don’t think honors the Times’ powerful history of adversarial journalism. I think that the NYT’s leadership, perhaps in an effort to preserve the institution of the Times, is allowing itself to be boxed in and hamstrung. This obviously applies to the race coverage. The headline represented utter denial, unawareness of what we can all observe with our eyes and ears. It was pure face value. I think this actually ends up doing the opposite of what the leadership claims it does. A headline like that simply amplifies without critique the desired narrative of the most powerful figure in the country. If the Times’ mission is now to take at face value and simply repeat the claims of the powerful, that’s news to me. I’m not sure the Times’ leadership appreciates the damage it does to our reputation and standing when we fail to call things like they are.



I don’t mean to join the Times pile-on; like Jay, I remain a loyalist and a subscriber. I also don’t mean to make The Times emblematic of all journalism; it is the grand exception. I use this episode as one example of how we journalists who criticize anyone do not let just anyone criticize us. Here I argue we need to consider — as Facebook, of all institutions, is — a systematic means of oversight of the quality of journalism as a necessity to build (not rebuild) trust. Instead, we tend to codify the way we’ve always done things — and wonder at the daily miracle of a front page — as if the goal is to recapture some Golden Age that never was.


Race

Race is not the story of the moment. It is the story of the age that is finally in the moment in media. As a child of white privilege who grew up being taught the mythical ideal of the melting pot, I unlearn those lessons and learn more about racism in America every day. I learn mostly from the voices who were not heard in mass, mainstream media. I hear them now because they have a path around media (and then sometimes into media) thanks in considerable measure to the internet.


Race is a big story in media now not because of Donald Trump and his white nationalists. That gets things in the wrong order and gives credit to the devil. Race is the story now because, at last, people of color can be heard, and that is what scares the old, white men in power so much that they would rather burn down our institutions than share them — which is what has finally grabbed the attention of old, white media. So race is now news.


But it is apparent that media do not know how to cover this story. I don’t know how to, either. I am grateful for the publication — as I write this — of The New York Times’ and Nikole Hannah-Jones’ profoundly important 1619 Project and its curriculum. That’s not a worry; that’s gratitude. Yet it comes even as The Times itself grapples (above) with how to cover race and how to hear new voices. This is that hard.


Moral panic

Because I treasure those new voices I can now hear, because I value the expression the net brings to so many more communities, I want to protect the net and its freedoms. I see attacks on those freedoms from the right — from authoritarians abroad and right-wing white nationalists here. I also see attacks on the net and its freedoms from media (who never acknowledge their conflict of interest and jealousy over lost attention and revenue) and the left (who are attacking big corporations). I complained about the quality of tech-policy coverage here.


Coverage of Section 230, the key 26 words in law that enable the conversation online and empower companies to improve it (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”), has been abysmal. I hate to pick on The Times again, but its coverage has been among the worst, with the most humiliating correction I’ve seen in years. There’s a wonderful book about 230 by Jeffrey Kosseff, The Twenty-Six Words That Created the Internet, but it seems that reporters covering the story can’t be bothered to read even that. I am grateful that Trevor Timm, executive director of the Freedom of the Press Foundation, wrote this: “Liberals Beware: Repealing a Law That Protects Free Speech Online Will Only Help Trump.” Says Timm:



Those simple words are now being flagrantly misinterpreted across the political spectrum as a way to threaten companies like Facebook and Twitter. But make no mistake: if the law is repealed, the real casualties will not be the tech giants; it will be the hundreds of millions of Americans who use the internet to communicate.



I have been worrying about moral panic over technology in media that is helping to fuel an exploitive and cynical moral panic among politicians, damaging the net and the new companies that challenge all of them and their power. My worries only worsen.


Trump’s chumps

Here I lump together my fears about the state of political journalism, campaign coverage, disinformation, and manipulation. As Jay has been arguing, strenuously, the press has no strategy for covering the intentional aberration that is Donald Trump or the racism he exploits and propels. The press continues to insist on covering his “base,” a minority, rather than his opponents, a majority, which only gives more attention to the angry white man and less to voices still ignored. As many of us have been arguing, predictions do nothing to inform the electorate, but predicting is what pundits do (usually incorrectly). As James Carey argued, the polls upon which the pundits hang their predictions are anathema to democracy, for they preempt the public conversation they are meant to measure. Trump, the Russians, right-wing trolls, and too many others to imagine are taking the press for chumps, exploiting their weaknesses (“we just report, we don’t decide”) to turn news organizations into amplifiers for their dangerous ideas. (See the Times discussion of face value above.) I see nothing to say that the political press has learned a single lesson. I’m plenty worried about that.


Business

Of course, no list of worries about journalism is complete without existential fretting over business and the lack of any clear path to sustainability. There likely is no path to profitability for journalism as it was. The only way we are going to save journalism is to fundamentally reconsider it: to recognize at last all the new opportunities technology brings us to do more than produce a product we call content and instead to provide a service to the public; to build the means to listen to voices not heard before and, as I said above, to build bridges among communities; to bring value to people’s lives and communities and find value ourselves in that, basing our metrics of success there. The business of journalism is what I worry and write about more than anything else, so I won’t go on at length here. I join with Jay’s concern. I worry that newspapers continue to believe they can find new ways to sell their old ways; see Josh Benton’s frightening and insightful analysis of the news about the L.A. Times’ subscriptions. I fear that Gannett and GateHouse have no strategy, and neither do most newspaper companies. I even worry that Google, Facebook, and the rest of the net are still built on mass media’s faulty, volume-based business model. I worry a lot. Then I remind myself that it’s still early days.


As I write this, I’m halfway through teaching our incoming class at the Newmark J-School about the context of their upcoming study and work: the history of media and journalism, the business and how we got here, and the new opportunities we have to reconsider journalism. I tell them it is their responsibility to reinvent journalism.


My favorite moments come when students challenge me. Friday one student did that, asking what I — and my generation in journalism — did wrong to get us in this fix. It was a good question and sadly I had many answers: about not listening to communities, about importing our flawed business model onto the net, about my overblown optimism for hyperlocal blogs as building blocks for new ecosystems. (I will try to post audio of the discussion soon.)


In that spirit, I should anticipate the question about my worries here: And what are you doing about them? These worries do inform my work. One thread you see in everything above is the need to listen to, respect, empathize with, and serve communities who for too long were not heard; this is what inspired the start of Social Journalism at my school. Now I am working on bringing other disciplines into a journalism school — anthropology, neuroscience, psychology, economics, philosophy, design — to consider how they would address society’s problems and the outcomes they would work toward. I am proud to work at a school where diversity is at the core of our strategy and we are starting new programs to address racial equity and inclusion in media leadership and ownership. Regarding moral panic in media coverage, I am working to organize training for reporters in coverage of major policy issues like Section 230. Regarding disinformation, I am working on projects to bring more attention and support to quality news. Whether any of those are the right paths, I will leave to others to judge.


Jay Rosen updates his list of concerns and problems and I will try to do the same as warranted. In the meantime, tell me: What problems worry you? What do you want to do about them?



August 13, 2019

Governance: Facebook designs its oversight board (should journalism?)

Facebook is devoting impressive resources — months of time and untold millions of dollars — to developing systems of governance, of its users and of itself, raising fascinating questions about who governs whom according to what rules and principles, with what accountability. I’d like to ask similar questions about journalism.


I just spent a day at Facebook’s fallen skyscraper of a headquarters attending one of the last of more than two dozen workshops it has held to solicit input on its plans to start an Oversight Board. [Disclosures: Facebook paid for participants’ hotel rooms and I’ve raised money from Facebook for my school.] Weeks ago, I attended another such meeting in New York. In that time, the concept has advanced considerably. Most importantly, in New York, the participants were worried that the board would be merely an appeals court for disputes over content take-downs. Now it is clear that Facebook knows such a board must advise and openly press Facebook on bigger policy issues.


Facebook’s team showed the latest group of academics and others a near-final draft of a board charter (which will be released in a few weeks, in 20-plus languages). They are working on by-laws and finalizing legal structures for independence. They’ve thought through myriad details about how cases will rise (from users and Facebook) and be taken up by the board (at the board’s discretion); about conflict resolution and consensus; about transparency in board membership but anonymity in board decisions; about how members will be selected (after the first members join, the board will select its own members); about what the board will start with (content takedowns) and what it can tackle later (content demotion and taking down users, pages, groups — and ads); about how to deal with GDPR and other privacy regulation in sharing information about cases with the board; about how the board’s precedents will be considered but will not prevent the board from changing its mind; even about how other platforms could join the effort. They have grappled with most every structural, procedural, and legal question the 2,000 people they’ve consulted could imagine.
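To keep those moving parts straight, here is a toy model of the case flow as I understood the draft charter; the names and structure are my sketch, not Facebook's actual design:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional

class Source(Enum):
    USER = auto()        # cases rise from users...
    FACEBOOK = auto()    # ...and from Facebook itself

class Status(Enum):
    SUBMITTED = auto()
    TAKEN_UP = auto()    # at the board's discretion
    DECIDED = auto()

@dataclass
class Case:
    source: Source
    summary: str
    status: Status = Status.SUBMITTED
    decision: Optional[str] = None

@dataclass
class OversightBoard:
    docket: List[Case] = field(default_factory=list)
    precedents: List[Case] = field(default_factory=list)

    def submit(self, case: Case) -> None:
        self.docket.append(case)

    def take_up(self, case: Case) -> None:
        # Discretionary docket: the board, not Facebook, picks its cases.
        case.status = Status.TAKEN_UP

    def decide(self, case: Case, decision: str) -> None:
        case.status = Status.DECIDED
        case.decision = decision
        # Precedents are considered but do not bind future panels.
        self.precedents.append(case)
```

On this reading, the key design choices sit in `take_up` and `precedents`: discretion over the docket gives the board independence, and non-binding precedent lets it change its mind.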


But as I sat there I saw something missing: the larger goal and soul of the effort and thus of the company and the communities it wants to foster. They have structured this effort around a belief, which I share, in the value of freedom of expression, and the need — recognized too late — to find ways to monitor and constrain that freedom when it is abused and used to abuse. But that is largely a negative: how and why speech (or as Facebook, media, and regulators all unfortunately refer to it: content) will be limited.


Facebook’s Community Standards — in essence, the statutes the Oversight Board will interpret, enforce, and suggest revisions to — are similarly expressed in the negative: what speech is not allowed and how the platform can maintain safety and promote voice and equality among its users by dealing with violations. In its Community Standards (set by Facebook and not by the community, by the way), there are nods to higher ends — sharing stories, seeing the world through others’ eyes, diversity, equity, empowerment. But then the Community Standards become a document about what users should not do. And none of the documents says much, if anything, about Facebook’s own obligations.


So in California, I wondered aloud what principles the Oversight Board would call upon in its decisions. More crucially, I wondered whom the board is meant to serve and represent: does it operate in loco civitas (in place of the community), publico (public), imperium (government and regulators), or Deus (God — that is, higher ethics and standards)? [Anybody with better schooling than I had, please correct my effort at Latin.]


I think these documents, this effort, and this company — along with other tech companies — need a set of principles that should set forth:



Higher goals. Why are people coming to Facebook? What do they want to create? What does the company want to build? What good will it bring to the world? Why does it exist? For whose benefit? Zuckerberg issued a new mission statement in 2017: “To give people the power to build community and bring the world closer together.” And that is fine as far as it goes, but that’s not very far. What does this mean? What should we expect Facebook to be? This statement of goals should be the North Star that guides not just the Oversight Board but every employee and every user at Facebook.
A covenant with users and the public in which Facebook holds itself accountable for its own responsibilities and goals. As an executive from another tech company told me, terms of service and community standards are written to regulate the behavior of users, not companies. Well, companies should put forth their own promises and principles and draw them up in collaboration with users (civitas), the public (publico), and regulators (imperium). And that gives government — as in the case of proposed French legislation — the basis for holding the company accountable.

I’ll explore these ideas further in a moment, but let me first address the elephant on my keyboard: whether Facebook and its founder and executives and employees have a soul. I’ve been getting a good dose of crap on Twitter the last few days from people who blithely declare — and others who retweet the declaration — that Zuckerberg is the most dangerous man on earth. I respond: Oh, come on. My dangerous-person list nowadays starts with Trump, Murdoch, Putin, Xi, Kim, Duterte, Orbán, Erdoğan, MBS…you get the idea. To which these people respond: But you’re defending Facebook. I will defend it and its founder from ridiculous, click-bait trolling that devalues the real danger our world is in today. I also criticize Facebook publicly and did at the meetings I attended there. Facebook has fucked up plenty lately and that’s why it needs oversight. At least they realize it.


When I defend internet platforms against what I see as media’s growing moral panic, irresponsible reporting, and conflict of interest, I’m defending the internet itself and the freedoms it affords from what I fear will be continuing regulation of our own speech and freedom. I don’t oppose regulation; I have been proposing what I see as reasonable regimes. But I worry about where a growing unholy alliance against the internet between the far right and technophobes in media will end.


That is why I attend meetings such as the ones that Facebook convenes and why I just spent two weeks in California meeting with both platform and newspaper executives, to try to build bridges and constructive relationships. That’s why I take Facebook’s effort to build its Oversight Board seriously, to hold them to account.


Indeed, as I sat in a conference room at Facebook hearing its plans, it occurred to me that journalism as a profession and news organizations individually would do well to follow this example. We in journalism have no oversight, having ousted most ombudsmen who tried to offer at least some self-reflection and -criticism (and having failed in the UK to come up with a press council that isn’t a sham). We journalists make no covenants with the public we serve. We refuse to acknowledge — as Facebook executives did acknowledge about their own company — our “trust deficit.”


We in journalism do love to give awards to each other. But we do not have a means to systematically identify and criticize bad journalism. That job has now fallen to, of all unlikely people, politicians, as Beto O’Rourke, Alexandria Ocasio-Cortez, and Julian Castro offer quite legitimate criticism of our field. It also falls to technologists, lawyers, and academics who have been appalled at, for example, The New York Times’ horrendously erroneous and dangerous coverage of Section 230, our best protection of freedom of expression on the internet in America. I’m delighted that CJR has hired independent ombudsmen for The Times, The Post, CNN, and MSNBC. But what about Fox and the rest of the field?


I’ve been wondering how one might structure an oversight board for journalism to take the place of all those lost ombudsmen, to take complaints about bad journalism, to deliberate thoughtful and constructive responses, and to build data about the journalistic performance and responsibility of specific outlets. That will be a discussion for another day, soon. But even with such a structure, journalism, too — and each news outlet — should offer covenants with the public containing their own promises and statements of higher goals. I don’t just mean following standards for behavior; I mean sharing our highest ambitions.


I think such covenants for Facebook (and social networks and internet platforms) and journalism would do well to start with the mission of journalism that I teach: to convene communities into respectful, informed, and productive conversation. Democracy is conversation. Journalism is — or should be — conversation. The internet is built for conversation. The institutions and companies that serve the public conversation should promise they will do everything in their power to serve and improve that conversation. So here is the beginning of the kind of covenant I would like to see from Facebook:


Facebook should promise to create a safe environment where people can share their stories with each other to build bridges to understanding and to make strangers less strange. (So should journalism.)


Facebook should promise to enable and empower new and diverse voices that have been deprived of privilege and power by existing, entrenched institutions. (Including journalism.)


Facebook should promise to build systems that reward positive, productive, useful, respectful behavior among communities. (So should journalism.)


Facebook should promise not to build mechanisms to polarize people and inflame conflict. (So should journalism.)


Facebook should promise to help inform conversations by providing the means to find reliable information. (Journalism should provide that information.)


Facebook should promise not to build its business upon and enable others to benefit from crass attempts to exploit attention. (So should the news and media industries.)


Facebook should warrant to protect and respect users’ privacy, agency, and dignity.


Facebook should recognize that malign actors will exploit weak systems of protection to drive people apart and so it should promise to guard against being used to manipulate and deceive. (So should journalism.)


Facebook should share data about its performance against these goals, about its impact on the public conversation, and about the health of that conversation with researchers. (If only journalism had such data to share.)


Facebook should build its business, its tools, its rewards, and its judgment of itself around new metrics that measure its contributions to the health and constructive vitality of the public conversation and the value it brings to communities and people’s lives. (So should journalism.)


Clearly, journalism’s covenants with the public should contain more: about investigating and holding power to account, about educating citizens and informing the public conversation, and more. That’s for another day. But here’s a start for both institutions. They have more in common than they know.



August 5, 2019

Beto to journalism: ‘What the fuck?’


This post began with Beto O’Rourke’s lesson. Then I added Alexandria Ocasio-Cortez’s. And then Eddie Glaude Jr.’s.


Reporter: Is there anything in your mind the President can do to make this better?

Beto O’Rourke: What do you think? You know the shit he’s been saying. He’s been calling Mexican immigrants rapists. I don’t know, members of the press, what the fuck? [Reporter tries to interrupt.] Hold on a second. You know, it’s these questions that you know the answers to. I mean, connect the dots about what he’s been doing in this country. He’s not tolerating racism; he’s promoting racism. He’s not tolerating violence; he’s inciting racism and violence in this country…. I don’t know what kind of question that is.


O’Rourke’s scolding of the press is well-deserved. Allow me to translate it into a few rules to report by.


Tell the truth. Speak the word. If you prevaricate, refusing to call what you see racism or what you hear lies, you give license to the public to do the same and give license to the racists and liars to get away with it.


Stop getting other people to say what you should. It’s a journalistic trick as old as pencils: Asking someone else about racism so you don’t have to say it yourself.


It is not your job to ask stupid questions. Like Beto, I’ve had it with the milquetoast journalistic strategy of asking obvious questions to which we know the answer because “that’s our job, we just ask questions.” Arguing that you are asking these questions in loco publico only insults the public we serve.


You are not a tape recorder. Repeating lies and hate without context, correction, or condemnation makes you an accessory to the crimes. That goes for racists’ manifestos as well as racists at press conferences.


Do not accept bad answers. Follow up your questions. Follow up other reporters’ questions. Just because you’ve checked off your question doesn’t mean your work here is done.


Listen. Do not come to the story with blanks ready to fill in the narrative you’ve already imagined and pitched. Listen first. Learn.


Be human. You are not separate from the community you serve; you are part of it. You are not objective; you have a worldview. You cannot hide that worldview; be transparent.


Be honest. The standard you work under as a journalist — the thing that separates your words from others’ — should be intellectual honesty. That is, report inconvenient truths.


Improve the world. You exist to serve the public conversation, not to incite conflict, not to pit sides against each other, not to make the world worse.


Finally, I’ll add: You’re not the public’s protector. If Beto says “what the fuck?” then I say report his words; spare us your asterisks.


We live in unusual times so usual methods will not suffice. We need new strategies to report on new dangers or we will be complicit in the result.







Moments after I posted this, I saw that Alexandria Ocasio-Cortez also offered excellent advice for journalists. Unusual times, indeed, when politicians know better how to do journalism than too many journalists. She tweeted:


[Embedded tweet]


Racism is the most important story of the day. It has been the most important story of the age in America but it was not the biggest story in news until now. That has happened only because we have an obvious racist in the White House and racists supporting him and now they cannot hide from the recognition and media cannot hide from covering the story. So take this good advice.


And then I saw Professor Eddie Glaude Jr. on Nicolle Wallace’s MSNBC show deliver a vital, forceful, profound, brilliant lesson on racism in America. Please watch it again and again.


July 26, 2019

Evidence, please


[Disclosure: I raised money for my school from Facebook to aggregate signals of quality in news. I also have attended events convened by Google. I am independent of and receive no compensation personally from any technology company.]


Too many momentous decisions about the future of the internet and its regulation — as well as coverage in media — are being made on the basis of assumptions, fears, theories, myths, mere metaphors, isolated incidents, and hidden self-interest, not evidence. The discussion about the internet and the future should begin with questions and research and end with demonstrable facts, not with presumption or with what I fear most in media: moral panic. I will beg journalists to take on academics’ discipline of evidence over anecdote.


But first, let me praise an example of the kind of analysis we need. Axel Bruns, a professor at Queensland University of Technology, just presented an excellent paper at the International Association for Media and Communication Research conference in Madrid, sticking a pin in the idea of the filter bubble. He argues



that echo chambers and filter bubbles principally constitute an unfounded moral panic that presents a convenient technological scapegoat (search and social platforms and their affordances and algorithms) for a much more critical problem: growing social and political polarisation. But this is a problem that has fundamentally social and societal causes, and therefore cannot be solved by technological means alone. [My emphasis]



Amen.


Based on his reading of available research, Bruns notes that these two metaphors — echo chamber and filter bubble — are not consistently defined, “making them moving targets in both public discourse and scholarly inquiry,” which also makes it impossible to “assess more systematically exactly how disconnected the denizens of such suspected echo chambers and filter bubbles really are.” In his upcoming book, Are Filter Bubbles Real?, Bruns will examine definitions of both metaphors and methodologies for measurement of their alleged impact.


In his paper, Bruns provides perspective and context, pointing out that well before the net, “different groups in society have always already informed themselves from different sources that suited their specific informational interests, needs, or literacies.” He asks: “Given that society and democracy have persisted nonetheless, should we even worry about them?” In short, the burden is on those who propagate these notions to answer the question: “What is new here, and how different is it from before?”


Further, Bruns points out that we live in a “complex and interwoven media ecology” and so it is foolhardy to argue that one factor in it — just Facebook, for example — is the direct cause of behavioral change. Too many rants about the impact of the internet in media ignore the impact of media. Wonder why.


As an academic, Bruns reads existing literature in search of evidence of filter bubbles and echo chambers in prior research. He doesn’t find much at all. Instead, he cites (with links here and full citations in Bruns’ paper):



Three separate studies found the opposite of what Eli Pariser reported in his book, The Filter Bubble: Google Search and News users are not presented with unique and isolating worldviews.
Earlier studies of the bifurcated blog world 15 years ago uncovered “only mild echo chambers.”
The Pew Research Center found that Facebook users do not select friends based on political leaning and thus are exposed to other worldviews in social media.
Two studies looked at already divisive topics — abortion, vaccination, Obamacare, gun control — and found, of course, they were also divisive online, though non-political but debatable topics — Game of Thrones and food porn — did not lead to polarization online. Is divisiveness online the cause or the effect?
“Social media users generally encounter a greater diversity of news sources than non-users do.”
“Those users frequenting the most extremely partisan conservative sites in the United States have been found also to be more likely than ordinary internet users to visit the centrist New York Times.”
“Exposure to highly partisan political information … does not come at the expense of contact with other viewpoints.”
In sum, a half-dozen academics argue, “at present there is little empirical evidence that warrants any worries about filter bubbles.”

Yet in media, no end of stories still warns of filter bubbles. Though not all. Some journalists are reporting on studies that question the filter bubble. Good. A new study comes out and, sometimes, it will get coverage. But that leads to another journalistic weakness in reporting academic studies: stories that take the latest word as the last word. Look at all the perennial, flip-flopping reports that wine will kill or save us. Journalists should do what academics do in their literature reviews: put the latest word in context. They should also do what, for example, Oxford’s Rasmus Kleis Nielsen does on Twitter, responding to assumptions with findings from research.


Now that we have tools like Google Scholar — and many scholarly (if, unfortunately, costly) databases — I urge reporters and editors to do their own academic literature reviews when a story is pitched or assigned, to make sure its premise is upheld by research thus far, to provide context and nuance, and to grapple with what will surely appear: contradictory information.
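As a concrete (if minimal) sketch of what such a pre-assignment literature scan could look like, here is a query against Crossref's public works API; the topic string and the result handling are my assumptions for illustration:

```python
import requests

def literature_scan(topic: str, rows: int = 10) -> None:
    """Pull recent scholarly work on a topic from Crossref's public API,
    so a story pitch starts from the research record, not one new study."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query": topic, "rows": rows, "sort": "issued", "order": "desc"},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        title = item.get("title", ["(untitled)"])[0]
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        print(f"{year}: {title} (doi:{item.get('DOI')})")

# e.g., before writing yet another filter-bubble story:
literature_scan("filter bubble echo chamber polarisation")
```

A first pass like this is no substitute for a real literature review, but even it will surface the contradictory findings a story must grapple with.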


But I urge them to begin — as Bruns ends his paper — with questions before answers.



The central question now is what [people] do with such information when they encounter it: do they dismiss it immediately as running counter to their own views? Do they engage in a critical reading, turning it into material to support their own worldview, perhaps as evidence for their own conspiracy theories? Do they respond by offering counter-arguments, by vocally and even violently disagreeing, by making ad hominem attacks, or by knowingly disseminating all-out lies as ‘alternative facts’? More important yet, why do they do so? What is it that has so entrenched and cemented their beliefs that they are no longer open to contestation? This is the debate we need to have: not a proxy argument about the impact of platforms and algorithms, but a meaningful discussion about the complex and compound causes of political and societal polarisation. The ‘echo chamber’ and ‘filter bubble’ metaphors have kept us from pursuing that debate, and must now be put to rest.



Amen again.


These easy metaphors carry ill-defined presumptions that do not inform debate. Neither do terms that media love to appropriate and escalate. “Surveillance capitalism” is an extreme name for advertising cookies and the use of the word devalues the seriousness of actual surveillance by governments including my own. See also this very good commentary from Andrew Przybylski and Amy Orben of the Oxford Internet Institute, arguing that internet use is by no means “addiction.”


The state of media coverage of technology and society sucks. It sucked before by being utopian. It sucks now by being dystopian. I tire of the Damascene conversions of both former technologists (having safely cashed out) and of tech reporters who signal their virtue by distancing themselves from what they helped build or build up. I am disappointed that I never see media folk acknowledge their own conflict of interest about competing with the technology companies they cover and about their employers’ attempts to cash in political capital for the sake of protectionism against the platforms. I worry about the impact of this technology coverage on the future and freedoms of the net. (What interventions are being legislated based on emotional and vague concepts like filter bubble, echo chamber, surveillance, and addiction?) I worry, too, as Bruns does, that we are missing the real problem and real story: the roots of anger and polarization in society today. (It ain’t Twitter and you know it; start by examining racism.) I am angry to see journalists condescend to the public they serve, treating people as gullible fools who can be corrupted by a mere meme. I am even angrier to see journalists abandon social media and with it all the new voices who were never heard in mass media but now can speak. And I’m sad to see such simplistic, lazy, and poor quality coverage from my field.


Yes, of course, the technology companies have garnered power and wealth that merits close scrutiny. Yes, those companies fuck up and so I, too, am looking for useful regulatory regimes. But our coverage of society’s problems today should not begin and end on El Camino Real. We are too often covering the effect over the cause.


I wish both media and policymakers would follow the example of academics like Bruns (I use him just as an example; there are so many more). Begin with questions. Study the research that exists. Use data. Call for more research. Before making technology companies responsible for every modern ill — the definition of moral panic — make them instead responsible for sharing data to feed that research. And let that research concentrate not on technology and its impact on people — which too often gives people too little credit and agency. Instead let research and reporting look more carefully at how people are using the technology to have an impact on each other. Start by respecting those people and learning from them before condemning and dismissing them. Through fits and starts and missteps and mistakes — sometimes with, sometimes in spite of the companies involved — we the users are building a new society on the net. Watch, listen, and learn before criticizing, dismissing, and condemning. If it sounds like I want journalism to learn from anthropology, I do. More on that soon.



June 10, 2019

News Publishers Go To War With the Internet — and We All Lose


Around the world, news industry trade associations are corruptly cashing in their political capital — which they have because their members are newspapers, and politicians are scared of them — in desperate acts of protectionism to attack platform companies. The result is a raft of legislation that will damage the internet and in the end hurt everyone, including journalists and especially citizens.


As I was sitting in the airport leaving Newsgeist Europe, a convening for journalists and publishers [disclosure: Google pays for the venue, food, and considerable drink; participants pay their own travel], my Twitter feed lit up like the Macy’s fireworks as The New York Times reported — or rather, all but photocopied — a press release from the News Media Alliance (née Newspaper Association of America) contending that Google makes $4.7 billion a year from news, at the expense of news publishers.


Bullshit.


The Times story itself is appalling as it swallowed the News Media Alliance’s PR whole, quoting people from the association and not including comment from Google until hours later. Many on Twitter were aghast at the poor journalism. I contacted Google PR, who said The Times did not reach out to the person who normally speaks on these matters or anyone in the company’s Washington office. Google sent me their statement:


These back of the envelope calculations are inaccurate as a number of experts are pointing out. The overwhelming number of news queries do not show ads. The study ignores the value Google provides. Every month Google News and Google Search drives over 10 billion clicks to publishers’ websites, which drive subscriptions and significant ad revenue. We’ve worked very hard to be a collaborative and supportive technology and advertising partner to news publishers worldwide.


The “study” upon which The Times (and others) relied is, to say the least, specious. No, it’s humiliating. I want to dispatch with its fallacies quickly — to get to my larger point, about the danger legacy news publishers are posing to the future of news and the internet — and that won’t be hard. The study collapses in its second paragraph:


Google has emerged as a major gateway for consumers to access news. In 2011, Google Search combined with Google News accounted for the majority (approximately 75%) of referral traffic to top news sites. Since January 2017, traffic from Google Search to news publisher sites has risen by more than 25% to approximately 1.6 billion visits per week in January 2018. Corresponding with consumers’ shift towards Google for news consumption, news is becoming increasingly important to Google, as demonstrated by an increase in Google searches about news.


And that, ladies and gentlemen, is great news for news. For as anyone under the age of 99 understands, Google sends readers to sites based on links from search and other products. That Google is emphasizing news and currency more is good for publishers, as that sends them readers. (That 10-billion-click number Google cited above is eight years old and so I have little doubt it is much higher now thanks to all its efforts around news.)


The problem has long been that publishers aren’t competent at exploiting the full value of these clicks by creating meaningful and valuable ongoing relationships with the people sent their way. So what does Google do? It tries to help publishers by, for example, starting a subscription service that drives more readers to easily subscribe — and join and contribute — to news sites directly from Google pages. The NMA study cites that subscription service as an example of Google emphasizing news and by implication exploiting publishers. It is the opposite. Google started the subscription service because publishers begged for it — I was in the room when they did — and Google listened. The same goes for most every product change the study lists in which Google emphasizes news more. That helps publishers. The study then uses ridiculously limited data (including, crucially, an offhand and often disputed remark 10 years ago by a then-exec at Google about the conceptual value of news) to make leaps over logic to argue that news is important on its services and thus Google owes news publishers a cut of its revenue (which Google gains by offering publishers’ former customers, advertisers, a better deal; it’s called competition). By this logic, Instagram should be buying cat food for every kitty in the land and Reddit owes a fortune to conspiracy theorists.


The real problem here is news publishers’ dogged refusal to understand how the internet has changed their world, throwing the paradigm they understood into the grinder. In the US and Europe, they still contend that Google is taking their “content,” as if quoting and linking to their sites is like a camera stealing their soul. They cannot grok that value on the internet is concentrated not in a product or property called content — articles, headlines, snippets, thumbnails, words — but instead in relationships. Journalism is no longer a factory valued by how many widgets and words it produces but instead by how much it accomplishes for people in their lives. I have tried here and here and in many a meeting in newsrooms and journalism conferences to offer this advice to news publishers — with tangible ideas about how to build a new journalistic business around relationships — but most prove incapable of shifting mindset and strategy beyond valuing content for content’s sake. Editors who do understand are often stymied by their short-sighted publishers and KPIs and soon quit.


Most legacy publishers have come up with no sustainable business strategy for a changing world. So they try to stop the world from changing by unleashing their trade associations [read: lobbyists] on capitals from Brussels to Berlin to London to Melbourne to Washington (see: the NMA’s effort to get an antitrust exemption to go after the platforms; its study was prepared to hand to Congress in time for its hearings this week). These trade associations attack the platforms without ever acknowledging the fault of their own members in our current polarization in society. (Yes, I’m talking about, for example, Fox News and other Murdoch properties, dues-paying members of many a trade association. By our silence in journalism and its trade associations in not criticizing their worst, we endorse it.)


The efforts of lobbyists for my industry are causing irreparable harm to the internet. No, Google, Facebook, and Twitter are not the internet, but what is done to them is done to the net. And what’s been done includes horrendous new copyright legislation in the EU that tries to force Google et al to negotiate to pay for quoting snippets of content to which they link. Google won’t; it would be a fool to. So I worry that platforms will link to news less and less, resulting in self-inflicted harm for the news industry and journalists but, more important, hurting the public conversation at exactly the wrong moment. Thanks, publishers. At Newsgeist Europe, I sat in a room filled with journalists terribly worried about the impact of the EU’s copyright directive on their work and their business, but I have to say they have no one but their own publishers and lobbyists to blame.


I am tempted to say that I am ashamed of my own industry. But I won’t for two reasons: First, I want to believe that the industry’s lobbyists do not speak for journalists themselves — but I damned well better start hearing the protests of journalists to what their companies are doing. (That includes journalists on the NMA board.) Second, I am coming to see that I’m not part of the media industry but instead that we are all part of something larger, which we now see as the internet. (I’ll be writing more about this idea later.) That means we have a responsibility to criticize and help improve both technology and news companies. What I see instead is too many journalists stirring up moral panic about the internet and its current (by no means permanent) platforms, serving — inadvertently or not — the protectionist strategies of their own bosses, without examining media’s culpability in many of the sins they attribute to technology. (I wish I could discuss this with The New York Times’ ombudsman or any ombudsman in our field, but we know what happened to them.)


My point: We’re in this together. That is why I go to events put on by both the technology and news industries, why I try to help both, why I criticize both, why I try to help build bridges between them. It’s why I am devoting time and effort to my least favorite subject: internet regulation. It is why I am so exasperated at leaders in my own industry for their failure to recognize, adapt to, and exploit the change they try to deny. It’s why I’m disappointed in my own industry for not criticizing itself. Getting politicians who are almost all painfully ignorant about technology to try to define, limit, and regulate that technology and what we can do with it is the last thing we should do. It is irresponsible and dangerous of my industry to try.



May 31, 2019

Regulating the net is regulating us


Here are three intertwined posts in one: a report from inside a workshop on Facebook’s Oversight Board; a follow-up on the working group on net regulation I’m part of; and a brief book report on Jeff Kosseff’s new and very good biography of Section 230, The Twenty-Six Words That Created the Internet.


Facebook’s Oversight Board

Last week, I was invited — with about 40 others from law, media, civil society, and the academe — to one of a half-dozen workshops Facebook is holding globally to grapple with the thicket of thorny questions associated with the external oversight board Mark Zuckerberg promised.


(Disclosures: I raised money for my school from Facebook. We are independent and I receive no compensation personally from any platform. The workshop was held under Chatham House rule. I declined to sign an NDA and none was then required, but details about two real case studies were off the record.)


You may judge the oversight board as you like: as an earnest attempt to bring order and due process to Facebook’s moderation; as an effort by Facebook to slough off its responsibility onto outsiders; as a PR stunt. Through the two-day workshop, the group kept trying to find an analog for Facebook’s vision of this: Is it an appeals court, a small-claims court, a policy-setting legislature, an advisory council? Facebook said the board will have final say on content moderation appeals regarding Facebook and Instagram and will advise on policy. It’s two mints in one.


The devil is in the details. Who is appointed to the board and how? How diverse, and by what definitions of diversity, are the members of the board selected? Who brings cases to the board (Facebook? people whose content was taken down? people who complained about content? board members?)? How does the board decide what cases to hear? Does the board enforce Facebook policy or can it countermand it? How much access to data about cases and usage will the board have? How much authority will the board have to bring in experts and researchers, and what access to data will they have? How does the board scale its decision-making when Facebook receives 3 million reports against content a day? How is consistency found among the decisions of three-member panels in the 40ish-member board? How can a single board in a single global company be consistent across a universe of cultural differences and sensitive to them? As is Facebook’s habit, the event was tightly scheduled with presentations and case studies and so — at least before I had to leave on day two — there was less open debate of these fascinating questions than I’d have liked.
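On the scale question alone, a back-of-the-envelope calculation shows why the board can only ever set precedent, never adjudicate volume. The 3 million reports a day is Facebook's figure; the panel throughput is my assumption:

```python
# Back-of-the-envelope on oversight at scale.
reports_per_day = 3_000_000          # reports against content (Facebook's figure)
board_members = 40                   # the "40ish-member board"
panel_size = 3                       # three-member panels
cases_per_panel_per_day = 5          # assumed deliberation throughput

concurrent_panels = board_members // panel_size               # 13 panels
board_capacity = concurrent_panels * cases_per_panel_per_day  # 65 cases/day

print(f"Share of reports the board could hear: {board_capacity / reports_per_day:.5%}")
# -> about 0.002%, roughly 1 in every 46,000 reports
```

However the assumptions are tuned, the ratio stays vanishingly small, which is why the questions about which cases rise and how precedent propagates matter far more than raw capacity.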


Facebook starts with its 40 pages of community standards, updated about every two weeks, which are in essence its statutes. I recommend you look through them. They are thoughtful and detailed. For example:


A hate organization is defined as: Any association of three or more people that is organized under a name, sign or symbol and that has an ideology, statements or physical actions that attack individuals based on characteristics, including race, religious affiliation, nationality, ethnicity, gender, sex, sexual orientation, serious disease or disability.
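Transcribed into code (my transcription, not Facebook's), that standard reduces to a checkable predicate, which is both its strength at scale and its weakness, since all the hard judgment hides in deciding the inputs:

```python
PROTECTED_CHARACTERISTICS = {
    "race", "religious affiliation", "nationality", "ethnicity", "gender",
    "sex", "sexual orientation", "serious disease", "disability",
}

def is_hate_organization(member_count: int,
                         organized_under_identity: bool,
                         attacked_characteristics: set[str]) -> bool:
    # Three clauses, straight from the standard quoted above:
    # (1) three or more people, (2) a name, sign, or symbol,
    # (3) attacks on individuals based on protected characteristics.
    return (member_count >= 3
            and organized_under_identity
            and bool(attacked_characteristics & PROTECTED_CHARACTERISTICS))
```

Everything contested (who counts as a member, what counts as an attack) happens before this function is ever called.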


At the workshop, we heard how a policy team sets these rules, how product teams create the tools around them, and how operations — with people in 20 offices around the world, working 24/7, in 50 languages — are trained to enforce them.


But rules — no matter how detailed — are proving insufficient to douse the fires around Facebook. Witness the case, only days after the workshop, of the manipulated Nancy Pelosi video and subsequent cries for Facebook to take it down. I was amazed that so many smart people thought it was an easy matter for Facebook to take down the video because it was false, without acknowledging the precedent that would set: requiring Facebook henceforth to rule on the truth of everything everyone says on its platform — something no one should want. Facebook VP for Product Policy and Counterterrorism Monika Bickert (FYI: I interviewed her at a Facebook safety event the week before) said the company demoted the video in News Feed and added a warning to the video. But that wasn’t enough for those out for Facebook’s hide. Here’s a member of the UK Parliament (who was responsible for the Commons report on the net I criticized here):






What standard do you want? Do you wish Facebook to decide truth?


— Jeff Jarvis (@jeffjarvis) May 25, 2019


Jeff it’s already been independently certified as being fake. What Facebook are saying is that they won’t take down known sources of malicious political disinformation.


— Damian Collins (@DamianCollins) May 25, 2019




Damian, are you then going to expect them to take down any other video–or anything else–certified as fake? Certified by whom? Do you also want destruction of the evidence of this manipulation? Beware: slope slippery ahead.


— Jeff Jarvis (@jeffjarvis) May 25, 2019



So by Collins’ standard, if UK politicians in his own party claim as a matter of malicious political disinformation that the country pays £350m per week to the EU that would be freed up for the National Health Service with Brexit and that’s certified by journalists to be “willful distortion,” should Facebook be required to take that statement down? Just asking. It’s not hard to see where this notion of banning falsity goes off the rails and has a deleterious impact on freedom of expression and political discussion.


But politicians want to take bites out of Facebook’s butt. They want to blame Facebook for the ill-informed state of political debate. They want to ignore their own culpability. They want to blame technology and technology companies for what people — citizens — are doing.


Ditto media. Here’s Kara Swisher tearing off her bit of Facebook flesh regarding the Pelosi video: “Would a broadcast network air this? Never. Would a newspaper publish it? Not without serious repercussions. Would a marketing campaign like this ever pass muster? False advertising.”


Sigh. The internet is not media. Facebook is not news (only 4% of what appears there is). What you see there is not content. It is conversation. The internet and Facebook are means for the vast majority of citizenry forever locked out of media and of politics to discuss whatever they want, whether you like it or not. Those who want to control that conversation are the privileged and powerful who resent competition from new voices.


By the way, media people: Beware what you wish for when you declare that platforms are media and that they must do this or that, for your wishes could blow back on you and open the door for governments and others to demand that media also erase that which someone declares to be false.


Facebook’s oversight board is trying to mollify its critics — and forestall regulation of it — by meeting their demands to regulate content. Therein lies its weakness, I think: regulating content.


Regulating Actors, Behaviors, or Content

A week before the Facebook workshop, I attended a second meeting of a Transatlantic High Level Working Group on Content Moderation and Freedom of Expression (read: regulation), which I wrote about earlier. At the first meeting, we looked at separating treatment of undesirable content (dealt with under community standards such as Facebook’s) from illegal content (which should be the purview of government and of an internet court; details on that proposal here.)


At this second meeting, one of the brilliant members of the group (held under Chatham House, so I can’t say who) proposed a fundamental shift in how to look at efforts to regulate the internet, proposing an ABC rule separating actors from behaviors from content. (Here’s another take on the latest meeting from a participant.)
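To make the ABC framing concrete, here is a toy triage, my sketch of the idea rather than anything the working group specified: act on actors and behaviors first, so that judging speech itself becomes the last resort rather than the first:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Layer(Enum):
    ACTOR = auto()      # who is acting: fake accounts, coordinated networks
    BEHAVIOR = auto()   # what they are doing: brigading, spam floods, harassment
    CONTENT = auto()    # what was said: the layer most dangerous to regulate

@dataclass
class Report:
    account_is_inauthentic: bool
    posting_pattern_abusive: bool
    text: str

def triage(report: Report) -> Layer:
    """Route enforcement to the least speech-restrictive layer first."""
    if report.account_is_inauthentic:
        return Layer.ACTOR
    if report.posting_pattern_abusive:
        return Layer.BEHAVIOR
    return Layer.CONTENT
```

The point of the ordering is that the first two layers can be regulated without anyone ruling on the truth or worth of what was said.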


It took me time to understand this, but it became clear in our discussion that regulating content is a dangerous path. First, making content illegal is making speech illegal. As long as we have a First Amendment and a Section 230 (more on that below) in the United States, that is a fraught notion. In the UK, the government recently released an Online Harms White Paper that demonstrates just how dangerous the idea of regulating content can be. The white paper wants to require — under pain of huge financial penalty for companies and executives — that platforms exercise a duty of care to take down “threats to our way of life” that include not only illegal and harmful content (child porn, terrorism) but also legal and harmful content (including trolling [please define] and disinformation [see above]). Can’t they see that government requiring the takedown of legal content makes it illegal? Can’t they see that by not defining harmful content, they put a chill on all speech? For an excellent takedown of the report, see this post by Graham Smith, who says that what the white paper proposes is impossibly vague. He writes:


‘Harm’ as such has no identifiable boundaries, at least none that would pass a legislative certainty test.


This is particularly evident in the White Paper’s discussion of Disinformation. In the context of anti-vaccination the White Paper notes that “Inaccurate information, regardless of intent, can be harmful”.


Having equated inaccuracy with harm, the White Paper contradictorily claims that the regulator and its online intermediary proxies can protect users from harm without policing truth or accuracy…


See: This is the problem when you try to identify, regulate, and eliminate bad content. Smith concludes: "This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online." Never mind the common analogy to regulation of broadcast. Would we ever suffer such talk about regulating the contents of bookstores or newspapers or — more to the point — conversations in the corner bar?


What becomes clear is that these regulatory methods — private (at Facebook) and public (in the UK and across Europe) — are aimed not at content but ultimately at behavior, only they don’t say so. It is nearly impossible to judge content in isolation. For example, my liberal world is screaming about the slow-Pelosi video. But then what about this video from three years ago?



What makes one abhorrent and one funny? The eye of the beholder? The intent of the creator? Both. Thus content can’t be judged on its own. Context matters. Motive matters. But who is to judge intent and impact and how?


The problem is that politicians and media do not like certain behavior by certain citizens. They cannot figure out how to regulate it at scale (and would prefer not to make the often unpopular decisions required), so they assign the task to intermediaries — platforms. Pols also cannot figure out how to define the bad behavior they want to forbid, so they decide instead to turn an act into a thing — content — and outlaw that under vague rules they expect intermediaries to enforce … or else.


The intermediaries, in turn, cannot figure out how to take this task on at scale and without risk. In an excellent Harvard Law Review paper called The New Governors: The People, Rules, and Processes Governing Online Speech, legal scholar Kate Klonick explains that the platforms began by setting standards. Facebook's early content moderation guide was a page long, "so it was things like Hitler and naked people," says early Facebook community exec Dave Willner. Charlotte Willner, who worked in customer service then (they're now married), said moderators were told "if it makes you feel bad in your gut, then go ahead and take it down." But standards — or statements of values — don't scale as they are "often vague and open ended" and can be "subject to arbitrary and/or prejudiced enforcement." And algos don't grok values. So the platforms had to shift from standards to rules. "Rules are comparatively cheap and easy to enforce," says Klonick, "but they can be over- and underinclusive and, thus, can lead to unfair results. Rules permit little discretion and in this sense limit the whims of decisionmakers, but they also can contain gaps and conflicts, creating complexity and litigation." That's where we are today. Thus Facebook's systems, algorithmic and human, followed its rules when they came across the historic photo of a child in a napalm attack. Child? Check. Naked? Check. At risk? Check. Take it down. The rules and the systems of enforcement could not cope with the idea that what was indecent in that photo was the napalm.
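To see why rules scale where standards don't, and where rules then fail, here is a minimal sketch; the checkbox rules are my own invention for illustration, not Facebook's actual guide:

```python
# A minimal sketch, using rules I invented for illustration (not
# Facebook's actual guidelines), of why cheap, checkable rules misfire
# without context: the famous napalm-attack photo trips every box.

from dataclasses import dataclass

@dataclass
class Image:
    depicts_child: bool
    depicts_nudity: bool
    depicts_person_at_risk: bool
    historically_newsworthy: bool  # context the rule never consults

def rule_says_take_down(img: Image) -> bool:
    # Rules are easy to enforce at scale: each test is a yes/no check.
    return img.depicts_child and img.depicts_nudity and img.depicts_person_at_risk

napalm_photo = Image(depicts_child=True, depicts_nudity=True,
                     depicts_person_at_risk=True, historically_newsworthy=True)

# The rule fires even though a standard ("protect human dignity") would
# weigh what actually made the photo indecent: the napalm.
print(rule_says_take_down(napalm_photo))  # True
```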


Thus the platforms found their rule-led moderators and especially their algorithms needed nuance. Thus the proposal for Facebook’s Oversight Board. Thus the proposal for internet courts. These are attempts to bring human judgment back into the process. They attempt to bring back the context that standards provide over rules. As they do their work, I predict these boards and courts will inevitably shift from debating the acceptability of speech to trying to discern the intent of speakers and the impact on listeners. They won’t be regulating a thing: content. They will be regulating the behavior of actors: us.


There are additional weaknesses to the rules-based, content-based approach. One is that community standards are rarely set by the communities themselves; they are imposed on communities by companies. How could it be otherwise? I remember long ago that Zuckerberg proposed creating a crowdsourced constitution for Facebook but that quickly proved unwieldy. I still wonder whether there are creative ways to get intentional and explicit judgments from communities as to what is and isn’t acceptable for them — if not in a global service, then user-by-user or community-by-community. A second weakness of the community standards approach is that these rules bind users but not platforms. I argued in a prior post that platforms should create two-way covenants with their communities, making assurances of what the company will deliver so it can be held accountable.


Earlier this month, the French government proposed an admirably pragmatic proposal that tries to address a few of those issues. French authorities spent months embedded in Facebook in a role-playing exercise to understand how they could regulate the platform. I met a regulator in charge of this effort and was impressed with his nuanced, sensible, smart, and calm sense of the task. The proposal does not want to regulate content directly — as the Germans do with their hate speech law, called NetzDG, and as the Brits propose to do in going after online harms.


Instead, the French want to hold the platforms accountable for enforcing the standards and promises they set: say what you do, do what you say. That enables each platform and community to have its own appropriate standards (Reddit ain't Facebook). It motivates platforms to work with their users to set standards. It enables government and civil society to consult on how standards are set. It requires platforms to provide data about their performance and impact to regulators as well as researchers. And it holds companies accountable for whether they do what they say they will do. It enables the platforms to still self-regulate and brings credibility through transparency to those efforts. Though simpler than other schemes, this is still complex, as the world's most complicated PowerPoint slide illustrates.







I disagree with some of what the French argue. They call the platforms media (see my argument above). They also want to regulate only the three to five largest social platforms — Facebook, YouTube, Twitter — because they have greater impact (and because that's easier for the regulators). Except as soon as certain groups are shooed out of those big platforms, they will dig into small platforms, feeling marginalized and perhaps radicalized, and do their damage from there. The French reply that some of those sites are so toxic they can't be regulated anyway.


All of these efforts — Facebook’s oversight board, the French regulator, any proposed internet court — need to be undertaken with a clear understanding of the complexity, size, and speed of the task. I do not buy cynical arguments that social platforms want terrorism and hate speech kept up because they make money on it; bull. In Facebook’s workshop and in discussions with people at various of the platforms, I’ve gained respect for the difficulty of their work and the sincerity of their efforts. I recommend Klonick’s paper as she attempts to start with an understanding of what these companies do, arguing that


platforms have created a voluntary system of self-regulation because they are economically motivated to create a hospitable environment for their users in order to incentivize engagement. This regulation involves both reflecting the norms of their users around speech as well as keeping as much speech as possible. Online platforms also self-regulate for reasons of social and corporate responsibility, which in turn reflect free speech norms.


She quotes Lawrence Lessig predicting that a “code of cyberspace, defining the freedoms and controls of cyberspace, will be built. About that there can be no doubt. But by whom, and with what values? That is the only choice we have left to make.”


And we’re not done making it. I think we will end up with a many-tiered approach (roughly sketched in code after the list), including:



Community standards that govern matters of acceptable and unacceptable behavior. I hope they are made with more community input.
Platform covenants that make warranties to users, the public, and government about what they will endeavor to deliver in a safe and hospitable environment, protecting users’ human rights.
Algorithmic means of identifying potentially violating behavior at scale.
Human appeals that operate like small claims courts.
High-level oversight boards that rule and advise on policy.
Regulators that hold companies accountable for the guarantees they make.
National internet courts that rule on questions of legality in takedowns in public, with due process. Companies should not be forced to judge legality.
Legacy courts to deal with matters of illegal behavior. Note that platforms often judge a complaint first against their terms of service and issue a takedown before reaching questions about illegality, meaning that the miscreants who engage in that illegal behavior are not reported to authorities. I expect that governments will complain platforms aren’t doing enough of their policing — and that platforms will complain that’s government’s job.
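As a rough way to picture how these tiers might hand a case off to one another, here is a hypothetical sketch; the field names and the routing are my assumptions, not any platform's or legislature's actual design:

```python
# A hypothetical routing of a flagged post through the tiers above —
# my own sketch of the division of labor, not an actual system design.

def route(post: dict) -> str:
    # Tier 3: algorithmic identification at scale.
    if not post["algorithm_flagged"]:
        return "no action"
    # Tier 1: community standards, applied by the company.
    if post["violates_community_standards"]:
        if post["user_appeals"]:
            # Tier 4: human appeals, like a small-claims court;
            # Tier 5: novel policy questions go to an oversight board.
            return ("oversight board" if post["novel_policy_question"]
                    else "appeals review")
        return "take down"
    # Tiers 6-8: legality is for regulators and courts, not the company.
    if post["possibly_illegal"]:
        return "refer to internet court"
    return "leave up"

print(route({"algorithm_flagged": True,
             "violates_community_standards": False,
             "user_appeals": False,
             "novel_policy_question": False,
             "possibly_illegal": True}))  # refer to internet court
```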

Numbers 1–5 occur on the private, company side; the rest must be the work of government. Klonick calls the platforms “the New Governors,” explaining that


online speech platforms sit between the state and speakers and publishers. They have the role of empowering both individual speakers and publishers … and their transnational private infrastructure tempers the power of the state to censor. These New Governors have profoundly equalized access to speech publication, centralized decentralized communities, opened vast new resources of communal knowledge, and created infinite ways to spread culture. Digital speech has created a global democratic culture, and the New Governors are the architects of the governance structure that runs it.


What we are seeking is a structure of checks and balances. We need to protect the human rights of citizens to speak and to be shielded from such behaviors as harassment, threat, and malign manipulation (whether by political or economic actors). We need to govern the power of the New Governors. We also need to protect the platforms from government censorship and legal harassment. That’s why we in America have Section 230.


Section 230 and ‘The Twenty-Six Words that Created the Internet’

We are having this debate at all because we have the “online speech platforms,” as Klonick calls them — and we have those platforms thanks to the protection given to technology companies as well as others (including old-fashioned publishers that go online) by Section 230, a law written by Oregon Sen. Ron Wyden (D) and former California Rep. Chris Cox (R) and passed as part of the 1996 telecommunications reform. Jeff Kosseff wrote an excellent biography of the law that pays tribute to these 26 words in it:


No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.


Those words give online companies safe harbor from legal liability for what other people say on their sites and services. Without that protection, online site operators would have been motivated to cut off discussion and creativity by the public. Without 230, I doubt we would have Facebook, Twitter, Wikipedia, YouTube, Reddit, news comment sections, blog platforms, even blog comments. “The internet,” Kosseff writes, “would be little more than an electronic version of a traditional newspaper or TV station, with all the words, pictures, and videos provided by a company and little interaction among users.” Media might wish for that. I don’t.


In Wyden’s view, the 26 words give online companies not only this shield but also a sword: the power and freedom to moderate conversation on their sites and platforms. Before Section 230, a case against Prodigy (Stratton Oakmont v. Prodigy) held that if an online proprietor moderated conversation and failed to catch something bad, the operator would be more liable than if it had not moderated at all. Section 230 reversed that so that online companies would be free to moderate without moderating perfectly — a necessity to encourage moderation at scale. Lately, Wyden has pushed the platforms to use their sword more.


In the debate on 230 on the House floor, Cox said his law “will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the internet….”


In his book, Kosseff takes us through the prehistory of 230 and why it was necessary, then the case law of how 230 has been tested again and again and, so far, survived.


But Section 230 is at risk from many quarters. From the far right, we hear Trump and his cultists whine that they are being discriminated against because their hateful disinformation (see: Infowars) is being taken down. From the left, we see liberals and media gang up on the platforms in a fit of what I see as moral panic to blame them for every ill in the public conversation (ignoring politicians’ and media’s fault). Thus they call for regulating and breaking up technology companies. In Europe, countries are holding the platforms — and their executives and potentially even their technologists — liable for what the public does through their technology. In other nations — China, Iran, Russia — governments are directly controlling the public conversation.


So Section 230 stands alone. It has suffered one slice in the form of the FOSTA/SESTA ban on online sex trafficking. In a visit to the Senate with the regulation working group I wrote about above, I heard a staffer warn that there could be further carve-outs regarding opioids, bullying, political extremism, and more. Meanwhile, the platforms themselves didn’t have the guts to testify in defense of 230 and against FOSTA/SESTA (who wants to seem to be on the other side of banning sex trafficking?). If these companies will not defend the internet, who will? No, Facebook and Google are not the internet. But what you do to them, you do to the net.


I worry for the future of the net and thus of the public conversation it enables. That is why I take so seriously the issues I outline above. If Section 230 is crippled; if the UK succeeds in demanding that Facebook ban undefined harmful but legal content; if Europe’s right to be forgotten expands; if France and Singapore lead to the spread of “fake news” laws that require platforms to adjudicate truth; if the authoritarian net of China and Iran continues to spread to Russia, Turkey, Hungary, the Philippines, and beyond; if …


If protections of the public conversation on the net are killed, then the public conversation will suffer and voices who could never be heard in big, old media and in big, old, top-down institutions like politics will be silenced again, which is precisely what those who used to control the conversation want. We’re in early days, friends. After five centuries of the Gutenberg era, society is just starting to relearn how to hold a conversation with itself. We need time, through fits and starts, good times and bad, to figure that out. We need our freedom protected.


Without online speech platforms and their protection and freedom, I do not think we would have had #metoo or #blacklivesmatter or #livingwhileblack. Just to see one example of what hashtags as platforms have enabled, please watch this brilliant talk by Baratunde Thurston and worry about what we could be missing.



None of this is simple and so I distrust all the politicians and columnists who think they have simple solutions: Just make Facebook kill this or Twitter that or make them pay or break them up. That’s simplistic, naive, dangerous, and destructive. This is hard. Democracy is hard.


The post Regulating the net is regulating us appeared first on BuzzMachine.



April 25, 2019

A Crisis of Cognition





In journalism, we think our job is to “get the story.” We teach the skill of “knowing what a story is.” We call ourselves “storytellers.” We believe that through stories — or as we also like to say when feeling uppish, “narrative”— we attract and hold attention, impart facts in engaging fashion, and explain the world.





My greatest heresy to date — besides questioning paywalls as panacea — is to doubt the primacy of the story as journalistic form and to warn of the risk of valuing drama, character, and control over chaotic reality. Now I’ll dive deeper into my heretical hole and ask: What if the story as a form, by its nature, is often wrong? What if we cannot explain nearly as much as we think we can? What if our basis for understanding our world and the motives and behaviors of people in it is illusory? What would that mean for journalism and its role in society? I believe we need to fundamentally and radically reconsider our conceptions of journalism and I start doing that at the end of this post.









Alex Rosenberg, a philosopher of science at Duke, pulled this rug of storytelling out from under me with his new book How History Gets Things Wrong: The Neuroscience of Our Addiction to Stories. In it, he argues that the human addiction to the story is an extension of our reliance on the theory of mind. That theory holds that in our brains, humans balance beliefs and desires to decide on action. The theory, he explains, springs from lessons we as humans learned on the veldt, where we would mind-read — that is, use available information about our environment and others’ goals and past actions to predict the behavior of the antelope that is our quarry; the lion we are competing with; and our fellow tribesmen with whom we either compete or must trust to collaborate. “Since mind readers share their target animals’ environments, they have some sensory access to what the target animals see, hear, smell, taste, and so on,” Rosenberg says.





Humans in the bush became proficient at predicting the immediate behavior of other animals and humans, which led their literate descendants to believe they could not only predict behavior in the now but also explain the past. Rosenberg questions historical narrative, pointing out that if we really could ascertain the motives of actors in the past with verifiable accuracy, there would not be so many books with dueling theories as to why the King or Kaiser did this or that. The theory of mind also fails when trying to predict human behavior ahead of time — just look at how awful political pundits are at foretelling elections. Rosenberg writes:





The progression from a (nearly) innate theory of mind to a fixation on stories — narrative — was made in only a few short steps. We went from explaining how and why we did things in the present, to explaining how and why we did things in the past, to explaining how and why others did things in the present, then the past, and finally to explaining how others did things with, to, against, and for still others.





Voilà narrative.





And we love narrative. “Neuroscientists have shown that hearing a story, especially a tension-filled one in which the protagonists’ emotions are involved, is followed by the release of pleasure-producing hormones such as oxytocin, which is also released during orgasm…” (Indeed, research shows that oxytocin improves “mind-reading” in humans.) Rosenberg says later: “Narratives move us. In fact, they move entire nations.” (See: Edward Bernays’ Propaganda.)





But Rosenberg’s coup de grâce against the theory of mind — and the basis of his book — is that neuroscience cannot find a sequence in the brain that balances stored beliefs with desires to arrive at a behavior. He writes that “the theory of mind and neuroscientific theory turn out to be logically incompatible.” I will leave it to you to buy his book and read his detailed scientific explanation of meaning and memory, of neurons and content, of rats’ brains and humans’. For the sake of this brief provocation, suffice it to say that neuroscientists’ observation of the brain does not confirm the theory of mind, the fundamental belief about human behavior that informs our every speculation about motives and actions in the stories we create.





What, then, of the first draft of history?



If that is Rosenberg’s view of history, I wondered what his view would be of the first draft of history — journalism. So I emailed to ask him and he kindly responded, observing that journalists “keep asking the question ‘how did you feel about…’ that invites the interviewee to roll out the beliefs and desires that drove their actions.” He acknowledges that our business model drives us to attract large audiences “in the face of the public’s demands for a good story.” Indeed, Rosenberg himself admits he is a sucker for a good story; we all are.





So what do we turn to instead of the story? “My message isn’t that journalists have to work harder to dig out the real motives behind the actions they report,” Rosenberg emailed me. “It’s that they need to change their target and their approach to it. Stop trying to explain what people do as actions driven by motives, and start taking on major social trends and figure out how the structure of cultural variation and selection imposes outcomes.”





In a panel about the seduction of storytelling I organized at the International Journalism Festival in Perugia, I was asked to reread that last sentence of Rosenberg’s email three times, so boggling is it for us storytellers. Rosenberg is on one level saying that we journalists should focus on issues and trends over personalities and predictions — something friend Jay Rosen argues often. In that panel, Rosen said that the report, the discussion, and the investigation are more reliable units of journalism than the story and our skill is more verification than storytelling. But on a more foundational level, Rosenberg is warning in his email — as he does in his book — that society’s progress is a product of natural selection and that we are all subjects in a giant matrix of game theory. That is to say that journalists or historians cannot predict or explain human behavior based on motive or purpose but instead should analyze changes in society based on the harsh reality of natural selection and survival of the fittest: life as a nasty, brutish competition. Sounds about right, eh?





To put this worldview in greater context, Rosenberg says that Newton robbed us of our belief that the universe had purpose — divine purpose — and was instead ruled by laws of nature and science. Darwin did likewise regarding biology on earth, robbing evolution of grander purpose in favor of natural selection and survival of the fittest. Now, Rosenberg says, neuroscience robs us of our belief in our own purpose. “Neuroscience has shown that, despite their appearance, human behaviors aren’t really driven by purposes, ends, or goals,” he writes. Yes, we appear to have a goal when we choose one path versus another, but Rosenberg argues that decision could be determined by patterns in memory — experience or instinct — or rewards. “As in all the rest of the biological domain, there are no purposes, just a convincing illusion of purpose,” Rosenberg says. “Neuroscience is completing the scientific revolution by banishing purpose from the last domain where it’s still invoked to explain and predict.”





The more we know, the less we can explain



Let that last notion about banishing purpose from our lives sink into your epistemological guts, then I’ll deliver another swift kick, courtesy of my friend David Weinberger, coauthor of the seminal work of web culture from exactly 20 years ago this month, The Cluetrain Manifesto, and author of Small Pieces Loosely Joined, Everything is Miscellaneous, and Too Big to Know. His new book, Everyday Chaos, is out in May (on May 15 he and I will be discussing it in New York; you can reserve a seat at that link and preorder the book now).









In Everyday Chaos, Weinberger examines the implications of machine learning, artificial intelligence, and other data-fed and algorithmically driven means of predicting events and behaviors. Says Weinberger, even simple A/B testing “works without needing, or generating, a hypothesis about why it works.” In other words, data and formulae can predict human behavior more accurately than fellow humans can, relying as we do on our theory of mind and storytelling. These machines cannot be expected to always provide explanations; they sometimes simply predict what will happen without having to say why. So much for the fifth W of journalistic ledes. Weinberger writes:





Deep learning’s algorithms work because they capture better than any human can the complexity, fluidity, and even beauty of a universe in which everything affects everything else, all at once.





As we will see, machine learning is just one of many tools and strategies that have been increasingly bringing us face to face with the incomprehensible intricacy of our everyday world. But this benefit comes at a price: we need to give up our insistence on always understanding our world and how things happen in it.
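To make Weinberger's point about A/B testing concrete, here is a toy sketch of my own (the headlines and click propensities are invented): the test reliably finds the better headline while generating no hypothesis about why it wins.

```python
# A toy A/B test: prediction (which headline performs better) without
# explanation (why). All numbers here are invented for the demo.

import random

TRUE_PROPENSITY = {"Headline A": 0.04, "Headline B": 0.06}  # unknown to the tester

def clicked(headline: str) -> bool:
    # Stand-in for a real reader's behavior.
    return random.random() < TRUE_PROPENSITY[headline]

def ab_test(trials_per_arm: int = 20_000) -> str:
    clicks = {h: sum(clicked(h) for _ in range(trials_per_arm))
              for h in TRUE_PROPENSITY}
    # The only output is which arm won: no why, no theory of the reader.
    return max(clicks, key=clicks.get)

print(ab_test())  # almost always "Headline B"
```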





Yes, machine learning may enable us to better predict cancer or market movements or traffic accidents, saving time, money, even lives. Weinberger says: “Our new engines of prediction are able to make more accurate predictions and to make predictions in domains that we used to think were impervious to them because this new technology can handle far more data, constrained by fewer human expectations about how that data fits together, with more complex rules, more complex interdependencies, and more sensitivity to starting points.” But with that benefit, we need to give up on our belief in stories and the theory of mind, not to mention our reliance on always being able to uncover knowable laws. We need to give up on our expectation of explanation for why things happen — even for why we do things.





Returning to Rosenberg, he sent me another piece he wrote in which he said that artificial intelligence algorithms work like our brains, “employing a Darwinian learning algorithm and so do we.” But that process of testing possible outcomes before deciding on one does not bring insight or explanation. “When success is a matter of tinkering, trying anything and seeing what works, there is no scope for insight, no need for it.”
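A trivial sketch of that tinkering, of my own construction rather than Rosenberg's: random trial-and-error converges on what works while yielding no insight into why it works. The fitness function is an arbitrary stand-in for an opaque environment.

```python
# "Trying anything and seeing what works": random search keeps whatever
# scores better and never forms an explanation. The target value 3.7 is
# arbitrary, and the searcher never learns that it exists.

import random

def fitness(x: float) -> float:
    return -(x - 3.7) ** 2  # opaque environment; peak at 3.7

best = random.uniform(-10, 10)
for _ in range(100_000):
    candidate = random.uniform(-10, 10)
    if fitness(candidate) > fitness(best):
        best = candidate  # keep what works; no hypothesis required

print(round(best, 2))  # ~3.7, discovered with no scope for insight
```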





In all of this I see a coming crisis of cognition. If change and uncertainty have led us to the apparent crisis of civilization we are seeing today — with the powerful (white, male) incumbents fearful of their dethroning by alien man or machine — I shudder to think what happens to the public conversation when its fundamental grounding in the theory of mind and certainty of the neat narrative arc of the story is exploded.





I also shudder to think what becomes of media. Says Weinberger:





Why have we so insisted on turning complex histories into simple stories? Marshall McLuhan was right: the medium is the message. We shrank our ideas to fit on pages sewn in a sequence that we then glued between cardboard stops. Books are good at telling stories and bad at guiding us through knowledge that bursts out in every conceivable direction, as all knowledge does when we let it.





But now the medium of our daily experiences — the internet — has the capacity, the connections, and the engine needed to express the richly chaotic nature of the world.





Chaos is what journalism promises to tame. But journalism fails. It always has. The world is less explainable than we would like to admit.





Radical reformulation of journalism



Mind you, I’m not killing the story; it is too ingrained in our literal DNA to extinguish. Let’s also be clear that the word “story” is overused in our field to refer to what should usually be called articles as well as topics.





I do, however, celebrate efforts to free journalism from the presumption of the story. This is why I am enthused about my current entrepreneurial student Elisabetta Tola’s efforts to demonstrate journalism rooted in the scientific method. It’s why I am equally excited about Eve Pearlman’s efforts at Spaceship Media to build journalism around the public conversation, not media’s content, as we teach at Newmark in Social Journalism. I am eager for more examples.





But Rosenberg and Weinberger inspire a more radical reformulation of journalism. Journalism requires a different starting point: not getting and writing stories to fill a Gutenberg-era product called a publication, not convincing ourselves and our public that we can summarize and explain their world in the neat confines of text, not merely saying what happened today or will tomorrow. Instead, I want to imagine a journalism that begins with the problems we see and reaches across disciplines to seek solutions. (You might expect me to turn to technology but, no, I am looking to academic fields of study that have much to teach us about the society we serve.) Thus a reimagined journalism would not act as gatekeeper but as bridge.





If, for example, we believe a key problem in society today is the demagogues’ demonization of The Other, then let us look to neuroscience for understanding of the instincts authoritarians exploit. See this article in Foreign Affairs by Stanford neuroscientist Robert Sapolsky about our responses to group identity and threat. “Our brains distinguish between in-group members and outsiders in a fraction of a second, and they encourage us to be kind to the former but hostile to the latter,” Sapolsky writes. “These biases are automatic and unconscious and emerge at astonishingly young ages.” But Sapolsky says we can realistically hope for change. “The Swedes,” he points out, “spent the seventeenth century rampaging through Europe; today they are, well, the Swedes.” He continues: “Although human biology makes the rapid, implicit formation of us-them dichotomies virtually inevitable, who counts as an outsider is not fixed. In fact, it can change in an instant.” Thus the question is, how do we make outsiders insiders? Or as I’ve been fond of putting it, how do we make strangers less strange? This might mean enabling the outsiders to tell their stories (you see, I’m not unalterably opposed to stories). It might mean educating one group about another’s circumstances. It might mean bringing strangers together to model peaceful behavior. It might mean trying to get people to like each other more than our stories. (How about oxytocin levels as a metric to replace page views? [I’m joking…. I think.])





To understand and reflect communities to each other, we can turn to anthropology with its discipline of observation and evidence, which does not — as news stories too often do — take one person as the exemplar for a large, odd group (for example, The New York Times teaching us that white nationalists, too, eat at Panera). In his survey, Anthropology: Why it Matters, Tim Ingold of the University of Aberdeen decrees, “Taking others seriously is the first rule of my kind of anthropology.” Just like journalists, anthropologists grapple with the concept of objectivity, of distance from subjects, of exploitation of their stories. Ingold rejects objectivity. His purpose “is not to interpret or explain the ways of others; not to put them in their place or consign them to the ‘already understood’. It is rather to share in their presence, to learn from their experiments in living, and to bring this experience to bear on our own imaginings of what human life could be like, its future conditions and possibilities.” Ingold echoes the great journalism teacher James Carey when he talks about the primacy not of conclusions but of conversation.





This is not to catalogue the diversity of human lifeways but to join the conversation. It is a conversation, moreover, in which all who join stand to be transformed. The aim of anthropology, in short, is to make a conversation of human life itself. This conversation is not just about the world…. It is the world. It is the one world we inhabit.





In a sense, journalists ask, “How do they live?” Ingold says the question the communities ask is, “How should we live?” Enter the verb “should” and we turn to philosophers and ethicists, who pose larger questions about how we are treating each other today, about the kind of society we want to build, about how we see ourselves in how we treat others. Perhaps the journalist’s job then could be to ask factions of society to reflect on their own behavior or to give those excluded from power the opportunity to reflect themselves. For this, we have disciplines devoted to African-American, Latinx, women’s, and LGBTQ studies to help.





Let us say the problem to attack is our epistemological crisis and alternative facts. We could look to cognitive science to understand how misinformation lodges in the brain; see this article by a professor in that field, Julian Matthews of Monash University. Of course, we also need to look to education to understand how to dislodge misinformation and propaganda and install reason and facts. See also this excellent review by Daniel Kreiss of three books about the 2016 election, inspiring various solutions: One book, Cyberwar, measures the impact of the Russians (and a solution may be to judge American media for its complicity and vulnerability); another, Network Propaganda, argues the problem is Fox News et al (and proposes, as I have, the need to fund responsible conservative competition); the third, Identity Crisis, says the problem is not epistemology but identity — our ongoing American identity crisis regarding racism (to which, of course, there is no simple solution).





Another heresy of mine is debating the value of news literacy because it is too media-centric — if journalism needs a user manual, then the problem is probably journalism itself — and is perhaps aimed at the wrong population: the young. Weeks ago, I wrote about an NYU/Princeton study that found it’s not kids who are sharing disinformation online but instead people who look like me: old, white men. I thought about writing a book for them — Dear Grandpa — and as I outlined the idea, I realized that the problem isn’t Grandpa’s parsing of facts but instead his anger. How did this privileged white man become so mad? We probably know the answer: Fox News and talk radio. But what made him so vulnerable to manipulation? For this, we should turn to psychology. Then we might decide that what we really need is not stories about political fights but instead massive group therapy: journalism as couch.





I could go on — and will in the future. But you get the point. We have been too insular in journalism, looking to ourselves for solutions to the field’s problem and defining that problem too narrowly as finding ways to maintain what we have always done. That’s why I so welcome Rosenberg’s and Weinberger’s challenges to our ways of thinking about our most fundamental ideas of ourselves as storytellers and explainers. With no rug underneath us, we are forced to reconsider everything: what society needs, what journalism should do, what journalism is. To do that, we need to listen outside of ourselves, to the communities we serve (and especially those we haven’t served) and to disciplines other than our own — all those I mentioned above plus design, economics, sociology, data science, computer science, engineering, criminal justice (or rather, just justice), law, public policy, and others — each of which can help us reconsider society’s problems and goals from different perspectives. Then we can redefine journalism. What’s needed is radical thinking. I, for one, have not been radical enough. I will try harder.









If, perchance, you’ve not had enough of the topic, here’s video of that panel on the story at the International Journalism Festival.








The post A Crisis of Cognition appeared first on BuzzMachine.



April 1, 2019

Proposals for Reasonable Technology Regulation and an Internet Court





I have seen the outlines of a regulatory and judicial regime for internet companies that begins to make sense to me. In it, platforms set and are held accountable for their standards and assurances while government is held to account for its job — enforcing the law — with the establishment of internet courts.





I have not been a fan of net regulation to date, for reasons I’ll enumerate below. Even Mark Zuckerberg is inviting regulation, though I don’t agree with all his desires (more on that, too, below). This is not to say that I oppose all regulation of the net; when there is evidence of demonstrable harm and consideration of the impact of the regulation itself — when there is good reason — I will favor it. I just have not yet seen a regulatory regime I could support.





Then I was asked to join a Transatlantic High-Level Working Group on Content Moderation and Freedom of Expression organized by former FCC commissioner Susan Ness under the auspices of Penn’s Annenberg Public Policy Center and the University of Amsterdam’s Institute for Information Law. At the first meeting, held in stately Ditchley Park (I slept in servants’ quarters), I heard constructive and creative ideas for monitored self-regulation and, intriguingly, a proposal for an internet court. What I’m about to describe is not a summary of the deliberations. Though discussion was harmonious and constructive, I don’t want to present this as a conclusion or as consensus from the group, only what most intrigued me. What I liked about what I’m about to outline is that it separates bad behavior (defined and dealt with by companies) from illegal behavior (which must be the province of courts) and enables public deliberation of new norms. Here’s the scenario:





A technology company sets forth a covenant with its users and authorities warranting what it will provide. Usually, this document obligates users to community standards to govern unwanted behavior and content. But this covenant should also obligate the company to assurances of what it will provide, above and beyond what the law requires. These covenants can vary by platform and nation. The community of users should be given opportunity for input to this covenant, which a regulator may approve.

In the model of the U.S. Federal Trade Commission, liability arises for the company when it fails to meet the standards it has warranted. A regulator tracks the company’s performance and responds to complaints with the enforcement cudgel of fines. This monitoring requires the company to provide transparency into certain data so its performance can be monitored. As I see it, that in turn requires the government to give safe harbor to the company for sharing that data. Ideally, this safe harbor also enables companies to share data — with privacy protected — with researchers who can also monitor impact. (Post Cambridge Analytica, it has become even more impossible to pry data from tech companies.)
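A minimal sketch of that FTC-style mechanism, with warranted assurances and thresholds I invented for illustration: the regulator never judges content, only whether reported performance meets the covenant.

```python
# Say what you do, do what you say: liability attaches to breaking a
# warranted assurance, not to any individual post. All fields and
# thresholds below are hypothetical.

from dataclasses import dataclass

@dataclass
class Covenant:
    max_hours_to_review_flagged_post: float
    min_share_of_appeals_heard: float

@dataclass
class PerformanceReport:  # data shared under safe harbor for monitoring
    median_hours_to_review: float
    share_of_appeals_heard: float

def regulator_check(c: Covenant, r: PerformanceReport) -> list:
    """Return the covenant breaches; fines follow breaches."""
    breaches = []
    if r.median_hours_to_review > c.max_hours_to_review_flagged_post:
        breaches.append("review slower than warranted")
    if r.share_of_appeals_heard < c.min_share_of_appeals_heard:
        breaches.append("fewer appeals heard than warranted")
    return breaches

print(regulator_check(Covenant(24.0, 0.95), PerformanceReport(30.0, 0.97)))
# ['review slower than warranted']
```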



Now draw a hard, dark line between unwanted behavior and illegal acts. A participant in the meeting made an innovative proposal for the creation of national internet courts. (I wish I could credit this person but under the Chatham House Rule, they wished to remain unnamed, though they gave me permission to write about the idea.) So:





Except in the most extreme matters (e.g., tracking, reporting, and eliminating terrorist incitement or child porn), a company’s responsibility to act on illegal content or behavior arises after the company has been notified by users or authorities. Once notified, the company is obligated to take action and can be held liable by the government for not responding appropriately.

The company can refer any matters of dispute to an internet court, newly constituted under a nation’s laws with specially trained judges and systems of communication that enable it to operate with speed and scale. If the company is not sure what is illegal, the court should decide. If the object of the company’s actions — take-down or banning — wishes to appeal, the court will rule. The company will have representation in court and affected parties may as well.

The participant who proposed this internet court argued, eloquently and persuasively, that the process of negotiating legal norms, which in matters online is now occurring inside private corporations, must occur instead in public, in courts, and with due process.

The participant also proposed that the court would be funded by a specific fee or tax on the online companies. I suspect that the platforms would gladly pay if this got them out of the position of having to enforce vague laws with undue speed and with huge fines hanging over their heads.



That is a regulatory and legal regime — again, not a proposal, not a conclusion, only the highlights that impressed me — which strikes me as rational, putting responsibilities in the appropriate bodies; allowing various platforms and communities to be governed differently and appropriately for themselves; and giving companies the chance to operate positively before assuming malign intent. Note that our group’s remit was to look at disinformation, hate speech, and other unacceptable behavior alongside protection of freedom of expression and assembly and not at other issues, such as copyright — though especially after the disastrous Articles 11+13 in Europe’s new copyright legislation, that is a field that is crying for due process.





The discussion that led here was informed by very good papers written about current regulatory efforts and also by the experience of people from government, companies (most of the largest platforms were not represented by current executives), and academics. I was most interested in the experience of one European nation that is rather quietly trying an experiment in regulation with one of the platforms, in essence role-playing between government and a company in an effort to inform lawmakers before they write laws.





In the meeting, I was not alone in beginning every discussion urging that research must be commissioned to inform any legislative or regulatory efforts, gathering hard evidence and informing clear definitions of harm and impact. These days, interventions are being designed under presumptions that, for example, young people are unable to separate fact from falsity and are spreading the latter (this research says the real problem is not the kids but their grandpas); or that the internet has dealt us all into filter bubbles (these studies referenced by Oxford’s Rasmus Kleis Nielsen do not support that). To obtain that evidence, I’ll repeat that companies should be given safe harbor to share data — and should be expected to then do so — so we can study the reality of what is happening on the net.





At Ditchley, I also argued — to some disagreement, I’ll confess — that it would be a dangerous mistake to classify the internet as a medium and internet companies as publishers or utilities. Imagine if Facebook were declared to be both and then — as is being discussed, to my horror, on the American right — were subjected to equal-time regulation. Forcing Facebook to give presence and promotion to certain political views would then be tantamount to walking into the Guardian editor-in-chief’s office and requiring her to publish Nigel Farage. Thank God, I’m confident she wouldn’t. And thank our constitutional authors, we in the United States have (at least for now) a First Amendment that should forbid that. Rather than thinking of the net as a medium — and of what appears there as content — I urged the group (as I do to anyone who’ll read me here) to think of it instead as a mechanism for connections where conversation occurs. That public conversation, with new voices long ignored and finally heard, deserves protection. That is why I argue that the net is neither publisher nor utility but something new: the connection machine.





There were other interesting discussions in the meeting — for example, about whether to ban foreign interference in a nation’s elections and political discussion. That idea unraveled under examination as that could also prevent organizing international campaigns for, say, climate reform or democracy. There was also much discussion about the burden regulation puts on small companies — or larger companies in smaller countries — raising the barrier to entry and making big companies, which have the lawyers and technologists needed to deal with regulation, only bigger and more powerful.





Principles for legislation



It is critical that any discussion of legislative efforts begin at the level of principles rather than as a response to momentary panic or political point-scoring (in Europe, pols apparently think they can score votes by levying big fines on large, American companies; in America, certain pols are hoping to score by promising — without much reason, in my opinion — to break successful companies up).





According to many in the working group meeting, the best place to begin is with the Universal Declaration of Human Rights, namely:





Article 19.
Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.





Article 20.
(1) Everyone has the right to freedom of peaceful assembly and association.
(2) No one may be compelled to belong to an association.





It is no easy matter to decide on other principles that should inform legislation. “Fake news” laws say that platforms must eradicate misinformation and disinformation. Truth is a terrible standard, for no one — especially not the platforms — wants to be its arbiter and making government that arbiter is a fast track to authoritarianism (see: China). Discerning truth from falsehood is a goal of the public conversation and it needs to flow freely if bumptiously to do that.





Civility seems an appealing standard but it is also troubling. To quote Joan Wallach Scott in the just-published Knowledge, Power, and Academic Freedom: “The long history of the notion of civility shows that it has everything to do with class, race, and power.” She quotes Michael Meranze arguing that “ultimately the call for civility is a demand that you not express anger; and if it was enforced it would suggest there is nothing to be angry about in the world.” Enforcement of civility also has a clear impact on freedom of expression. “Hence the English laws regulating language extended protection only to the person harmed by another’s words, never to the speaker,” explains Debora Shuger in Censorship & Cultural Sensibility: The Regulation of Language in Tudor-Stuart England. When I spoke with Yascha Mounk on his podcast about this question, he urged holding onto civility, for he said one can call a nazi a nazi and still be civil. Germany’s NetzDG leans toward enforcement of civility by requiring platforms to take down not only hate speech but also “defamation or insult.” (Google reported 52,000 such complaints and took down 12,000 items as a result.) But again, sometimes, insult is warranted. I say that civility and civilization cannot be legislated.





Harm would be a decent standard if it were well-researched and clearly defined. But it has not been.





Anonymity is a dangerous standard, for requiring verified identity endangers the vulnerable in society, gives a tool of oppression to autocratic regimes, and is a risk to privacy.





Of course, there are many other working groups and many other convenings hashing over just these issues and well they should. The more we have open discussion with input from the public and not just lobbyists, the less likely that we will face more abominations like Articles 11+13. This report by Chris Middleton from the Westminster eForum presents more useful guidelines for making guidelines. For example, Daniel Dyball, UK executive director of the Internet Association,





proposed his own six-point wishlist for regulation: It should be targeted at specific harms using a risk based approach, he said; it should provide flexibility to adapt to changing technologies, services, and societal expectations; it should maintain the intermediary liability protections that enable the internet to deliver benefits to consumers, society, and the economy; it should be technically possible to implement in practice; it should provide clarity and certainty for consumers, citizens, and internet companies; and finally, it should recognise the distinction between public and private communications — an issue made more difficult by Facebook….





Middleton also quotes Victoria Nash of the Oxford Internet Institute, who argued for human rights as the guiding principle of any regulation and for a commitment to safe harbor to enable companies to take risks in good faith. “Well-balanced immunity or safe harbor are vital if we want responsible corporate behavior,” she said. She argued for minimizing the judgments companies must make in ruling on content. “Nash said she would prefer judgments that concentrate on illegal rather than ‘legal but harmful’ content.” She said that laws should encourage due process over haste. And she said systems should hold both companies and governments to account, adding: “I don’t have the belief that government will always act in the public interest.” Amen. Cue John Perry Barlow.





All of which is to say that regulating the internet is not and should not be easy. The implications and risks to innovation and ultimately democracy are huge. We must hold government to account for careful deliberation on well-researched evidence, for writing legislation with clearly enforceable standards, and for enforcing those laws.





Principles for company covenants



Proposing the covenants internet companies should make with their users, the public at large, and government — and what behavior they demand from and will enforce with users — could be a useful way to hold a discussion about what we expect from platforms.





Do we expect them to eliminate misinformation, incivility, or anonymity? See my discussion above. Then how about a safe space free of hatred? But we all hate nazis and should be free to say so. See Yascha Mounk’s argument. Then how about banning bigots? There’s a good start. But who’s a bigot? It took the platforms some time to decide that Alex Jones was one. They did so only after the public was so outraged by his behavior that companies were given cover to delete him. What happens in such cases, as I argue in this piece for The Atlantic, is that standards become emergent, bottom-up, after-the-fact, and unenforceable until after the invasion.





I urge you to read Facebook’s community standards. They’re actually pretty good and yet they certainly don’t solve the problem. Twitter has rules against abusive behavior but I constantly see complaints they are not adequately enforced.





This, I think, is why Mark Zuckerberg cried uncle and outright asked for regulation and enforcement from government. Government couldn’t figure out how to handle problems online so it outsourced its job to the companies. Now the companies want to send the task back to government. In response to Zuckerberg’s op-ed, Republican FCC Commissioner Brendan Carr pushed back again: “Facebook says it’s taking heat for the mistakes it makes in moderating content. So it calls for the government to police your speech for it. Outsourcing censorship to the government is not just a bad idea, it would violate the First Amendment. I’m a no.” Well, except if it’s government that forces Facebook to take action against speech then that is tantamount to government interference in speech and a violation of the First Amendment. The real problem is the quasi-legal nature of this fight: governments in Europe and the U.S. are ordering platforms to get rid of “harmful” speech without defining harm in law and without due process. It’s a game of hot potato and the potato is in midair. [Disclosure: I raised money for my school from Facebook but we are independent of it and I receive no money personally from any platform. Through various holdings and mutual funds, I probably own stock in most major platforms.]





Zuckerberg urges a “more standardized approach” regarding harmful content as well as privacy and definitions of political advertising. I well understand his desire to find consistency. He said: “I also believe a common global framework — rather than regulation that varies significantly by country and state — will ensure that the Internet does not get fractured, entrepreneurs can build products that serve everyone, and everyone gets the same protection.”





But the internet is fractured and nations and cultures are different. A recent paper by Kieron O’Hara and Wendy Hall says the net is already split into four or five pieces: the open net of Silicon Valley, the commercial web of American business, the regulated “bourgeois internet” of Europe, the authoritarian internet of China, and the misinformation internet of Russia and North Korea.





I worry about Zuckerberg’s call for global regulation for I fear that the net will be run according to the lowest common denominator of freedom and the highest watermark of regulation.





None of this is easy and neither companies nor governments — nor us as the public — can shirk our duties to research, discern, debate, and decide on the kind of internet and society we want to build. This is a long, arduous process of trial and error and of negotiation of our new laws and norms. There’s no quick detour around it. That’s why I want to see frameworks that are designed to include a process of discussion and negotiation. That’s why I am warming to the structure I outlined above, which allows for community input into community standards and requires legislative consideration and judicial due process from government.





What I don’t want



I’ve been highly critical of much regulation to date and though I’ve written about that elsewhere, I will include my objections here, for context. In my view, attempts to regulate the net to date too often:





Spring from moral panic over evidence (see Germany’s NetzDG hate-speech law);
Are designed for protectionism over innovation (see Articles 11+13 of Europe’s horrendous new copyright law and its predecessors, Germany’s Leistungsschutzrecht or ancillary copyright and Spain’s link tax);
Imperil freedom of expression and knowledge (see 11+13, the right to be forgotten, and the French and Singaporean fake news laws, which make platforms deciders of truth);
Are conceived under vague and unenforceable standards (see where the UK is headed against “harmful content” in its Commons Report on Disinformation and “fake news”);
Transfer government authority and obligations to corporations, which now act in private and without due process as legislature, police, judge, jury, jailer, and censor (see the right to be forgotten and NetzDG);
Result in misallocation of societal resources (Facebook hired, by latest count, 30,000 — up from 20,000 — monitors looking for hate while America has fewer than 30,000 newspaper reporters looking for corruption);
Fall prey to the law of unintended consequences: making companies more responsible makes them more powerful (see GDPR and many of the rest).



And newly proposed regulation gets even worse with recent suggestions to require government permits for live streaming or to mandate that platforms vet everything that’s posted.





If this legislative juggernaut — and the moral panic that fuels it — are not slowed, I fear for the future of the net. That is why I think it is important to discuss regulatory regimes that will first do no harm.








The post Proposals for Reasonable Technology Regulation and an Internet Court appeared first on BuzzMachine.



March 3, 2019

Media Education and Change

Lately I’ve been scolding myself that I have not been radical enough — yes, me, not nearly radical enough — about rethinking journalism in our still-emerging new reality of a connected world. And if journalism requires rebuilding, then so does journalism education and all of media education.





Every fall, when I am lucky enough to talk with our entire incoming class at the Newmark J-school, I tell them that they are the ones who must reinvent journalism and media; they should learn what we teach them and then question it all to find better ways. If media will be rethought and rebuilt from the ashes, what principles might govern how we prepare our students to become authors of that change? For some discussions I’ve been having recently, I’ve been thinking about all this and so, as is my habit, I’d like to think out loud and learn from you. I’ll start by outlining a few principles that are informing my thinking and then briefly discuss how this might affect various sectors of media education:





Listen first. The net is not a medium. It is a means of connection: connecting people with each other, people with information, and information with information. It enables conversation. That conversation is the collective deliberation of a democracy. Our first duty now is to teach students to use the tools the net brings them to listen before they create; to observe communities and markets and their needs and desires; to seek out communities they have not known; to empathize with those communities; to reflect what they learn back to the public so they can check themselves; to collaborate with the public; to serve truth, especially when uncomfortable. No sector of media listens well — they think they do, but they don’t.

Champion diversity. Now that most anyone — everyone who’s connected — can speak, new voices that were never represented in media can at last be heard. That is what is scaring the old people in power, leading to the reactionary rise of Trumpism, Brexit, and many of our racist and nationalist ills today. At Newmark, diversity is the soul of our institution and its mission. A colleague of mine, Jenny Choi, recently wrote an eloquent note about the value of our students and their wide variety of lived experiences. “They are the future drivers of trust in a journalism that holds true to its core values as a public service,” she wrote. Over the years, editors — and professors — have been known to tell reporters and students that stories about their own communities aren’t big enough because they don’t appeal to everyone, to the mass. That’s wrong. We need to build curriculum that values their experience. I’ve learned that as an institution, we need to serve diversity in the field at three levels: staffing (recruiting a diverse student body), leadership (at Newmark, we are starting a new program in News Innovation and Leadership, which will require ongoing mentorship and support for people who have not had the opportunity to lead), and ownership (thus entrepreneurial journalism).

Death to the mass. All sectors of media are having great difficulty breaking themselves of their habit of selling to the mass. Our products were one-size-fits-all; our profits depended on scale. But now the net kills the mass as an idea and as a business strategy. Media must know and serve people as individuals and as members of communities. Recently I met a newspaper executive who’d just come from the music industry, where he said companies finally learned that smaller acts — which had been seen as failures supported by the blockbusters — are now the core of the business, for those artists have loyal communities that add up to scale. Thinking about our work in terms of communities-as-society rather than as mass society will have a radical impact on what we do.

Service over product. So long as we continue to teach media as the creation of a product that can be bought, sold, and controlled, our students will miss the greater opportunity to be of service to a public. It is in service that we will build value.

Service as our ethic. We need to reconsider the ethical principles and standards of all sectors of media around these ideas of connection, conversation, community, collaboration, diversity, impact, service, responsibility, and empowerment. We need to ask how we are helping communities improve their lots in life. We need to convene communities in conflict into civil, informed, and productive conversation (that is my new working definition of journalism and a mission all media and internet companies should share). We need to work transparently and set our standards in public, with the public. We need to be answerable and accountable to those communities, measuring our success and our value against their standards and needs, not ours.

Be responsible stewards. I came to teach journalism students the business of journalism — at Newmark we call it entrepreneurial journalism — because we need to make them responsible leaders who will set sustainable strategies for the future of media. They need to learn how to create value and earn reward for it; profit is not a sin. Our creative graduates should sit at the same table with business executives in the industry; how do we equip them to do that?

Teach change. In media education, this has tended to mean teaching students to teach themselves how to use new tools as they arrive, which is important. But, of course, it is also vital that we teach students to change our industry, to innovate and invent, to address problems with solutions, to find opportunity in disruption, to be leaders. I don’t mean to teach them PowerPoint cant about change management and design thinking. I want them to challenge us with radical new ideas that turn each sector of media on its head. This is what I mean when I say I have not been radical enough. Their ideas could mean such heresy as throwing out the story as our essential form (for example, one of our entrepreneurial students, Elisabetta Tola, is now looking at bringing the scientific method to journalism). It could mean building an enterprise on collaboration with communities (Wikipedia showed what’s possible, but where are the copycats?). It could mean lobbying for and then creating systems of extreme transparency in government and business. I don’t know what all it could mean.

Reach across disciplines. Since I started teaching, I’ve heard academics and administrators from countless institutions salute the flag of interdisciplinary collaboration. To be honest, most of us aren’t good at it. I haven’t been. I believe we in media must reach out to other disciplines so we can learn from their expertise as they help us reimagine media:
 — Anthropology relies on a discipline of observation and evidence we could use in media. (My favorite session in Social Journalism every year is the one to which my colleague, Carrie Brown, invites in an anthropologist to teach journalists how to observe.) 
 — Psychology is a critical field especially today, as emotions and anger prove to have more impact on the public conversation than mere facts. Maybe we don’t need media literacy so much as we need group therapy. 
 — Economics, sociology and the other social sciences also study group behavior. 
 — Marketing has a discipline of metrics and measurement we could learn from. 
 — Education is a critical skill if we want to teach the public things they need to know for their own lives and things they need to know to manage their communities. 
 — The sciences can teach us the scientific method, emphasizing, as media should, evidence over narrative.
 — Computer sciences are critical, and not just for the disruption they cause and the tools they offer. Data science and machine learning have much to teach us about new sources of information and new ways to find value in it. We can also work together on the ever-greater challenge of knowing our world. My friend the philosopher David Weinberger, author of Too Big to Know, has a brilliant and provocative new book coming out called Everyday Chaos, in which he examines the paradox of the connected data age, in which knowing more makes the world more unknowable. He writes:



Deep learning’s algorithms work because they capture better than any human can the complexity, fluidity, and even beauty of a universe in which everything affects everything else, all at once.





As we will see, machine learning is just one of many tools and strategies that have been increasingly bringing us face to face with the incomprehensible intricacy of our everyday world. But this benefit comes at a price: we need to give up our insistence on always understanding our world and how things happen in it.





That conception is antithetical to the warranty media make that they can explain the world in a story. How do we build media for a world in which complexity becomes only more apparent?









In journalism, at the Newmark J-school, we’ve tried to implement various of these principles and are working on others. Social Journalism, the new degree we started, is built on the idea of journalism in service to the conversation among communities. The need to teach responsible stewardship is what led to the Entrepreneurial Journalism program. Our new program in News Innovation and Leadership will — in my hidden agenda — embed radicals, rebuilders, and diverse leaders at the top of media companies. These new programs are meant to infuse their revolutionary goodness into the entire school and curriculum. Since the start, we’ve taught all students all media and our J+ continuing education program helps them refresh those skills (we call this our 100,000-mile guarantee). We’re just beginning to make good connections across our university into other disciplines; personally, I want to do much more of that.





Advertising will require reinvention as well. Here I outlined my worries about the commodification of media, with volume-based, attention-based, mass-market advertising falling into the abyss in an abundance-based economy. Advertising is a necessity — for marketers and for media — but it has to be rebuilt around new imperatives to establish direct relationships of trust with customers who can be heard and must be respected. Programmatic advertising, microtargeting, retargeting, influencers, recirculation, and native are all crude early attempts to exploit change. Tomorrow’s advertising graduates need to come up with new ways to listen to customers’ needs and desires: advertising as feedback loop, not as megaphone to the masses. They need to do more to put the customer in control of the experience of media, including data gathering, personalization, and commerce. They will need to establish new standards of responsibility about the use of data and privacy and about the behaviors their industry values and incents (see: clickbait). How can we build the support of quality media into the ethos of advertising?





As for public relations: A decade ago, when I wrote What Would Google Do?, the advertising sage Rishad Tobaccowala speculated that PR must become the voice of the market to the company rather than of the company to the market. That brings the advertising and PR of the future closer together (or in closer conflict). By logical extension, Rishad’s dictum also means that the best PR company will fire clients that don’t listen to and respect their customers by involving them earlier in the chain of a product’s design and even a company’s strategy. An ethical PR company will refuse to countenance lies on clients’ behalf. This PR won’t just survey consumers but will teach companies how to build honest relationships with customers as people.





And broadcasting: I think I began to discern the fate of one-way media at Vidcon, where I saw what that music executive (above) told me come to life in countless communities built on real and empathetic relationships between creators and their fans. As I’ve written before, Vidcon taught me that we in nonfiction media can serve the public by creating media as social tokens, which people can use to enrich their own conversations with facts, ideas, help, and diverse voices. At the same time, fictional media must — especially today — take greater responsibility to challenge the public to a better expression of itself. Years ago, Will & Grace (and many shows before and after) made Americans realize they all knew and loved someone gay; it played its part in challenging the closeting of LGBTQ Americans. Today, we need fictional media that makes strangers less strange.





As I said above, we need the study of communications (I refuse to call it mass communications) more than ever — and what a magnificent time to be a researcher examining and trying to understand the change overtaking every aspect of media. At Newmark’s Tow-Knight Center, I hope to do more to bring researchers together with technology companies so we can bring evidence to what are now mostly polemical debates about the state of social media and society. I just came from the UK and a working group meeting on net regulation (more on that another day) where I saw an urgent need for government to give safe harbor to technology companies to share data for such study.





At that meeting of tech, government, and media people, I fought — as I always do — against classifying the internet as a medium and internet companies as media companies when they are instead something entirely new. But the discussion made me think that in one sense, I’ll go along with including the internet inside media: I’d like internet studies to be part of the discipline of communications studies, with many new centers to embed the study of the net into everything we teach. What a frontier!





Or another way to look at this is that media studies could be subsumed into whatever we will call internet studies, first because it is ever more ridiculous to cut up media into silos and then stitch them back together as “multi-media” (can we retire the term already?) and second because all media are now internet media. Media are becoming a subset of the net and everything it represents: connections, conversation, data, intelligence. Does it make sense to separate what we used to call media — printed and recorded objects — from this new, connected reality?









There are so many exciting things going on in media education today. We — that is, my colleagues at Newmark — get to teach and develop social video, AR, VR, drone reporting, podcasting, data journalism, comedy as journalism, and more. I’ve also been trying to develop ideas like restructuring media curriculum around skills transcripts and providing genius bars for students to better personalize education, especially in tools and skills. I wish we were farther ahead in understanding how to use the net itself in distance and collaborative learning. All that is exciting and challenging, but I see that as mainly tactical.





Where I want to challenge myself is on the strategic level: How do we empower the generation we teach now and next to challenge all our assumptions that got us here, to save media by reinventing it, to shock and delight?





My greatest joy at Newmark is learning from the students. In Social Journalism, for example, students taught me I was wrong to send them off to find a singular community to serve; every one of them showed me how their journalism is needed where communities — plural — interact: journalism at the points of friction. They taught me the differences between externally focused journalism (informing the world about a community, as we’ve always done) and internally focused journalism (meeting a community’s information needs, as we can do now). I watched them learn that when they first observe, listen to, and build relationships with communities, they leave their notebooks and cameras — the tools of the mediator — behind, for the goal is not gathering quotes but instead gaining understanding and trust. When I still lecture them, it’s about the past as context, challenging them to decide what they should preserve and what they should break so they can build what’s new.


The post Media Education and Change appeared first on BuzzMachine.



February 24, 2019

Europe Against the Net





I’ve spent a worrisome weekend reading three documents from Europe about regulating the net:





The revived, revised, and worsened Articles 11 and 13 of the European Copyright Directive and Julia Reda’s devastating review of the impact.
The Cairncross Review of the state of journalism and the net in the UK.
The House of Commons Digital, Culture, Media, and Sport Committee Disinformation and ‘Fake News’ report.



In all this, I see danger for the net and its freedoms posed by corporate protectionism and a rising moral panic about technology. One at a time:





Articles 11 & 13: Protectionism gone mad



Article 11 is the so-called link tax, the bastard son of the German Leistungsschutzrecht, or ancillary copyright, that publishers tried to use to force Google to pay for snippets. They failed. They’re trying again. Reda, a member of the European Parliament, details the dangers:





Reproducing more than “single words or very short extracts” of news stories will require a licence. That will likely cover many of the snippets commonly shown alongside links today in order to give you an idea of what they lead to….





No exceptions are made even for services run by individuals, small companies or non-profits, which probably includes any monetised blogs or websites.





European journalists protest that this will serve media corporations, not journalists. Absolutely.





But the danger to free speech, to the public conversation, and to facts and evidence is greater. Journalism and the academy have long depended on the ability to quote — at length — source material to then challenge or expand upon or explain it. This legislation begins to make versions of that act illegal. You’d have to pay a news property for a license to quote it. Never mind that 99.9 percent of journalism quotes others. The results: Links become blind alleys sending you to god-knows-what dark holes exploited by spammers and conspiracy theorists. News sites lose audience and impact (witness how a link tax forced Google News out of Spain). Even bloggers like me could be restricted from quoting others as I did above, killing the web’s magnificent ability to foster conversation with substance.





Why do this? Because publishers think they can use their clout to get legislators to bully the platforms into paying them for their “content,” refusing to come to grips with the fact that the real value now is in the audience the platforms send to the publishers. It is corporate protectionism born of political capital. It is corrupt and corrupting of the net. It is a crime.





Article 13 is roughly Europe’s version of the SOPA/PIPA fight in the U.S.: protectionism on behalf of entertainment media companies. It requires sites where users might post material — isn’t that every interactive site on the net? — to “preemptively buy licenses for anything that users may possibly upload,” in Reda’s explanation. They will also have to deploy upload filters — which are expensive to operate and notoriously full of false positives — to detect anything that is not licensed. The net effect: Sites will not allow anyone to post any media that could possibly come from anywhere.
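
To make the mechanics concrete, here is a minimal sketch in Python of how such a filter works. It is my illustration, not any platform’s actual system; the fingerprint database and sample data are hypothetical.

    # Illustrative sketch only: a naive upload filter that blocks any upload
    # whose fingerprint matches a database of licensed/claimed works.
    import hashlib

    def fingerprint(data: bytes) -> str:
        # Exact hashing for simplicity; real filters (audio/video
        # fingerprinting) match *similar* content, not identical bytes.
        return hashlib.sha256(data).hexdigest()

    # Hypothetical fingerprints registered by rightsholders.
    LICENSED = {fingerprint(b"sample licensed work"): "Rightsholder A / Work 1"}

    def allow_upload(data: bytes) -> bool:
        # Permit the upload only if its fingerprint is unknown to the database.
        return fingerprint(data) not in LICENSED

    assert allow_upload(b"an original post")          # allowed
    assert not allow_upload(b"sample licensed work")  # blocked, even if quoting it were fair use

Flip one byte and an exact hash no longer matches, so production systems use fuzzy perceptual matching instead, and that fuzziness is precisely where quotation, parody, and criticism get caught as false positives.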





So we won’t be able to quote or adapt. Death to the meme. Yes, there are exceptions for criticism, but as Lawrence Lessig famously said, “fair use is the right to hire a lawyer.” This legislation attempts to kill what the net finally brought to society: diverse and open conversation.





Cairncross Review: Protecting journalism as it was



The UK dispatched Dame Frances Cairncross, a former journalist and economist, to review the imperiled state of news and she returned with a long and well-intentioned but out-of-date document. A number of observations:





She fails — along with many others — to define quality journalism. “Ultimately, ‘high quality journalism’ is a subjective concept that depends neither solely on the audience nor the news provider. It must be truthful and comprehensive and should ideally — but not necessarily — be edited. You know it when you see it….” (Just like porn, but porn’s easier.) Thus she cannot define the very thing her report strives to defend. A related frustration: She doesn’t much criticize the state of journalism or the reasons why trust in it is foundering, only noting its fall.

I worry greatly about her conclusion that “intervention may be needed to determine what, and how, news is presented online.” So you can’t define quality but you’re going to regulate how platforms present it? Oh, the platforms are trying to understand quality in news. (Disclosure: I’m working on just such a project, funded by but independent of Facebook.) But the solutions are not obvious. Cairncross wants the platforms to have an obligation “to nudge people towards reading news of high quality” and even to impose quotas for quality news on the platforms. Doesn’t that make the platforms the editors? Is that what editors really want? Elsewhere in the report, she argues that “this task is too important to leave entirely to the judgment of commercial entities.” But the BBC aside, that is where the task of news lies today: in commercial entities. Bottom line: I worry about *any* government intervention in speech and especially in journalism.

She rightly focuses less on national publications and more on the loss of what she calls “public interest news,” which really means local reporting on government. Agreed. She also glances by the paradox that public-interest news “is often of limited interest to the public.” Well, then, I wish she had looked at the problem and opportunity from the perspective of what the net makes possible. Why not start with new standards to require radical transparency of government, making every piece of legislation, every report, every budget public? There have been pioneering projects in the UK to do just that. That would make the task of any journalist more efficient and it would enable collaborative effort by the community: citizens, librarians, teachers, classes…. She wants a government fund to pay for innovations in this arena. Fine, then be truly innovative. She further calls for the creation of an Institute for Public Interest News. Do we need another such organization? Journalism has so many.

She explores a VAT tax break for subscriptions to online publications. Sounds OK, but I worry that this would motivate more publications to put up paywalls, further redlining quality journalism so that it is reserved for those who can afford it.

She often talks about “the unbalanced relationship between publishers and online platforms.” This assumes that there is some natural balance, some stasis that can be reestablished, as if history should be our only guide. No, life changed with the internet.

She recommends that the platforms be required to set out codes of conduct that would be overseen by a regulator “with powers to insist on compliance.” She wants the platforms to commit “not to index more than a certain amount of a publisher’s content without an explicit agreement.” First, robots.txt and such already put that in publishers’ control (see the minimal sketch after this list). Second, Cairncross acknowledges that links from platforms are beneficial. She worries about — but does not define — too much linking. I see a slippery slope to Article 11 (above) and, really, so does Cairncross: “There are grounds for worrying that the implementation of Article 11 in the EU may backfire and restrict access to news.” In her code of conduct, platforms should not impose their ad platforms on publishers — but if publishers want revenue from the platforms, they pretty much have to accept them. She wants platforms to give early warnings of changes in algorithms, but such warnings will be exploited by spammers. She wants transparency of advertising terms (what other industries negotiate in public?).

Cairncross complains that “most newspapers have lacked the skills and resources to make good use of data on their readers” and she wants the platforms to share user data with publishers. I agree heartily. This is why I worry that another European regulatory regime — GDPR — makes that nigh unto impossible.

She wants a study of the competitive landscape around advertising. Yes, fine. Note, though, that advertising is becoming less of a force in publishers’ business plans by the day.

Good news: She rejects direct state support for journalism because “the effect may be to undermine trust in the press still further, at a time when it needs rebuilding.” She won’t endorse throttling the BBC’s digital efforts just because commercial publishers resent the competition. She sees danger in giving the publishing industry an antitrust exception to negotiate with the platforms (as is also being proposed in the U.S.) because that could likely lead to higher prices. And she thinks government should help publishers adapt by “encouraging the development and distribution of new technologies and business models.” OK, but which publishers and which technologies and models? If we knew which ones would work, we’d already be using them.

Finally, I note a subtle paternalism in the report. “The stories people want to read may not always be the ones they ought to read in order to ensure that a democracy can hold its public servants properly to account.” Or the news people need in their lives might not be the news that news organizations are reporting. Also: Poor people — who would be cut off by paywalls — “are not just more likely to have lower levels of literacy than the better-off; their digital skills also tend to be lower.” Class distinctions never end.
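
Here is the sort of control I mean: a minimal robots.txt sketch. It is my illustration, not anything the review proposes, though the directives themselves are standard.

    # robots.txt on a publisher's site: bar Google News's crawler entirely
    User-agent: Googlebot-News
    Disallow: /

    # Or, in a page's HTML, allow indexing but forbid snippets:
    # <meta name="robots" content="nosnippet">

In other words, how much of a publisher’s content gets indexed is already a lever the publisher holds; what the proposed rules add is not control but obligation.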



It’s not a bad report. It is cautious. But it’s also not visionary, not daring to imagine a new journalism for a new society. That is what is really needed.





The Commons report: Finding fault



The Digital, Culture, Media and Sport Committee is famously the body Mark Zuckerberg refused to testify before. And, boy, are they pissed. Most of this report is an indictment of Facebook on many sins, most notably Cambridge Analytica. For the purposes of this post, about possible regulation, I won’t indulge in further prosecuting or defending the case against Facebook (see my broader critique of the company’s culture here). What interests me in this case is the set of committee recommendations that could have an impact on the net, including our net outside of the UK.





The committee frets — properly — over the malicious impact of Brexit. And where did much of the disinformation that led to that disaster come from? From politicians: Nigel Farage, Boris Johnson, et al. This committee, headed by a Conservative, makes no mention of those colleagues. As with the Cairncross report, why not start at home and ask what government needs to do to improve the state of its contribution to the information ecosystem? A few more notes:





Just as Cairncross has trouble defining quality journalism, the Commons committee has trouble defining the harm it sees everywhere on the internet. It puts off that critical and specific task to an upcoming Online Harms white paper from the government. (Will there also be an Online Benefits white paper?) The committee calls for holding social media companies — “which is not necessarily either a ‘platform’ or a ‘publisher’,” the report cryptically says — liable for “content identified as harmful after it has been posted by users.” The committee then goes much farther, threatening not just tech companies but technologists. My emphasis: “If tech companies (including technological engineers involved in creating the software for the companies) are found to have failed to meet their obligations under such a Code [of Ethics], and not acted against the distribution of harmful and illegal content, the independent regulator should have the ability to launch legal proceedings against them, with the prospect of large fines being administered….” Them’s fightin’ words, demonizing not just the technology and the technology company but the technologist.

Again and again in reading the committee’s report, I wrote in the margin “China” or “Iran,” wondering how the precedents and tools wished for here could be used by authoritarian regimes to control speech on the net. For example: “There is now an urgent need to establish independent regulation. We believe that a compulsory Code of Ethics should be established, overseen by an independent regulator, setting out what constitutes harmful content.” How — except in the details — does that differ from China deciding what is harmful to the minds of the masses? Do we really believe that a piece of “harmful content” can change the behavior of a citizen for the worse without many other underlying causes? Who knows best for those citizens? The state? Editors? Technologists? Or citizens themselves? The committee notes — with apparent approval — a new French law that “allows judges to order the immediate removal of online articles that they decide constitute disinformation.” All this sounds authoritarian to me and antithetical to the respect and freedom the net gives people.

The committee wants to expand the definition of personal data — which, under GDPR, is already ludicrously broad enough to include, for example, your IP address — to cover “inferred data.” I hate to think what that could do to the disciplines of machine learning and artificial intelligence — to the patterns discerned and the knowledge produced by machines.

The committee wants to impose a 2% “digital services tax on UK revenues of big technology companies.” On what basis, besides vendetta against big (American) companies?

The Information Commissioner told the committee that “Facebook needs to significantly change its business model and its practices to maintain trust.” How often does government get into the nitty-gritty of companies’ business models? And let’s be clear: The problem with Facebook’s business model — click-based, volume-based, attention-based advertising — is precisely what drove media into the abyss of mistrust. So should the government tell media to change its business model? They wouldn’t dare.

The report worries about the “pernicious nature of micro-targeted political adverts” and quotes the Coalition for Reform in Political Advertising recommending that “all factual claims used in political ads be pre-cleared; an existing or new body should have the power to regulate political advertising content.” So a government in power would clear the content of its challengers’ ads? What could possibly go wrong? And micro-targeting of one sort or another is also what enables small communities with specific interests to find each other and organize. Give up your presumptions of the mass.

The report argues “there needs to be absolute transparency of online political campaigning.” I agree. Facebook, under pressure, created a searchable database of political ads. I think Facebook should do more and make targeting data public. And I think every — every — other sector of media should match Facebook. Having said that, I still think we need to be careful about setting precedents that might not work so well in countries like, say, Hungary or Turkey, where complete transparency in political advertising and activism could lead to danger for opponents of authoritarian regimes.

The committee, like Cairncross, expresses affection for eliminating VAT taxes on digital subscriptions. “This would eliminate the false incentive for news companies against developing more paid-for digital services.” Who is to say which business model is true or false? I repeat my concern that government meddling in subscription models could have a deleterious impact on news for the public at large, especially the poor. It would also put more news behind paywalls, with less audience and thus less impact. (A hidden agenda, perhaps?)

“The Government should put pressure on social media companies to publicize any instances of disinformation,” the committee urges. OK. But define “disinformation.” You’ll find it just as challenging as defining “quality news” and “harm.”

The committee, like Cairncross, salutes the flag of media literacy. I remain dubious.

And the committee, like Cairncross, sometimes reveals its condescension. “Some believe that friction should be reintroduced into the online experience, by both tech companies and by individual users themselves, in order to recognize the need to pause and think before generating or consuming content.” They go so far as to propose that this friction could include “the ability to share a post or a comment, only if the sharer writes about the post; the option to share a post only when it has been read in its entirety.” Oh, for God’s sake: How about politicians pausing and thinking before they speak, creating the hell that is Brexit or Trump?



In the end, I fear all this is hubris: to think that we know what the internet is and what its impact will be before we dare to define and limit the opportunities it presents. I fear the paternalistic unto authoritarian worldview that those with power know better than those without. I fear the unintended — and intended — consequences of all this regulation and protectionism. I trust the public to figure it out eventually. We figured out printing and steam power and the telegraph and radio and television. We will figure out the internet if given half a chance.





And I didn’t even begin to examine what they’re up to in Australia…


The post Europe Against the Net appeared first on BuzzMachine.


