Jeff Jarvis's Blog

January 18, 2024

Make Bell Labs an internet museum

I wrote an op-ed for NJ.com and the Star-Ledger in New Jersey proposing that the soon-empty Bell Labs should become a Museum and School of the Internet. Here, for those outside the Garden State, is the text:

Bell Labs, the historic headwaters of so many inventions that now define our digital age, is closing in Murray Hill, its latest owners moving to more modern headquarters in New Brunswick. The Labs should be preserved as a historic site and more. I propose that Bell Labs be opened to the public as a museum and school of the internet.

The internet would not be possible without the technologies forged at Bell Labs: the transistor, the laser, information theory, Unix, communications satellites, fiber optics, advances in chip design, cellular phones, compression, microphones, talkies, the first digital art, and artificial intelligence — not to mention, of course, many advances in networks and the telephone, including the precursor to the device we all carry and communicate with today: the Picturephone, displayed as a futuristic fantasy at the 1964 World’s Fair.

There is no museum of the internet. Silicon Valley has its Computer History Museum. New York has museums for television and the moving image. Massachusetts boasts a charming Museum of Printing. Search Google for a museum of the internet and you’ll find amusing digital artifacts, but nowhere to immerse oneself in and study this immensely impactful institution in society.

Where better to house a museum devoted to the internet than New Jersey, home not only of Bell Labs but also at one time the headquarters of the communications empire, AT&T, our Ma Bell?

I remember taking a field trip to Bell Labs soon after this web site, NJ.com, started in 1995. I was an executive of NJ.com’s parent company, Advance. My fellow editors and I felt we were on the sharp edge of the future in bringing news online.

We thought that earned us kinship with the invention of that future that went on at Bell Labs, so we arranged a visit to the awe-inspiring building designed by Stephen F. Voorhees and opened in 1941. The halls were haunted with genius: lab after lab with benches and blackboards and history within. We must not lose that history.

We also must not lose the history of the internet as it passes us by in present tense. In researching my book, “The Gutenberg Parenthesis: The Age of Print and its Lessons for the Age of the Internet,” I was shocked to discover that there was not a discipline devoted to studying the history and influence of print and the book until Elizabeth Eisenstein wrote her seminal work, “The Printing Press as an Agent of Change,” in 1979, a half-millennium after Gutenberg. We must not wait so long to preserve memories and study the importance of the net in our lives.

The old Bell Labs could be more than a museum, preserving and explaining the advances that led to the internet. It could be a school. After leaving Advance in 2006, I became a journalism professor at CUNY’s Newmark School of Journalism, from which I am retiring.

I am less interested now in studying journalism than in the greater, all-enveloping subject: the internet. My dream is to start a new educational program in Internet Studies, to bring the humanities and social sciences to research the internet, for it is much more than a technology; it is a human network that reflects both human accomplishment and human failure.

Imagine if Bell Labs were a place where scholars and students in many disciplines — technologies, yes, but also anthropology, sociology, psychology, history, ethics, economics, community studies, design — could gather to teach and learn, discuss and research.

Imagine, too, if a New Jersey university could use the space for classes and events.

There is a model for this in New Jersey in what Montclair State University is doing in Paterson, developing and operating a museum devoted to the history of Negro League baseball in the historic Hinchliffe Stadium. This is the kind of university-community collaboration that could enrich the space of Bell Labs with energy and life.

There is some delicious irony in proposing that the internet be memorialized in what was once an AT&T facility, for the old telephone company resisted the arrival of the internet, hoping we would pay by the minute for long-distance calls forever.

In 1997, David Isenberg, a 12-year veteran of Bell Labs, wrote an infamous memo telling his bosses they were wrong to build intelligent networks and should instead learn the value of the stupid network that anyone could connect to: the internet.

Isenberg’s web site says the memo “was received with acclaim everywhere in the global telecommunications community with one exception — at AT&T itself! So Isenberg left AT&T in 1998.”

How wonderful if, in the end, Bell Labs could claim to become a forever home for that network that has changed the world.

The post Make Bell Labs an internet museum appeared first on BuzzMachine.


January 11, 2024

In the echo chamber

Well, that was surreal. I testified in a hearing about AI and the future of journalism held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Here is my written testimony and here’s the Reader’s Digest version in my opening remarks:

It was a privilege and honor to be invited to air my views on technology and the news. I went in knowing I had a role to play, as the odd man out. The other witnesses were lobbyists for the newspaper/magazine and broadcast industries and the CEO of a major magazine company. The staff knew I would present an alternative perspective. My fellow panelists noted before we sat down — nicely — that they disagreed with my written testimony. Job done. There was little opportunity to disagree in the hearing, for one speaks only when spoken to.

What struck me about the experience is not surprising: They call the internet an echo chamber. But, of course, there’s no greater echo chamber than Congress: lobbyists and legislators agreeing with each other about the laws they write and promote together. That’s what I witnessed in the hearing in a few key areas:

Licensing: The industry people and the politicians all took as gospel the idea that AI companies should have to license and pay for every bit of media content they use. 

I disagree. I draw the analogy to what happened when radio started. Newspapers tried everything to keep radio out of news. In the end, to this day, radio rips and reads newspapers, taking in and repurposing information. That’s to the benefit of an informed society.

Why shouldn’t AI have the same right? I ask. Some have objected to my metaphor: Yes, I know, AI is a program and the machine doesn’t read or learn or have rights any more than a broadcast tower can listen and speak and vote. I spoke metaphorically, for if I had instead argued that, say, Google or Meta has a right to read and learn, that would have opened up a whole can of PR worms. The point is obvious, though: If AI creators were required by law to license *everything* they use, that would grant them lesser rights than media — including journalists, who, let’s be clear, read, learn from, and repurpose information from each other and from sources every day.

I think there’s a difference in using content to train a model versus producing output. It’s one matter for large language models to be taught the relationship of, say, the words “White” and “House.” I say that is fair and transformative use. But it’s a fair discussion to separate out questions of proper acquisition and terms of use when an application quotes from copyrighted material from behind a paywall in its output. The magazine executive cleverly conflated training and output, saying *any* use required licensing and payment. I believe that sets a dangerous precedent for news media itself. 

If licensing and payment is required for all use of all content, then I say the doctrine of fair use could be eviscerated. The senators argued just the opposite, saying that if fair use is expanded, copyright becomes meaningless. We disagree. 

JCPA: The so-called Journalism Competition and Preservation Act is a darling of many members of the committee. Like Canada’s disastrous Bill C-18 and Australia’s corrupt News Media Bargaining Code — which the senators and the lobbyists think are wonderful — the JCPA would allow large news organizations (those that earn more than $100,000 a year, leaving out countless small, local enterprises) to sidestep antitrust and gang together and force platforms to “negotiate” for the right to link to their content. It’s legislated blackmail. I didn’t have the chance to say that. Instead, the lobbyists and legislators all agreed how much they love the bill and can’t wait to try again to pass it. 

Section 230: Members of the committee also want to pass legislation to exclude generative AI from the protections of Section 230, which enables public discourse online by protecting platforms from liability for what users say there while also allowing companies to moderate what is said. The chair said no witness in this series of hearings on AI has disagreed. I had the opportunity to say that he has found his first disagreement.

I always worry about attempts to slice away Section 230’s protections like a deli bologna. But more to the point, I tried to explain that there is nuance in deciding where liability should lie. In the beginning of print, printers were held liable — burned, beheaded, and behanded — for what came off their presses; then booksellers were responsible for what they sold; until ultimately authors were held responsible — which, some say, was the birth of the idea of authorship.

When I attended a World Economic Forum AI governance summit, there was much discussion about these questions in relation to AI. Holding the models liable for everything that could be done with them would, in my view, be like blaming the printing press for what is put on and what comes off it. At the event, some said responsibility should lie at the application level. That could be true if, for example, Michael Cohen was misled by Google when it placed Bard next to search, letting him believe it would act like search and giving him bogus case citations instead. I would say that responsibility generally lies with the user, the person who instructs the program to say something bad or who uses the program’s output without checking it, as Cohen did. There is nuance.

Deep fakery: There was also some discussion of the machine being used to fool people and whether, in the example used, Meta should be held responsible and expected to verify and take down a fake video of someone made with AI — or else be sued. As ever, I caution against legislating official truth.  

The most amusing moment in the hearing was when the senator from Tennessee complained that media are liberal and AI is liberal and for proof she said that if one asks ChatGPT to write a poem praising Donald Trump, it will refuse. But it would write a poem praising Joe Biden and she proceeded to read it to me. I said it was bad poetry. (BTW, she’s right: both ChatGPT and Bard won’t sing the praises of Trump but will say nice things about Biden. I’ll leave the discussion about so-called guardrails to another day.)

It was a fascinating experience. I was honored to be included. 

For the sake of contrast, in the morning before the hearing, I called Sven Størmer Thaulow, chief data and technology officer for Schibsted, the much-admired (and properly so) news and media company of Scandinavia. Last summer, Thaulow called for Norwegian media companies to contribute their content freely to make a Norwegian-language large language model. “The response,” the company said, “was overwhelmingly positive.” I wanted to hear more. 

Thaulow explained that they are examining the opportunities for a native-language LLM in two phases: first research, then commercialization. In the research phase now, working with universities, they want to see whether a native model beats an English-language adaptation, and in their benchmark tests, it does. As a media company, Schibsted has also experimented with using generative AI to allow readers to query its database of gadget reviews in conversation, rather than just searching — something I wish US news organizations would do: Instead of complaining about the technology, use it to explore new opportunities.
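To make that concrete, what Schibsted describes is, in broad strokes, a retrieve-then-ask pattern: pull the review snippets most relevant to a reader’s question, then hand them to a language model along with the question. Below is a minimal sketch of that pattern in Python; the reviews, the scoring, and the final hand-off to a model are hypothetical stand-ins, not Schibsted’s actual system.

```python
# Minimal retrieve-then-ask sketch (hypothetical data; not Schibsted's system).
reviews = {
    "Acme X100 headphones": "Strong noise cancelling, weak battery life, comfortable fit.",
    "Bolt e-bike": "Long range, heavy frame, excellent hill-climbing assist.",
    "Nova 55 TV": "Bright panel, clumsy remote, great value at this price.",
}

def retrieve(question, corpus, k=2):
    """Rank review snippets by naive word overlap with the reader's question."""
    q_words = set(question.lower().split())
    return sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question, corpus):
    """Assemble the prompt a language model would answer from the retrieved snippets."""
    context = "\n".join(f"{name}: {text}" for name, text in retrieve(question, corpus))
    return f"Answer using only these reviews:\n{context}\n\nReader's question: {question}"

print(build_prompt("Which headphones have good noise cancelling?", reviews))
# The assembled prompt would then be sent to whichever language model the publisher uses.
```

A real product would swap the naive word-overlap scoring for proper search or embeddings, but the shape of the idea is the same: the reader converses, the system quietly fetches the relevant reviews behind the scenes.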

Media companies contributed their content to the research. A national organization made a blanket deal and individual companies were free to opt out. Norway being Norway — sane and smart — 90 percent of its books are already digitized and the project may test whether adding them will improve the model’s performance. If it does, they and government will deal with compensation then. 

All of this is before the commercial phase. When that comes, they will have to grapple with fair shares of value. 

How much more sensible this approach is than what we see in the US, where technology companies and media companies face off, with Capitol Hill as their field of play, each side trying to play the refs there. The AI companies, to my mind, rushed their services to market without sufficient research about impact and harm, misleading users (like hapless Michael Cohen) about their capabilities. Media companies rushed their lobbyists to Congress to cash in the political capital earned through journalism to seek protectionism and favors from the politicians their journalists are supposed to cover, independently. Politicians use legislation to curry favor in turn with powerful and rich industries.

Why can’t we be more like Norway?

The post In the echo chamber appeared first on BuzzMachine.


January 9, 2024

Journalism and AI


Here are my written remarks for a hearing on AI and the future of journalism for the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, on January 10, 2024.


I have been a journalist for fifty years and a journalism professor for the last eighteen.

History

I would like to begin with three lessons on the history of news and copyright, which I learned researching my book, The Gutenberg Parenthesis: The Age of Print and its Lessons for the Age of the Internet (Bloomsbury, 2023):

First, America’s 1790 Copyright Act covered only charts, maps, and books. The New York Times’ suit against OpenAI claims that, “Since our nation’s founding, strong copyright protection has empowered those who gather and report news to secure the fruits of their labor and investment.” In truth, newspapers were not covered in the statute until 1909 and even then, according to Will Slauter, author of Who Owns the News? A History of Copyright (Stanford, 2019), there was debate over whether to include news articles, for they were the products of the institution more than an author.

Second, the Post Office Act of 1792 allowed newspapers to exchange copies for free, enabling journalists with the literal title of “scissors editor” to copy and reprint each other’s articles, with the explicit intent to create a network for news, and with it a nation.

Third, exactly a century ago, when print media faced their first competitor — radio — newspapers were hostile in their reception. Publishers strong-armed broadcasters into signing the  1933 Biltmore Agreement by threatening not to print program listings. The agreement limited radio to two news updates a day, without advertising; required radio to buy their news from newspapers’ wire services; and even forbade on-air commentators from discussing any event until twelve hours afterwards — a so-called “hot news doctrine,” which the Associated Press has since tried to resurrect. Newspapers lobbied to keep radio reporters out of the Congressional press galleries. They also lobbied for radio to be regulated, carving an exception to the First Amendment’s protections of freedom of expression and the press. 

Publishers accused radio — just as they have since accused television and the internet and AI — of stealing “their” content, audience, and revenue, as if each had been granted them by royal privilege. In scholar Gwenyth Jackaway’s words, publishers “warned that the values of democracy and the survival of our political system” would be endangered by radio. That sounds much like the sacred rhetoric in The Times’ OpenAI suit: “Independent journalism is vital to our democracy. It is also increasingly rare and valuable.” 

To this day, journalists — whether on radio or at The New York Times — read, learn from, and repurpose facts and knowledge gained from the work of fellow journalists. Without that assured freedom, newspapers and news on television and radio and online could not function. The real question at hand is whether artificial intelligence should have the same right that journalists and we all have: the right to read, the right to learn, the right to use information once known. If it is deprived of such rights, what might we lose?

Opportunities

Rather than dwelling on a battle of old technology and titans versus new, I prefer to focus here on the good that might come from news collaborating with this new technology. 

First, though, a caveat: I argue it is irresponsible to use large language models where facts matter, for we know that LLMs have no sense of fact; they only predict words. News companies, including CNET, G/O Media, and Gannett, have misstepped, using the technology to manufacture articles at scale, strewn with errors. I covered the show-cause hearing for a New York attorney who (like President Trump’s former counsel, Michael Cohen) used an LLM to list case citations. Federal District Judge P. Kevin Castel made clear that the problem was not the technology but its misuse by humans. Lawyers and journalists alike must exercise caution in using generative AI to do their work. 

Having said that, AI presents many intriguing possibilities for news and media. For example:

AI has proven to be excellent at translation. News organizations could use it to present their news internationally.

Large language models are good at summarizing a limited corpus of text. This is what Google’s NotebookLM does, helping writers organize their research. 

AI can analyze more text than any one reporter. I brainstormed with an editor about having citizens record 100 school-board meetings so the technology could transcribe them and then answer questions about how many boards are discussing, say, banning books. 

I am fascinated with the idea that AI could extend literacy, helping people who are intimidated by writing tell and illustrate their own stories.

A task force of academics from the Modern Language Association concluded AI in the classroom could help students with word play, analyzing writing styles, overcoming writers’ block, and stimulating discussion. 

AI also enables anyone to write computer code. As an AI executive told me in a podcast about AI that I cohost, “English majors are taking the world back… The hottest programming language on planet Earth right now is English.” 

Because LLMs are in essence a concordance of all available language online, I hope to see scholars examine them to study society’s biases and clichés.

And I see opportunities for publishers to put large language models in front of their content to allow readers to enter into dialog with that content, asking their own questions and creating new subscription benefits. I know an entrepreneur who is building such a business. 

Note that in Norway, the country’s largest and most prestigious publisher, Schibsted, is leading the way to build a Norwegian-language large language model and is urging all publishers to contribute content. In the US, Aimee Rinehart, an executive student of mine at CUNY who works on AI at the Associated Press, is also studying the possibility of an LLM for the news industry. 

Risks

All these opportunities and more are put at risk if we fence off the open internet into private fortresses.

Common Crawl is a foundation that for sixteen years has archived the entire web: 250 billion pages, 10 petabytes of text made available to scholars for free, yielding 10,000 research papers. I am disturbed to learn that The New York Times has demanded that the entire history of its content — that which was freely available — be erased. Personally, when I learned that my books were included in the Books3 data set used to train large language models, I was delighted, for I write not only to make money but also to spread ideas. 

What happens to our information ecosystem when all authoritative news retreats behind paywalls, available only to privileged citizens and giant corporations able to pay for it? What happens to our democracy when all that is left out in public for free — to inform both citizens and machines — is propaganda, disinformation, conspiracies, spam, and lies? I well understand the economic plight of my industry, for I direct a Center for Entrepreneurial Journalism. But I also say we must have a discussion about journalism’s moral obligation to an informed society and about the right not only to speak but to learn.

Copyright

And we need to talk about reimagining copyright in this age of change, starting with a discussion about generative AI as fair and transformative use. When the Copyright Office sought opinions on artificial intelligence and copyright (Docket 2023-6), I responded with concern about an idea the Office raised of establishing compulsory licensing schemes for training data. Technology companies already offer simple opt-out mechanisms (see: robots.txt).
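As an illustration of how simple such an opt-out is, here is a sketch of a robots.txt that blocks AI-training crawlers while leaving ordinary crawling alone, checked with Python’s standard-library robots.txt parser. GPTBot (OpenAI’s crawler) and CCBot (Common Crawl’s) are publicly documented crawler names; the site address is a made-up example.

```python
# Sketch: a robots.txt that opts out of AI-training crawlers, verified with
# Python's standard library. GPTBot (OpenAI) and CCBot (Common Crawl) are
# publicly documented crawler names; the site URL is hypothetical.
from urllib.robotparser import RobotFileParser

robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("GPTBot", "https://example-news-site.com/story"))        # False: AI-training crawler blocked
print(parser.can_fetch("SomeOtherBot", "https://example-news-site.com/story"))  # True: ordinary crawlers unaffected
```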

Copyright at its origin in the Statute of Anne of 1710 was enacted not to protect creators, as is commonly asserted. Instead, it was passed at the demand of booksellers and publishers to establish a marketplace for creativity as a tradeable asset. Our concepts of creativity-as-content and content-as-property have their roots in copyright. 

Now along come machines — large language models and generative AI — that manufacture endless content. University of Maryland Professor Matthew Kirschenbaum warns of what he calls “the Textpocalypse.” Artificial intelligence commodifies the idea of content, even devalues it. I welcome this. For I hope it might drive journalists to understand that their value is not in manufacturing the commodity, content. Instead, they must see journalism as a service to help citizens inform public discourse and improve their communities. 

In 2012, I led a series of discussions with multiple stakeholders — media executives, creative artists, policymakers — for a project with the World Economic Forum on rethinking intellectual property and the support of creativity in the digital age. In the safe space of Davos, even media executives would concede that copyright is outmoded. Out of this work, I conceived of a framework I call “creditright,” which I’ve written is “the right to receive credit for contributions to a chain of collaborative inspiration, creation, and recommendation of creative work. Creditright would permit the behaviors we want to encourage to be recognized and rewarded. Those behaviors might include inspiring a work, creating that work, remixing it, collaborating in it, performing it, promoting it. The rewards might be payment or merely credit as its own reward.” It is just one idea, intended to spark discussion. 

Publishers constantly try to extend copyright’s restrictions in their favor, arguing that platforms owe them the advertising revenue they lost when their customers fled for better, competitive deals online. This began in 2013 with German publishers lobbying for a Leistungsschutzrecht, or ancillary copyright, which inspired further protectionist legislation, including Spain’s link tax, articles 15 and 17 of the EU’s Copyright Directive, Australia’s News Media Bargaining Code, and most recently Canada’s Bill C-18, which requires large platforms — namely Google and Facebook — to negotiate with publishers for the right to link to their news. To gain an exemption from the law, Google agreed to pay about $75 million to publishers — generous, but hardly enough to save the industry. Meta decided instead to take down links to news rather than being forced to pay to link. That is Meta’s right under Canada’s Charter of Rights and Freedoms, for compelled speech is not free speech. 

In this process, lobbyists for Canada’s publishers insisted that their headlines were valuable while Meta’s links were not. The nonmarket intervention of C-18 sided with the publishers. But as it turned out, when those links disappeared, Facebook lost no traffic while publishers lost up to a third of theirs. The market spoke: Links are valuable. Legislation to restrict linking would break the internet for all. 

I fear that the proposed Journalism Competition and Preservation Act (JCPA) and the California Journalism Preservation Act (CJPA) could have a similar effect here. As a journalist, I must say that I am offended to see publishers lobby for protectionist legislation, trading on the political capital earned through journalism. The news should remain independent of — not beholden to — the public officials it covers. I worry that publishers will attempt to extend copyright to their benefit not only with search and social platforms but now with AI companies, disadvantaging new and small competitors in an act of regulatory capture.

Support for innovation

The answer for both technology and journalism is to support innovation. That means enabling open-source development, encouraging both AI models and data — such as that offered by Common Crawl — to be shared freely. 

Rather than protecting the big, old newspaper chains — many of them now controlled by hedge funds, which will not invest or innovate in news — it is better to nurture new competition. Take, for example, the 450 members of the New Jersey News Commons, which I helped start a decade ago at Montclair State University; and the 475 members of the Local Independent Online News Publishers; the 425 members of the Institute for Nonprofit News; and the 4,000 members of the News Product Alliance, which I also helped start at CUNY. This is where innovation in news is occurring: bottom-up, grass-roots efforts emergent from communities. 

There are many movements to rebuild journalism. I helped develop one: a degree program called Engagement Journalism. Others include Solutions Journalism, Constructive Journalism, Reparative Journalism, Dialog Journalism, and Collaborative Journalism. What they share is an ethic of first listening to communities and their needs. 

In my upcoming book, The Web We Weave, I ask technologists, scholars, media, users, and governments to enter into covenants of mutual obligation for the future of the internet and, by extension, AI. 

There I propose that you, as government, promise first to protect the rights of speech and assembly made possible by the internet. Base decisions that affect internet rights on rational proof of harms, not protectionism for threatened industries and not media’s moral panic. Do not splinter the internet along national borders. And encourage and enable new competition and openness rather than entrenching incumbent interests through regulatory capture. 

In short, I seek a Hippocratic Oath for the internet: First, do no harm.

The post Journalism and AI appeared first on BuzzMachine.


January 3, 2024

A journalism of belief and belonging


I increasingly come to see that we are not in a crisis of information and disinformation or even of misguided beliefs, but instead of belonging. I wonder how to reimagine journalism to address this plight.

Belonging is a good. The danger is in not belonging, and filling that void with malign substitutes for true community: joining a cult of personality or conspiracies, an insurrection, or some nihilistic, depraved perversion of a religion.

What role might journalism play to fill that void instead with conversation, connection, understanding, collaboration, enlightened values, and education?

Hannah Arendt teaches us that amid the thrall and threat of totalitarianism, some people belong to nothing, and so they are vulnerable to the lure of joining a noxious cause manufactured of fear. In The Gutenberg Parenthesis, I quote her:


“But totalitarian domination as a form of government is new in that it is not content with this isolation but destroys private life as well. It bases itself on loneliness, on the experience of not belonging to the world at all, which is among the most radical and desperate experiences of man.” For Arendt, to be public is to be whole, to be private is to be deprived; to be without both is to be uprooted, vulnerable, and alone.


Arendt found in Nazi and Soviet history “such unexpected and unpredicted phenomena as the radical loss of self-interest, the cynical or bored indifference in the face of death or other personal catastrophes, the passionate inclination toward the most abstract notions as guides for life, and the general contempt for even the most obvious rules of common sense.” The lessons for these populist times are undeniable as Trump’s base shows a loss of self-interest (what did he accomplish for them over the rich?), an indifference to death (defiantly burning masks at COVID superspreader rallies), a passionate inclination toward abstract notions (are abortion and guns truly more important to their everyday lives than jobs and health?), and contempt for common sense (see: science denial and conspiracy theories).


Later in my book, I call upon the theories of sociologist William Kornhauser, who contends that the solution to such alienated mass society is to support a pluralistic society of belonging, in which people connect with communities — they “possess multiple commitments to diverse and autonomous groups” — and are less vulnerable to, or at least feel a competitive tug away from, the siren call of populist movements. I write:

A pluralistic society is marked by belonging — to families, tribes (in the best and most supportive sense, which Sebastian Junger defines as “the people you feel compelled to share the last of your food with”), clubs, congregations, organizations, communities. A pluralistic society is more secure and less vulnerable to domination as a whole, as a mass. In such associations we do not give up our individuality; we gain individual identity by connecting, gathering, organizing, and acting with others who share our interests, needs, goals, desires, or circumstances. When that occurs, in Kornhauser’s view, elites become accessible as “competition among independent groups opens many channels of communication and power.” Then, too, “the autonomous man respects himself as an individual, experiencing himself as the bearer of his own power and as having the capacity to determine his life and to affect the lives of his fellows.” In short, a pluralistic society is a diverse society.

Of course, it is diversity that most threatens the autocrats, populists, racists, and fascists who in turn imperil our nation and democracy around the world. That is why they condemn “identity politics.” The internet, I theorize, enabled voices too long not represented in so-called mainstream — i.e., old, white — mass media to at last be heard. That is what the would-be tyrants and cultists use to stir fear and recruit their rudderless hordes, preaching that the Others — Blacks, Hispanics, LGBTQ people, immigrants, “woke mobs,” and lately trans people — will come steal their jobs, homes, history, security, society, and even children.

Journalism brings information to the fight for their very souls. We stand outside reactionary revival tents with slips of paper bearing facts, thinking that can compete with the heart-thumping hymns of fear within.

In 2022 in Paris, a group of scholars gathered at the International Communication Association for a preconference that asked, “What comes after disinformation studies?” In a paper reporting on the discussion, Théophile Lenoir and Chris Anderson conclude: “Fact-checking our way out of politics will not work.”

Journalists want to believe that we are in a crisis of disinformation because they think the cure must be what they offer: information. The mania around disinformation after 2016 led to what Joe Bernstein in Harper’s calls Big Disinfo, a veritable industry devoted to dis-dis-information. I was part of that effort, having raised money after 2016 to support such projects. I’m certainly not opposed to reporting information and checking facts! But we need to concede that these are insufficient ends.

If the problem is not disinformation, then it must be belief, we say, pointing to opinion polls in which shocking numbers of citizens say they subscribe to insane ideas and conspiracy theories. Regarding such polls, I will forever return to the lessons of the late James Carey: “Public life started to evaporate with the emergence of the public opinion industry and the apparatus of polling. Polling … was an attempt to simulate public opinion in order to prevent an authentic public opinion from forming.”

Polls are fatally and fundamentally flawed because they reflect the biases of the pollsters, who insist on sorting us into their buckets, leaving no room for nuance or context. Worse than that, polls have become a mechanism for signaling belonging in some rebellious, defiant cause. Writes Reece Peck, another scholar at the ICA Paris preconference, “Political scientists have come to understand that voting is less a cool-headed deliberation on how specific policies help or hurt the voter’s material economic interest and more an occasion for expressing the voter’s cultural attachments and group loyalties.” Fringe opinions are a means for these citizens to tell pollsters, media, and authority: ‘You can’t sort us. We’ll sort ourselves.’ As researchers Michael Bang Petersen, Mathias Osmundsen, and Kevin Arceneaux have found, people who circulate hostile political information do so out of a “Need for Chaos,” a desire to “‘burn down’ the entire political order in the hope they gain status in the process.” In the hope, that is, that they will find a place to belong in their posse, their institutional insurrection. See again: Arendt.

I believe there is only one true hope to cure vulnerability to such performative belief: education. By that I do not mean media- or news-literacy, the hubristic assertion that if only people understood how journalism works and consumed its products, all would be well. I mean education, period: in the humanities, the social sciences, and science. As I write in my upcoming book, The Web We Weave, I taught in a public university because I believe education is our best hope. But universities — particularly their humanities departments — are being starved of resources and attacked by populist, right-wing forces that view education as their enemy because it is through education that they lose voters and power. This is where our underlying crisis and solution lie.

What can journalism do? I am not sure.

In any discussion of the crisis in democracy, someone will pipe up with banalities about the internet segregating us in filter bubbles and echo chambers. But research by Petersen and Axel Bruns shows that — as Petersen says — “the biggest echo chamber that we all live in is the one we live in in our everyday lives,” in the towns, jobs, and congregations we seek out to be around people like us. Journalist Bill Bishop said it well in the subtitle of his 2008 book, The Big Sort: “The clustering of like-minded America is tearing us apart.” The internet doesn’t cause filter bubbles, it punctures them, confronting people with those they are told to fear. The internet does not cause division. It exposes it.

Thus I have argued that one mission for journalism (and, for that matter, social networks) should be to make strangers less strange. At the Tow-Knight Center, I funded research to that end by Caroline Murray and Talia Stroud, who found 25 inspiring projects in newsrooms attempting to do just that; look at their list. I find that work heartening, yet still insufficient.

Journalism is flawed at its core. It is built to seek out, highlight, and exploit — and cause — conflict. Political journalism is engineered to predict, which does nothing to inform the electorate. Instead, in the words of Jay Rosen, it should focus on what is at stake in the choices citizens make. Journalism has done tremendous harm to countless communities that have never trusted its institutions. Journalism — just like the internet companies it criticizes — is built on the economics of attention.

I do not, of course, reject all of journalism. Yes, I criticize The Times and The Post because they have been our biggest and best and we need them to be better. I also praise excellent reporting there and support it with my subscriptions. I think it is important to understand our history sans the sacred rhetoric publishers use to lobby politicians and courts for protection against new competitors, from radio to television to the internet to AI. James Gordon Bennett, the early newspaper titan said to be the father of modern journalism — thus mass media — once said to an upstart in the field: “Young man, ‘to instruct the people,’ as you say, is not the mission of journalism. That mission, if journalism has any, is to startle or amuse.” There are our roots in mass media. Hear Carl Lindstrom writing in The Fading American Newspaper:

In its hunger for circulation it has sought status as a mass medium to the point where it is a hollow attempt to be all things to all men. It has scorned competition as an evil, and cultivated monopoly as a virtue. While claiming a holy mission with constitutional protection, it has left great vacuums of journalistic obligation into which competing mediums have moved with impunity and public acceptance. Today journalism is on the move at an ever-accelerating rate with the daily press showing no apparent concern. This indifference is in accord with its incapacity for relentless self-examination. In this vacant place self-delusion has built itself a nest.

He wrote that in 1960.

There are movements to address the mission void in present-day journalism. I helped start one in Engagement Journalism, with my colleague Carrie Brown. There is Solutions Journalism, Collaborative Journalism, Constructive Journalism, Reparative Journalism, Dialog Journalism, Deliberative Journalism … and others. I would like to bring these various ’ives together in a room to see what links them. I think it will be this: They start with listening.

Journalism is terrible at listening. We train reporters to hit the streets with premade narratives and predictions, looking for quotes to fulfill them. In Engagement Journalism, we teach journalists instead to hear the communities they serve. That does not mean we must listen to every cultist’s crazy theories and fears concocted for media attention. Journalists give them plenty of oxygen already. No, I mean that we need to allow people to be heard regarding their real lives and actual circumstances and concerns. That is a necessary start.

How do we then reimagine journalism built around helping people understand that they can belong to positive communities of understanding and empathy, they can build bridges to other communities through listening and learning, they can find fulfillment in their own identities without excluding or denigrating the identities of others?

A few years ago, I participated in valuable diversity training. In one exercise, our trainer told each of us to reflect on our own cultures. I demurred, saying that I had no culture as I am of boring, generic, white-bread, American, suburban stock. She told me I was wrong. Upon reflection, I saw that she was right. She forced me to recognize the power of the cultural default. I’ve learned that lesson, too, from André Brock, whom I quote in The Gutenberg Parenthesis:

In Distributed Blackness, his trenchant analysis of African American cybercultures … Georgia Tech Professor André Brock Jr. sought to understand Black Twitter on its own terms, not in relation to mass and white media, not in the context of aiming to be heard there. “My claim is ecological: Black folk have made the internet a ‘Black space’ whose contours have become visible through sociality and distributed digital practice while also decentering whiteness as the default internet identity.” That is to say that it is necessary to acknowledge the essential whiteness of mass media as well as the internet. “Despite protestations about color-blindness or neutrality,” Brock wrote, “the internet should be understood as an enactment of whiteness through the interpretive flexibility of whiteness as information. By this, I mean that white folks’ communications, letters, and works of art are rarely understood as white; instead, they become universal and are understood as ‘communication,’ ‘literature,’ and ‘art.’”

Brock helped me see where journalism is “whiteness as information.” So have Wesley Lowery and Lewis Raven Wallace in their criticism of journalistic objectivity (works I assigned and taught every year).

Brock also made me see how the internet has helped me belong. I long was a loner; journalists fancy themselves that: separate, apart (and let’s admit it, above). I live in a town disconnected from many of my neighbors. But on the internet, I have found myself connected with many communities.

Every year in the Engagement Journalism class I had the privilege of teaching with Carrie Brown, we would ask students what communities they belong to. The answers inevitably began with the obvious: “I’m a student.” “I live in Brooklyn.” But then someone might say, “I struggle with mental health issues.” A few students later in the circle, another student would echo that. Thus a connection is made, empathy established, a community enabled. Not all communities are bounded by geography; online, they might exist in any definition, anywhere.

Such conversation and connection can occur only in an environment of trust, but today we live in an environment of distrust — and that is the fault, in great measure, of media and politics manufacturing disconnection and fear. That is what journalism must fight against: a darkness not of information but of the soul. I return to Lenoir and Anderson in Paris:

Technical solutions to political problems are bound to fail. Historical, structural, and political inequality — and especially race, ethnicity, and social difference — needs to be at the forefront of our understanding of politics and, indeed, disinformation. The challenge for researchers, and our field broadly, is to engage in politics by generating ideas and crafting narratives that make people want to live in a more just world, not just a more truthful one.

The same should be said of journalism. How might we do that?

Journalists might see ourselves as conveners of conversation (see, for example, Spaceship Media).

We might see ourselves as educators, defenders of — yes, advocates for — enlightened values of reason, liberty, equality, tolerance, and progress. It is not enough to expose inequality; we must defend equality.

We might see it as our task to build bridges among communities — to make strangers less strange, to help people escape the filter bubbles in their real lives.

We might understand the imperative to fight — not neutrally amplify — the dark forces of hate, fear, and fascism.

We must pay reparations to the communities our institutions have damaged by finally assuring that their stories are told — by themselves — and heard.

We could reject the economics of attention and scale of mass media and rebuild journalism at human scale, valuing our work not through our metrics of audience but instead as the public values us.

As I leave my last job and the last year, I am reflecting on where to turn my attention next. I spent a dozen years at the end of my time in the industry working to make journalism digital, a task that should be self-evident but even so, is far from done. I spent eighteen years in a university exploring new business models for news, though I fear that trying to save established journalism ends in protectionism. My proudest work has been teaching and learning Engagement Journalism and it is there — in listening to communities — where I wish to devote myself.

I also believe it is critical that we understand journalism now in the context of a connected world and call upon other disciplines — history, ethics, psychology, community studies, anthropology, sociology — to understand the internet not as a technology but as a human network. That is the subject of my next book. That is what I have been calling Internet Studies: examining how we interact now and what reimagined and reformed institutions we need to help us do that better. Somewhere in there, I believe, is the essence of a new journalism, a journalism of education, a journalism of belonging.

The post A journalism of belief and belonging appeared first on BuzzMachine.


November 19, 2023

Artificial general bullshit

I began writing this as a report from a useful conference on AI that I just attended, where experts and representatives of concerned sectors of society had serious discussion about the risks, benefits, and governance of the technology.

But, of course, I first must deal with the ludicrous news playing out now at leading AI generator, OpenAI. So let me begin by saying that in my view, the company is pure bullshit. Sam Altman’s contention that they are building “artificial general intelligence” or “artificial superintelligence”: Bullshit. Board members’ cult of effective altruism and AI doomerism: Bullshit. The output of ChatGPT: Bullshit. It’s all hallucinations: Pure bullshit. I even fear that the discussion of AI safety in relation to OpenAI could be bullshit. 

This is not to say that AI and its capabilities as it is practiced there and elsewhere is not something to be taken seriously, even with wonder. And we should take seriously discussion of AI impact and safety, its speed of development and adoption, and its governance. 

These topics were on the agenda of the AI conference I attended at the San Francisco outpost of the World Economic Forum (Davos). Snipe if you will at this fraternity of the rich and powerful; this is one thing the Forum does consistently well: convene multistakeholder conversations about important topics, because people accept their invitations. At this meeting, there were representatives of technology companies, governments, and the academy. I sat next to an honest-to-God philosopher who is leading a program in ethical AI. At last.

I knew I was in the right place when I heard AGI brought up and quickly dismissed. Artificial general intelligence is the purported goal of OpenAI and other boys in the AI fraternity: that they are so smart they can build a machine that is smarter than all of us, even them — a machine so powerful it could destroy humankind unless we listen to its creators. I call bullshit. 

In the public portion of the conference, panel moderator Ian Bremmer said he had no interest in discussing AGI. I smiled. Andrew Ng, cofounder of Google Brain and Coursera, said he finds claims of imminent AGI doom “vague and fluffy…. I can’t prove that AI won’t wipe us out any more than I could prove that radio waves won’t attract aliens that would wipe us out.” Gary Marcus — a welcome voice of sanity in discourse about AI — talked of trying to get Elon Musk to make good on his prediction that AGI will arrive by 2029 with a $100,000 bet. What exactly Musk means by that is no clearer than anything he says. Keep in mind that Musk has also said that by now cars would drive themselves and Twitter would be successful and he would soon (not soon enough) be on his way to Mars. One participant doubted not only the arrival of AGI but said large language models might prove to be a parlor trick.

With that BS out of the way, this turned out to be a practical meeting, intended to bring various perspectives together to begin to formulate frameworks for discussion of responsible use of AI. The first results will be published from the mountaintop in January.

I joined a breakout session that had its own breakouts (life is breakouts all the way down). The circle I sat in was charged with outlining benefits and risks of generative AI. Their first order of business was to question the assignment and insist on addressing AI as a whole. The group emphasized that neither benefits nor risks are universal, as each will fall unevenly on different populations: individuals, organizations (companies to universities), communities, sectors, and society. They did agree on a framework for that impact, asserting that for some, AI could:

raise the floor (allowing people to engage in new skills and tasks to which they might not have had access — e.g., coding computers or creating illustrations);
scale (that is, enabling people and organizations to take on certain tasks much more efficiently); and
raise the ceiling (performing tasks — such as analyzing protein folding — that heretofore were not attainable by humans alone).

On the negative side, the group said AI would:

bring economic hardship;
enable evil at scale (from exploding disinformation to inventing new diseases); and
for some, result in a loss of purpose or identity (see the programmer who laments in The New Yorker that “bodies of knowledge and skills that have traditionally taken lifetimes to master are being swallowed at a gulp. Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it”).

This is not to say that the effects of AI will fit neatly into such a grid, for what is wondrous for one can be dreadful for another. But this gives us a way to begin to define responsible deployment. While we were debating in our circle, other groups at the meeting tackled questions of technology and governance. 

There has been a slew of guidelines for responsible AI — most lately the White House issued its executive order, and tech companies, eager to play a game of regulatory catch, are writing their own. Here are Google’s, these are Microsoft’s, and Meta has its own pillars. OpenAI has had a charter built on its hubristic presumption that it is building AGI. Anthropic is crowdsourcing a “constitution” for AI, filled with vague generalities about AI characterized as “reliable,” “honest,” “truth,” “good,” and “fair.” (I challenge either an algorithm or a court to define and enforce the terms.) Meanwhile, the EU, hoping to lead in regulation if not technology, is writing its AI Act.

Rather than principles or statutes chiseled permanently on tablets, I say we need ongoing discussion to react to rapid development and changing impact; to consider unintended consequences (of both the technology and regulation of it); and to make use of what I hope will be copious research. That is what WEF’s AI Governance Alliance says it will do. 

As I argue in The Gutenberg Parenthesis regarding the internet — and print — the full effect of a new technology can take generations to be realized. The timetable that matters is not so much invention and development but adaptation. As I will argue in my next book, The Web We Weave: Why We Must Reclaim the Internet from Moguls, Misanthropes, and Moral Panic (out from Basic Books next year), this debate must occur less in the context of technology than of humanity, which is why the humanities and social sciences must be in the circle.

At the meeting, there was much discussion about where we are in the timeline of AI’s gestation. Most agreed that there is no distinction between generative AI and AI. Generative AI looks different — momentous, even — to those of us not deeply engaged in the technology because now, suddenly, the program speaks — and, more importantly, can compute — our language. Code was a language; now language is code. Some said that AI is progressing from its beginning, with predictive capabilities, to its current generative abilities, and next will come autonomous agents — as with the GPT store Altman announced only a week before. Before allowing AI agents to go off on their own, we must trust them. 

That leads to the question of safety. One participant at WEF quoted Altman in a recent interview, saying that the company’s mission is to figure out how to make AGI, then figure out how to make it safe, and then figure out its benefits. This, the participant said, is the wrong order. What we need is not to make AI safe but to make safe AI. There was much talk about “shifting left” — not a political manifesto but instead a promise to move safety, transparency, and ethics to the start of the development process, rather than coming to them as afterthoughts. I, too, will salute that flag, but….

I come to believe there is no sure way to guarantee safety with the use of this new technology — as became all too clear to princes and popes at the birth of print. “What is safe enough?” asked one participant. “You give me a model that can do anything, I can’t answer your question.” We talk of requiring AI companies to build in guardrails. But it is impossible for any designer, no matter how smart, to anticipate every nefarious use that every malign actor could invent, let alone every unintended consequence that could arise.

That doesn’t mean we should not try to build safety into the technology. Nor does it mean that we should not use the technology. It just means that we must be realistic in our expectations, not about the technology but about our fellow humans. Have we not learned by now that some people will always find new ways to do bad things? It is their behavior more than technology that laws regulate. As another participant said, a machine that is trained to imitate human linguistic behavior is fundamentally unsafe. See: print. 

So do we hold the toolmaker responsible for what users have it do? I know, this is the endless argument we have about whether guns (and cars and chemicals and nukes) kill people or the people who wield them do. Laws are about fixing responsibility, thus liability. This is the same discussion we are having about Section 230: whom do we blame for “harmful speech” — those who say it, those who carry it, those who believe it? Should we hold the makers of the AI models themselves responsible for everything anyone does with them, as is being discussed in Europe? That is unrealistic. Should we instead hold to account users — like the schmuck lawyer who used ChatGPT to write his brief — when they might not know that the technology or its makers are lying to them? That could be unfair. There was much discussion at this meeting about regulating not the technology itself but its applications.

The most contentious issue at the event was whether large language models should be open-sourced. Ng said he can’t believe that he is having to work so hard to convince governments not to outlaw open source — as is also being bandied about in the EU. A good number of people in the room — I include myself among them — believe AI models must be open to provide competition to the big companies like OpenAI, Microsoft, and Google, which now control the technology; access to the technology for researchers and countries that otherwise could not afford to use it; and a transparent means to audit compliance with regulations and safety. But others fear that bad actors will take open-source models, such as Meta’s LLaMA, and detour around guardrails. But see the prior discussion about the ultimate effectiveness of such guardrails.

I hope that not only AI models but also data sets used for training will be open-sourced and held in public commons. (Note the work of MLCommons, which I learned about at the meeting.) In my remarks to another breakout group about information integrity, I said I worried about our larger knowledge ecosystem when books, newspapers, and art are locked up by copyright behind paywalls, leaving machines to learn only from the crap that is free. Garbage in; garbage multiplied. 

At the event’s opening reception high above San Francisco in Salesforce headquarters, I met an executive from Norway who told me that his nation wants to build large language models in the Norwegian language. That is made possible because — this being clever Norway — all its books and newspapers from the past are already digitized, so the models can learn from them. Are publishers objecting? I asked. He thought my question odd; why would they? Indeed, see this announcement from much-admired Norwegian news publisher Schibsted: “At the Nordic Media Days in Bergen in May, [Schibsted Chief Data & Technology Officer Sven Størmer Thaulow] invited all media companies in Norway to contribute content to the work of building a solid Norwegian language model as a local alternative to ChatGPT. The response was overwhelmingly positive.” I say we need a similar discussion in the anglophone world about our responsibility to the health of the information ecosystem — not to submit to the control and contribute to the wealth of AI giants but instead to create a commons of mutual benefit and control.

At the closing of the WEF meeting, during a report-out from the breakout group working on governance (where there are breakout groups, there must be report-outs; it’s the law), one professor proposed that public education about AI is critical and media must play a role. I intervened (as we say in circles) and said that first journalists must be educated about AI because too much of their coverage amounts to moral panic (as in their prior panics about the telegraph, talkies, radio, TV, and video games). And too damned often, journalists quote the same voices — namely, the same boys who are making AI — instead of the scholars who study AI. The issue of The New Yorker I referenced above has yet another interview with former Google computer scientist Geoffrey Hinton, who has already been on 60 Minutes and everywhere.

Where are the authors of the Stochastic Parrots paper, former Google AI safety chiefs Timnit Gebru and Margaret Mitchell, along with linguists Emily Bender and Angelina McMillan-Major? Where are the women and scholars of color who have been warning of the present-tense costs and risks of AI, instead of the future-shock doomsaying of the AI boys? Where is Émile Torres, who studies the faux philosophies that guide AI’s proponents and doomsayers, which Torres and Gebru group under the acronym TESCREAL? (See the video below.)

The problem is that the press and policymakers alike are heeding the voices of the AI boys who are proponents of these philosophies instead of the scholars who hold them to account. The afore-fired-and-possibly-resurrected Sam Altman gets invited to Congress. When UK PM Rishi Sunak held his AI summit, whom did he invite on stage but Elon Musk, the worst of them. Whom did Sunak appoint to his AI task force but another adherent of these philosophies. 

To learn more about TESCREAL, watch this conversation with Torres that Jason Howell and I had on our podcast, AI Inside, so we can separate the bullshit from the necessary discussion. This is why we need more meetings like the one WEF held, with stakeholders besides AI’s present proponents so we might debate the issues, the risks — and the benefits — they could bring. 

The post Artificial general bullshit appeared first on BuzzMachine.


September 22, 2023

Gibberish from the machine

I’m honored that Germany’s Stern asked me to write about AI and journalism for a 75th anniversary edition. Here’s a version prior to final editing and trimming for print and translation. And I learned a new word: Kauderwelsch (“the variety of Romansch spoken in the Swiss town of Chur (Kauder) in canton Graubünden”) means gibberish.

We have Gutenberg to blame. It is because of his invention, print, that society came to think of public discourse, creativity, and news as “content,” a commodity to fill the products we call publications or lately websites. Journalists believe that their value resides primarily in making content. To fill the internet’s insatiable maw, reporters at some online sites are given content quotas, and their news organizations no longer appoint editors-in-chief but instead “chief content officers.” For the record, Stern still has actual editors, many of them.

And now here comes a machine — generative artificial intelligence or large language models (LLMs), such as ChatGPT — that can create no end of content: text that sounds just like us because it has been trained on all our words. An LLM maps the trillions of relationships among billions of words, turning them and their connections into numbers a computer can calculate. LLMs have no understanding of the words, no conception of truth. They are programmed only to predict the next most likely word to occur in a sentence.
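
To make that concrete, here is a deliberately tiny sketch in Python — my own illustration, with an invented corpus, not how production models are built (they use neural networks over billions of parameters, not word-pair counts) — of what “predict the next most likely word” means:

    from collections import Counter, defaultdict

    # Toy "training": count which word follows which in a tiny, invented corpus.
    corpus = "the court read the brief and the court filed the court order".split()
    followers = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        """Return the statistically most likely next word -- with no notion of truth."""
        options = followers.get(word)
        return options.most_common(1)[0][0] if options else None

    print(predict_next("the"))   # "court" -- chosen by frequency, not by fact

A real model does this across a vast vocabulary and long stretches of context, which is why its output sounds fluent — and why fluency is all it guarantees.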

A New York lawyer named Steven Schwartz had to learn his lesson about ChatGPT’s factual fallibility the hard way. In a now-infamous case, attorney Schwartz asked ChatGPT for precedents in a lawsuit involving an errant airline snack cart and his client’s allegedly injured knee. Schwartz needed to find cases relating to highly technical issues of international treaties and bankruptcy. ChatGPT dutifully delivered more than a half-dozen citations.

As soon as Schwartz’s firm filed the resulting legal brief in federal court, opposing counsel said they could not find the cases, and the judge, P. Kevin Castel, directed the lawyers to produce them. Schwartz returned to ChatGPT. The machine is programmed to tell us what we want to hear, so when Schwartz asked whether the cases were real, ChatGPT said they were. Schwartz then asked ChatGPT to show him the complete cases; it did, and he sent them to the court. The judge called them “gibberish” and ordered Schwartz and his colleagues into court to explain why they should not be sanctioned. I was there, along with many more journalists, to witness the humbling of the attorneys at the hands of technology and the media.

“The world now knows about the dangers of ChatGPT,” the lawyers’ lawyer told the judge. “The court has done its job warning the public of these risks.” Judge Castel interrupted: “I did not set out to do that.” The problem here was not with the technology but with the lawyers who used it, who failed to heed warnings about the dubious citations, who failed to use other tools — even Google — to verify them, and who failed to serve their clients. The lawyers’ lawyer said Schwartz “was playing with live ammo. He didn’t know because technology lied to him.”

But ChatGPT did not lie because, again, it has no conception of truth. Nor did it “hallucinate,” in the description of its creators. It simply predicted strings of words, which sounded right but were not. The judge fined the lawyers $5,000 each and acknowledged that they had suffered humiliation enough in news coverage of their predicament.

Herein lies a cautionary tale for news organizations that are rushing to have large language models write stories — because they want to be cool and trendy, to save work, perhaps to eliminate jobs, and to manufacture ever more content. The news companies CNET and G/O Media have gotten into hot water for using AI to produce content that turned out to be less than factual. America’s largest newspaper chain, Gannett, just turned off artificial intelligence that was producing embarrassing sports stories that would call a football game “a close encounter of the athletic kind.” I have heard online editors plead that they are in a war to produce more and more content to attract more likes and clicks so they may earn more digital advertising pennies. Their problem is that they think their mission is only to make content.

My advice to editors and publishers is to steer clear of large language models for writing the news, except in well-proven use cases, such as turning highly structured financial reports into basic news stories, which must be checked before release. I would give the same advice to Microsoft and Google about connecting LLMs with their search engines. Fact-free gibberish coming out of the machine could ruin the authority and credibility of both news and technology companies — and affect the reputation of artificial intelligence overall.

There are good uses for AI. I benefit from it every day in, for example, Google Translate, Maps, Assistant, and autocomplete. As for large language models, they could be useful to augment — not replace — journalists’ work. I recently tested a new Google tool called NotebookLM, which can take a folder filled with a journalist’s research and summarize it, organize it, and allow the writer to ask questions of it. LLMs could also be used in, for example, language education, where what matters is fluency, not facts. My international students use these programs to smooth out their English for school and work. I even believe LLMs could be used to extend literacy, to help people who are intimidated by writing to communicate more effectively and tell their own stories.

Ah, but therein lies the rub for writers, like me. We believe we are special, that we hold a skill — a talent for writing — that few others can boast. We are storytellers and wield the power to tell others’ tales, to decide what tales are told, who shall be heard in them, and how they will begin and neatly end. We think that gives us the ability to explain the world in what journalists like to call the first draft of history — the news.

Now writers and journalists see both the internet and AI as competition. The internet enables the silent mass of citizens who were not heard in media to at last have their say — and to create a lot of content. And by producing credible prose in seconds, AI devalues writing and robs writers of their special status.

This is one reason why I believe we see hostile coverage of technology in media these days. News organizations and their proprietors claim that Google, Facebook, et al. steal away audience, attention, and advertising money (as if God granted publishers those assets in perpetuity). Journalists are engaged in their latest moral panic — another in a long line of panics over movies, television, comic books, rock lyrics, and video games. They warn about the dangers of the internet, social media, our phones, and now AI, claiming that these technologies will make us stupid, addict us, take away our jobs, and destroy democracy under a deluge of disinformation.

They should calm down. A 2020 study found that in the US no age group “spent more than an average of a minute a day engaging with fake news, nor did it occupy more than 0.2% of their overall media consumption.” The issue for democracy isn’t so much disinformation but the willingness — the eagerness — of some citizens to believe lies that stoke their own fears and hatreds. Journalism should be reporting on the roots of bigotry and extremism rather than simplistically blaming technology.

In my book, The Gutenberg Parenthesis, I track society’s entry into the age of print as we now leave it for the digital age that follows. Print’s development as an institution of authority took time. Not until fifty years after Gutenberg’s Bible, around 1500, did the book take the shape we know today, with titles, title pages, and page numbers. It took another century, a few years either side of 1600, before the technology and its technologists — printers — faded into the background, making way for tremendous innovation with print: the birth of the modern novel with Cervantes, the essay with Montaigne, and the newspaper. A business model for print did not arrive until one century more, in 1710, with the advent of copyright. Come the 1800s, the technology of print — which had hardly changed since Gutenberg — evolved at last with the arrival of steam-powered presses and typesetting machines, leading to the birth of mass media. The twentieth century brought print’s first competitors, radio and television. And here we are today, just over a quarter century past the introduction of the commercial web browser. This is to say that we are likely at just the beginning of a long transition into the digital age. It is only 1480 in Gutenberg years.

In the beginning, rumor was trusted more than print because any anonymous printer could produce a book or pamphlet — just as anyone today can make a web site or tweet. In 1470 — only fifteen years after Gutenberg’s Bible came off the press — Latin scholar Niccolò Perotti made what is said to be the first call for censorship of print. Offended by a bad translation of Pliny, he wrote to the Pope demanding that a censor be assigned to approve all text before it came off the press. As I thought about this, I realized Perotti was not seeking censorship. Instead, he was anticipating the establishment of the institutions of editing and publishing, which would assure quality and authority in print for centuries.

Like Perotti in his day, media and politicians today demand that something must be done about harmful content online. Governments — like editors and publishers — cannot cope with the scale of speech now, so they deputize platforms to police and censor all that is said online. It is an impossible task.

Journalists must be careful using AI to produce the news. At the same time, there is a danger in demonizing the technology. In the best case, the rise of AI might force journalists to examine their role in society, to ask how they improve public discourse. The internet provides them with many new ways to connect with communities, to build relationships of trust and authority with them, to listen to their needs, to discover and share voices too long not heard in the public sphere, to expand the work of journalism past publishing to the wider canvas of the internet.

Journalists think their content is what makes them valuable, and so publishers and their lawyers and lobbyists are threatening to sue AI companies, dreaming of huge payments for machines that read their content. That is no strategy for the future of journalism. Neither is Axel Springer’s plan to replace journalists in content factories with AI. That is not where the value of journalism lies. It lies with reporting on and serving communities. Like Niccolò Perotti, we should anticipate the creation of new services to help internet users cope with the abundance of content today, to verify the truth and falsity of what we see online, to assess authority, to discover more diverse voices, to nurture new talent, to recommend content that is worth our time and attention. Could such a service be the basis of a new journalism for the online, AI age?

The post Gibberish from the machine appeared first on BuzzMachine.


September 11, 2023

A generation later: What have we learned?

The date sneaked up on me this year, attacking from behind. Every year on 9/11 I reflect, grateful that I survived the attack. This year, though, I find myself angry. Some of that might be my own loss: my father to COVID this year; my imminent unemployment.

But I am angry on this 22nd anniversary at what has fallen since: at the authoritarianism that overtook this country and threatens the world, at racism and bigotry set loose, at the pandemic killing still, at my own field — journalism — failing to meet these challenges. 

A generation has passed since 9/11/01 and what have we learned? Authoritarians attacked us that day and now authoritarians attack from within. My failing field — journalism — elevates the evil as if it is merely another side in a spectator sport.

Since 9/11/01, our only popularly elected presidents succeeded in strengthening the nation. Under Biden, the economy & nation are strong. But journalism fails at informing the public and wants to make jet lag an election issue while normalizing the fascism in the house. WTF. 

It was on 9/11/01, on my way to work through the World Trade Center, that I decided it was time to leave my job. I would teach. Now I leave that role and I ask what I have accomplished. I pray my students will turn around journalism, for we, their elders, have failed. 

I am, of course, still grateful to have survived 9/11/01. The images and lessons of that day are seared into my soul and will never leave me; they define me. I regret that the spirit in the nation was perverted into war in Iraq. I worry about the state of politics everywhere. 

But on this day I will try to rise above my anger and remember the names of the souls lost and the faces of the selfless first responders I saw rushing toward danger and mercy. This is a day for memorial and gratitude to them.

The only suitable memorial to those lost on 9/11/01 is to recognize the evil that took them and for our institutions — government, politics, journalism, education — to protect present and future generations from further fascism.

The post A generation later: What have we learned? appeared first on BuzzMachine.


September 6, 2023

Moving on

I have news: I am leaving CUNY’s Newmark Graduate School of Journalism at the end of this term. Technically I’m retiring, though if you know me you know I will never retire. I’m looking at some things to do next and I’m open to others. More on that later. Now, I want to recollect — brag — about my time there.

Eighteen years ago, in 2005, I was the first professor hired at our new school. The New York Times was dubious:

For some old-school journalists, blogging is the worst thing to hit the print medium since, well, journalism school. They may want to avert their eyes today, when Stephen B. Shepard, dean of the new Graduate School of Journalism at the City University of New York, is to name Jeff Jarvis director of the new-media program and associate professor.

On my first day on the job, after attending my first faculty meeting, I quit. I had suggested that faculty needed to learn the new tools of online and digital journalism and some of them jumped down my throat: How dare I tell them what to learn? This festered in me, as things do, and I emailed Steve Shepard and Associate Dean Judith Watson saying that we had made a mistake. I’d already quit my job as president of Advance.net. But, oh well. 

Steve emailed me asking WTF I was doing. That curriculum committee was a temporary body. They weren’t on the faculty of the school. I was. Over lunch, Steve and Judy salved my neuroses and said I could teach that entrepreneurial journalism thing the committee had killed. I stayed. 

Steve took a flier on me. It wasn’t just that I was a blogger and a neurotic but I had only a bachelor’s degree. I’ve always said that I am a poseur in the academy, a fake academic. Nonetheless, I’ve had the privilege of starting three master’s degrees at the school. (Recently, visiting with actual academics at the University of St Andrews, I said I had started three degrees and they looked at me cock-eyed and asked why I hadn’t finished any of them.) 

With Steve, I took our entrepreneurial class and turned it into the nation’s first Advanced Certificate and M.A. in Entrepreneurial Journalism, to prepare journalists to be responsible stewards of our field. The program has been run brilliantly ever since by my colleague Jeremy Caplan, a most generous educator. It has evolved into an online program for independent journalists. 

I’m grateful that our next dean, Sarah Bartlett, also took a flier on involving me in her strategy for growth and we built much together. This week, I’m teaching the fourth cohort in our News Innovation and Leadership executive program. I’d long seen the need for such a degree, so news people would not be corrupted getting MBAs, and so our school, dedicated to diversity, would have an impact not just at the entry level in newsrooms but also on their management. I had to wait to recruit the one person who could build this program, Anita Zielina, and she has done a phenomenal job; she is the leaders’ leader. The program is in great hands with her successor, Niketa Patel. (And I plan to stick around to teach with them in this program after I leave.) 

My proudest accomplishment at the school and indeed in my career has been creating the Engagement Journalism degree in 2014, inspired when Sarah read what I’d written about building relationships with communities as the proper basis of journalism. She asked whether we taught that at the school. Not really, I said. How about a new degree? Cool, I said. We scribbled curricula on napkins. By the end of that week in California we had seed funding from Reid Hoffman, and by that fall we had students in class. I had the great good fortune of hiring, once again, the one person who could build the program, Dr. Carrie Brown, with whom I’ve had the privilege of teaching and learning ever since. She is a visionary in journalism. 

The program is, I’m sad to say, on pause right now. But after having just attended preconferences at AEJMC and ONA on Engagement Journalism, I am gratified to report that the movement is spreading widely. Each gathering was filled with journalists, educators, and community leaders dedicated to centering our work on communities, to building trust through listening and collaboration, to valuing the experience-as-expertise of the public over the tired doctrine of journalistic objectivity, and to repairing the damage journalism has done. I have told our Engagement students that they would be Trojan horses in newsrooms and they have been just that, getting important jobs and reimagining and rebuilding journalism from within.

I am proud of those graduates as I am of those from the executive and Entrepreneurial programs. Since arriving at the school, I have said to each class that I am too old to change journalism. Instead, I would watch and try to help students take on that responsibility. It is wonderful to witness their success. Of course, there is much yet to do. 

Lately, I have turned my attention to internet studies and the wider canvas on which journalism should work in our connected world. What interests me most is bringing the humanities into the discussion of this most human enterprise, which has for too long been dominated (as print was in its first half-century) by the technologists. This is work I hope to continue. 

I love starting things. In my career, I have had the honor of founding Entertainment Weekly at Time Inc., and lots of web sites at Advance. Here I had the great opportunity to help start a school. At the Tow-Knight Center, which I direct, we started communities of practice for new roles in newsrooms; two of these organizations have flown the nest to become independent and sustainable: the News Product Alliance and the Lenfest Institute’s Audience Community of Practice. I’m also proud to have had a small role in helping at the start of Montclair State’s Center for Cooperative Media, which is doing amazing work in Engagement under Stefanie Murray and our alum, Joe Amditis. Those are activities I expected from our Center.

What I had not imagined was that the Center would become an incubator for new degrees. That was made possible by funders. I also never thought that I’d be in the business of fundraising. But without funders’ support, none of these programs would have been born. 

Sarah Bartlett taught me much about raising money, because she’s so good at it. I haven’t heard her say it just this way, but from her I learned that fundraising is about friendship. I am grateful for the friendship of so many supporters of the school and of my work there. 

My friend Leonard Tow challenged Steve and me — with a $3 million challenge grant — when we said we wanted to start a center dedicated to exploring sustainability for news. Emily Tow, who heads the family’s foundation, took us under her wise wing and patiently taught us how to tell our story. It worked. Our friend Alberto Ibargüen, CEO of the Knight Foundation, asked Steve what would make his new school stand apart. Steve said entrepreneurial journalism. Alberto matched the Tows’ grant and the Tow-Knight Center was born. Knight’s Eric Newton was the one who insisted we should make our Entrepreneurial Journalism program a degree and later Jennifer Preston supported our work there. 

As time went on, Len Tow also endowed the Tow Chair in Journalism Innovation, which I am honored to hold. 

When my long-time friend Craig Newmark decided to make it his life’s mission to support journalism (and veterans and cybersecurity and women in tech and pigeons), he generously told me to bring him an idea and thus was born Tow-Knight’s News Integrity Initiative, also supported by my Facebook friends (literally), Áine Kerr, Meredith Carden, and Campbell Brown. Next, Craig most generously endowed the school that now proudly carries his name. His endowment has been a life-saver in the crisis years of the pandemic. His friendship, support, and guidance are invaluable to me. And we love nerding about gadgets. 

I have more friends to thank for their support: John Bracken, way back when he was at the MacArthur Foundation, gave me my first grant to support Entrepreneurial Journalism students’ enterprises. Ford, Carnegie, McCormick, and others contributed to what has added up to — I’m amazed to say — about $53 million in support in which I had a hand. 

And I am grateful for the latest support of the Center, thanks to my friend Richard Gingras of Google. (By way of disclosure, I’ll add that I have not been paid by any technology company.)

I must give my thanks to Hal Straus and Peter Hauck, who worked alongside me — that is to say, tolerated my every inefficiency and eccentricity — managing Tow-Knight, as well as other colleagues (especially Jesenia De Moya Correa), who made possible the convenings the Center brought to the school. The latest were a Black Twitter Summit convened by Meredith Clark, André Brock, Charlton McIlwain, and Johnathan Flowers, and a gathering of internet researchers led by Siva Vaidhyanathan. I have learned so much from such scholars, journalists, technologists, and business and community leaders who have lent their time to the school and the Center.

Finally, I’d like to thank my friend Jay Rosen of NYU, who from the start has taught me much about teaching and scholarship. 

Having subjected you to my Oscar speech, I won’t burden you now with valedictory thoughts on the fate of journalism. That, too, awaits another day. But there’s one more thing I’m grateful for: the opportunity teaching has given me to research and write. I didn’t just blog, to the consternation of our neighbors at The Times, but also got to write books: What Would Google Do? (Harper 2009), Public Parts (Simon & Schuster 2011), and Geeks Bearing Gifts: Imagining New Futures for News (published by the CUNY Journalism Press in 2014). 

I spent the last decade digging into and geeking out about Gutenberg and the vast sweep of media history, leading to The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet, recently published by Bloomsbury Academic. Here is its dedication:

I have another brief work of media history, Magazine, in Bloomsbury’s Object Lessons series, coming out this fall (in which I finally tell my story of the founding of Entertainment Weekly). I have a future book about the internet and media’s moral panic over it — and AI; I just submitted the manuscript to Basic Books. And I have another few books I want to work on after that. So, yes, I’ll be busy.

I do hope to continue teaching — perhaps internet studies or even book and media history — and to get back out speaking and consulting and helping start more things. I’d like a fellowship and would welcome the chance to return to serving on boards. Feel free to ping me if you have thoughts. 

I am grateful for my time at CUNY and the privilege to teach there and wish nothing but the best future for the Newmark School.

The post Moving on appeared first on BuzzMachine.


September 2, 2023

Copyright and AI and journalism

The US Copyright Office just put out a call for comment on copyright and artificial intelligence. It is a thoughtful document based on listening sessions already held, with thirty-four questions on rights regarding inclusion in learning sets, transparency, the copyrightability of generative AI’s output, and use of likeness. Some of the questions — for example, on whether legislation should require assent or licensing — frighten me, for reasons I set forth in my comments, which I offer to the Office in the context of journalism and its history:

I am a journalist and journalism professor at the City University of New York. I write — speaking for myself — in reply to the Copyright Office’s queries regarding AI, to bring one perspective from my field, as well as the context of history. I will warn that precedents set in regulating this technology could impinge on freedom of expression and quality of information for all. I also will share a proposal for an updated framework for copyright that I call creditright, which I developed in a project with the World Economic Forum at Davos.

First, some context from present practice and history in journalism. It is ironic that newspaper publishers would decry AI reading and learning from their text when journalists themselves read, learn from, rewrite, and repurpose each other’s work in their publications every day. They do the same with sources and experts, without remuneration and often without credit. This is the time-honored tradition in the field.

The 1792 US Post Office Act provided for newspapers to send copies to each other for free for the express purpose of allowing them to copy each other, creating a de facto network of news in the new nation. In fact, many newspapers employed “scissors editors” — their actual job title — to cut out stories to reprint. As I recount in my book, The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet (Bloomsbury Academic, 2023, 217), the only thing that would irritate publishers was if they were not credited.

As the Office well knows, the Copyright Act of 1790 covered only books, charts, and maps, and not newspapers or magazines. Not until 1909 did copyright law include newspapers, but even then, according to Will Slauter in Who Owns the News?: A History of Copyright (Stanford University Press, 2019), there was debate as to whether news articles, as opposed to literary features, were to be protected, for they were often anonymous, the product of business interest more than authorship. Thus the definition of authorship — whether by person, publication, or now machine — remains unsettled.

As to Question 1, regarding the benefits and risks of this technology (in the context of news), I have warned editors away from using generative AI to produce news stories. I covered the show-cause hearing for the attorney who infamously asked ChatGPT for citations for a federal court filing. I use that tale as an object lesson for news organizations (and search platforms) to keep large language models far away from any use involving the expectation of facts and credibility. However, I do see many uses for AI in journalism and I worry that the larger technological field of artificial intelligence and machine learning could be swept up in regulation because of the misuse, misrepresentation, factual fallibility, and falling reputation of generative AI specifically.

AI is invaluable in translation, allowing both journalists and users to read news around the world. I have tested Google’s upcoming product, NotebookLM; augmentative tools such as this, used to summarize and organize a writer’s research, could be quite useful in improving journalists’ work. In discussing the tool with the project’s editorial director, author Steven Johnson, we saw another powerful use and possible business model for news: allowing readers to query and enter into dialogue with a publisher’s content. Finally, I have speculated that generative AI could extend literacy, helping those who are intimidated by the act of writing to help tell — and illustrate — their own stories.

In reviewing media coverage of AI, I ask you to keep in mind that journalists and publishers see the internet and now artificial intelligence as competition. In an upcoming book, I assert that media are embroiled in a full-fledged moral panic over these technologies. The arrival of a machine that can produce no end of fluent prose commodifies the content media produce and robs writers of our special status. This is why I teach that journalists must understand that their value is not resident in the commodity they produce, content, but instead in qualities of authority, credibility, independence, service, and empathy.

As for Question 8 on fair use, I am no lawyer, but it is hard to see how reading and learning from text and images to produce transformative works would not be fair use. I worry that if these activities — indeed, these rights — are restricted for the machine as an agent for users, precedent is set that could restrict use for us all. As a journalist, I fear that by restricting learning sets to viewing only free content, we will end up with a problem parallel to that created by the widespread use of paywalls in news: authoritative, fact-based reporting will be restricted to the privileged few who can and choose to pay for it, leaving too much of public discourse vulnerable to the misinformation, disinformation, and conspiracies available for free, without restriction.

I see another potential use for large language models: to provide researchers and scholars with a window on the presumptions, biases, myths, and misapprehensions reflected in the relationships of all the words analyzed by them — the words of those who had the power and privilege of publishing them. To restrict access skews that vision and potentially harms scholarly uses that have not yet been imagined.

The speculation in Question 9, about requiring affirmative permission for any copyrighted material to be used in training AI models, and in Question 10, regarding collective management organizations or legislatively establishing a compulsory licensing scheme, frightens me. AI companies already offer a voluntary opt-out mechanism, in the model of robots.txt. As media report, many news organizations are availing themselves of that option. To legally require opt-in or licensing sets up unimaginable complications.

Such complication raises the barrier to entry for new and open-source competitors and the spectre of regulatory capture — as does discussion in the EU of restricting open-source AI models (Question 25.1). The best response to the rising power of the already-huge incumbent companies involved in AI is to open the door — not close it — to new competition and open development.
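
For concreteness: a minimal sketch of the voluntary opt-out mentioned above, using OpenAI’s documented crawler name, GPTBot, as the example (the other agent names and the URL here are illustrative), showing what a publisher’s robots.txt might say and how a compliant crawler would check it with Python’s standard parser:

    from urllib.robotparser import RobotFileParser

    # What an opt-out might look like for a publisher that allows ordinary
    # crawling but refuses an AI-training crawler.
    robots_txt = """
    User-agent: GPTBot
    Disallow: /

    User-agent: *
    Allow: /
    """

    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())

    print(parser.can_fetch("GPTBot", "https://example.com/story"))         # False: opted out
    print(parser.can_fetch("NewsSearchBot", "https://example.com/story"))  # True: ordinary crawlers unaffected

Nothing compels a crawler to obey this — it is an honor system — but it shows that an opt-out path already exists without a compulsory licensing regime.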

As for Questions 18–21 on copyrightability, I would suggest a different framework for considering both the input and output of generative AI: as an intellectual, cultural, and informational commons, whose use and benefits we cannot yet predict. Shouldn’t policy encourage at least a period of development, research, and experimentation?

Finally, permit me to propose another framework for consideration of copyright in this new age in which connected technologies enable collaborative creation and communal distribution. In 2012, I led a series of discussions with multiple stakeholders — media executives, creative artists, policymakers — for a project with the World Economic Forum in Davos on rethinking intellectual property and the support of creativity in the digital age. In the safe space of the mountains, even entertainment executives would concede that copyright law could be considered outmoded and is due for reconsideration. The WEF report is available here.

Out of that work, I conceived of a framework I call “creditright,” which I write about in Geeks Bearing Gifts (CUNY Journalism Press, 2014) and in The Gutenberg Parenthesis (221–2): “This is not the right to copy text but the right to receive credit for contributions to a chain of collaborative inspiration, creation, and recommendation of creative work. Creditright would permit the behaviors we want to encourage to be recognized and rewarded. Those behaviors might include inspiring a work, creating that work, remixing it, collaborating in it, performing it, promoting it. The rewards might be payment or merely credit as its own reward. I didn’t mention blockchain; but the technology and its automated contracts could be useful to record credit and trigger rewards.” I do not pretend that this is a fully thought-through solution, only one idea to spark discussion on alternatives for copyright.
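
Purely as illustration — this is my own toy sketch, not part of the WEF work or any existing system, and every name and field in it is hypothetical — a creditright ledger might record contributions to a chain of creation so that credit (or payment) could flow back along it:

    from dataclasses import dataclass, field

    @dataclass
    class Contribution:
        contributor: str
        role: str                    # e.g. "created", "remixed", "performed", "promoted"
        work_id: str
        derived_from: list = field(default_factory=list)   # earlier works in the chain

    ledger = [
        Contribution("Author A", "created", "essay-1"),
        Contribution("Musician B", "remixed", "song-2", derived_from=["essay-1"]),
        Contribution("Curator C", "promoted", "playlist-3", derived_from=["song-2"]),
    ]

    def credit_chain(work_id):
        """List everyone owed credit for a work, walking back through its sources."""
        credited = []
        for entry in ledger:
            if entry.work_id == work_id:
                credited.append(f"{entry.contributor} ({entry.role})")
                for parent in entry.derived_from:
                    credited.extend(credit_chain(parent))
        return credited

    print(credit_chain("playlist-3"))
    # ['Curator C (promoted)', 'Musician B (remixed)', 'Author A (created)']

Whether such a record lives on a blockchain, in a database, or in a registry matters less than the principle: credit the chain, not just the copy.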

The idea of creditright has some bearing on your Questions 15–17 on transparency and recordkeeping — what might ledgers of credit in creation look like? — though I am trying to make a larger argument about the underpinnings of copyright. As I have come to learn, 1710’s Statute of Anne was not formulated at the urging of — or to protect the rights of — authors, so much as it was in response to the demands of publishers and booksellers, to create a marketplace for creativity as a tradable asset. Said historian Peter Baldwin in The Copyright Wars: Three Centuries of Trans-Atlantic Battle (Princeton University Press, 2016, 53–6): “The booksellers claimed to be supporting authors’ just and natural right to property. But in fact their aim was to take for themselves what nature had supposedly granted their clients.”

I write in my book that the metaphor of creativity as property — of art as artifact rather than an act — “might be appropriate for land, buildings, ships, and tangible possessions, but is it for such intangibles as creativity, inspiration, information, education, and art? Especially once electronics — from broadcast to digital — eliminated the scarcity of the printed page or the theater seat, one need ask whether property is still a valid metaphor for such a nonrivalrous good as culture.”

Around the world, copyright law and doctrine are being mangled to suit the protectionist ends of those lobbying on behalf of incumbent publishers and producers, who remain flummoxed by the challenges and opportunities of technology, of both the internet and now artificial intelligence. In the context of journalism and news, Germany’s Leistungsschutzrecht or ancillary copyright law, Spain’s recently superseded link tax, Australia’s News Media Bargaining Code, the proposed Journalism Competition and Preservation Act in the US, and lately Canada’s C-18 Online News Act do nothing to protect the public’s interest in informed discourse and, in Canada’s case, will end up harming news consumers, journalists, and platforms alike as Facebook and Google are forced to take down links to news.

I urge the Copyright Office to continue its process of study as exemplified by this request for comments and not to rush into the frenzied discussion in media over artificial intelligence, large language models, and generative AI. It is too soon. Too little is known. Too much is at stake.

The post Copyright and AI and journalism appeared first on BuzzMachine.


August 21, 2023

A few unpopular opinions about AI

In a conversation with Jason Howell for his upcoming AI podcast on the TWiT network, I came to wonder whether ChatGPT and large language models might give all of artificial intelligence cultural cooties, for the technology is being misused by companies and miscast by media such that the public may come to wonder whether they can ever trust the output of a machine. That is the disaster scenario the AI boys do not account for.

While AI’s boys are busy thumping their chests about their power to annihilate humanity, if they are not careful — and they are not — generative AI could come to be distrusted for misleading users (the companies’ fault more than the machine’s); filling our already messy information ecosystem with the data equivalent of Styrofoam peanuts and junk mail; making news worse; making customer service even worse; making education worse; threatening jobs; and hurting the environment. What’s not to dislike?

Below I will share my likely unpopular opinions about large language models — how they should not be used in search or news, how building effective guardrails is improbable, how we already have enough fucking content in the world. But first, a few caveats:

I do see limited potential uses for synthetic text and generative AI. Watch this excellent talk by Emily Bender, one of the authors of the seminal Stochastic Parrots paper and a leading critic of AI hype, suggesting criteria for acceptable applications: cases where language form and fluency matter but facts do not (e.g., foreign language instruction), where bias can be filtered, and where originality is not required.

Here I explored the idea that large language models could help extend literacy for those who are intimidated by writing and thus excluded from discourse. I am impressed with Google’s NotebookLM (which I’ve seen thanks to Steven Johnson, its editorial director), as an augmentative tool designed not to create content but to help writers organize research and enter into dialog with text (a possible new model for interaction with news, by the way). Gutenberg can be blamed for giving birth to the drudgery of bureaucracy and perhaps LLMs can save us some of the grind of responding to it.

I value much of what machine learning makes possible today — in, for example, Google’s Search, Translate, Maps, Assistant, and autocomplete. I am a defender of the internet (subject of my next book) and, yes, social media. Yet I am cautious about this latest AI flavor of the month, not because generative AI itself is dangerous but because the uses to which it is being put are stupid and its current proprietors are worrisome.

So here are a few of my unpopular opinions about large language models like ChatGPT:

It is irresponsible to use generative AI models as presently constituted in search or anywhere users are conditioned to expect facts and truthful responses. Presented with the empty box on Bing’s or Google’s search engines, one expects at least a credible list of sites relevant to one’s query, or a direct response based on a trusted source: Wikipedia or services providing the weather, stock prices, or sports scores. To have an LLM generate a response — knowing full well that the program has no understanding of fact — is simply wrong.

No news organization should use generative AI to write news stories, except in very circumscribed circumstances. For years now, wire services have used artificial intelligence software to generate simple news stories from limited, verified, and highly structured data — finance, sports, weather — and that works because of the strictly bounded arena in which such programs work. Using LLMs trained on the entire web to generate news stories from the ether is irresponsible, for it only predicts words, it cannot discern facts, and it reflects biases. I endorse experimenting with AI to augment journalists’ work, organizing information or analyzing data. Otherwise, stay away.
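
As an aside, the bounded approach just described is essentially template-filling. Here is a rough sketch, with invented figures and wording of my own, of the kind of thing such systems do, as opposed to free-running text generation:

    def earnings_brief(company, quarter, revenue_m, change_pct):
        """Turn one row of verified, structured data into a one-sentence story."""
        direction = "rose" if change_pct >= 0 else "fell"
        return (f"{company} reported revenue of ${revenue_m:,.1f} million for {quarter}, "
                f"which {direction} {abs(change_pct):.1f} percent from a year earlier.")

    # Invented example figures; in practice they come from a vetted data feed.
    print(earnings_brief("Example Corp", "the second quarter", 412.5, -3.2))
    # Example Corp reported revenue of $412.5 million for the second quarter,
    # which fell 3.2 percent from a year earlier.

Every number in the sentence traces to a field in the data; nothing is predicted, which is why this narrow use has worked for years while open-ended generation has not.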

The last thing the world needs is more content. This, too, we can blame on Gutenberg (and I do, in The Gutenberg Parenthesis), for printing brought about the commodification of conversation and creativity as a product we call content. Journalists and other writers came to believe that their value resides entirely in content, rather than in the higher, human concepts of service and relationships. So my industry, at its most industrial, thinks its mission is to extrude ever more content. The business model encourages that: more stuff to fill more pages to get more clicks and more attention and a few more ad pennies. And now comes AI, able to manufacture no end of stuff. No. Tell the machine to STFU.

There will be no way to build foolproof guardrails against people making AI do bad things. We regularly see news articles reporting that an LLM lied about — even libeled — someone. First note well that LLMs do not lie or hallucinate because they have no conception of truth or meaning. Thus they can be made to say anything about anyone. The only limit on such behavior is the developers’ ability to predict and forbid everything bad that anyone could do with the software. (See, for example, how ChatGPT at first refused to go where The New York Times’ Kevin Roose wanted it to go and even scolded him for trying to draw out its dark side. But Roose persevered and led it astray anyway.) No policy, no statute, no regulation, no code can prevent this. So what do we do? We try to hold accountable the user who gets the machine to say bad shit and then spread it, just as we would if you printed out nasty shit on your HP printer and posted it around the neighborhood. Not much else we can do.

AI will not ruin democracy. We see regular alarms that AI will produce so much disinformation that democracy is in peril — see a recent warning from John Naughton of The Guardian that “a tsunami of AI misinformation will shape next year’s knife-edge elections.” But hold on. First, we already have more than enough misinformation; who’s to say that any more will make a difference? Second, research finds again and again that online disinformation played a small role in the 2016 election. We have bigger problems to address about the willful credulity of those who want to signal their hatreds with misinformation and we should not let tropes of techno moral panic distract us from that greater peril.

Perhaps LLMs should have been introduced as fiction machines. ChatGPT is a nice parlor trick, no doubt. It can make shit up. It can sound like us. Cool. If that entertaining power were used to write short stories or songs or poems and if it were clearly understood that the machine could do little else, I’m not sure we’d be in our current dither about AI. Problem is, as any novelist or songwriter or poet can tell you, there’s little money in creativity anymore. That wouldn’t attract billions in venture capital and the stratospheric valuations that go with it whenever AI is associated with internet search, media, and McKinsey finding a new way to kill jobs. As with so much else today, the problem isn’t with the tool or the user but with capitalism. (To those who would correct me and say it’s late-stage capitalism, I respond: How can you be so sure it is in its last stages?)

Training artificial intelligence models on existing content could be considered fair use. Their output is generally transformative. If that is true, then training machines on content would not be a violation of copyright or theft. It will take years for courts to adjudicate the implications of generative AI on outmoded copyright doctrine and law. As Harvard Law Professor Lawrence Lessig famously said, fair use is the right to hire an attorney. Media moguls are rushing to do just that, hiring lawyers to force AI companies to pay for the right to use news content to train their machines — just as the publishers paid lobbyists to get legislators to pass laws to get search engines and social media platforms to pay to link to news content. (See how well that’s working out in Canada.) I am no lawyer but I believe training machines on any content that is lawfully acquired so it can be inspired to produce new content is not a violation of copyright. Note my italics.

Machines should have the same right to learn as humans; to say otherwise is to set a dangerous precedent for humans. If we say that a machine is not allowed to learn, to read, to extract knowledge from existing content and adapt it to other uses, then I fear it would not be a long leap to declaring that we as humans are not allowed to read, see, or know some things. This puts us in the odd position of having to defend the machine’s rights so as to protect our own.

Stopping large language models from having access to quality content will make them even worse. Same problem we have in our democracy: Paywalls restrict quality information to the already rich and powerful, leaving the field — whether that is news or democracy or machine learning — free to bad actors and their disinformation.

Does the product of the machine deserve copyright protection? I’m not sure. A federal court just upheld the US Copyright Office’s refusal to grant copyright protection to the product of AI. I’m just as happy as the next copyright revolutionary to see the old doctrine fenced in for the sake of a larger commons. But the agency’s ruling was limited to content generated solely by the machine and in most cases (in fact, all cases) people are involved. So I’m not sure where we will end up. The bottom line is that we need a wholesale reconsideration of copyright (which I also address in The Gutenberg Parenthesis). Odds of that happening? About as high as the odds that AI will destroy mankind.

The most dangerous prospect arising from the current generation of AI is not the technology, but the philosophy espoused by some of its technologists. I won’t venture deep down this rat hole now, but the faux philosophies espoused by many of the AI boys — in the acronym of Émile Torres and Timnit Gebru, TESCREAL, or longtermism for short — are noxious and frightening, serving as self-justification for their wealth and power. Their philosophizing might add up to a glib freshman’s essay on utilitarianism if it did not also border on eugenics and if these boys did not have the wealth and power they wield. See Torres’ excellent reporting on TESCREAL here. Media should be paying attention to this angle instead of acting as the boys’ fawning stenographers. They must bring the voices of responsible scholars — from many fields, including the humanities — into the discussion. And government should encourage truly open-source development and investment to bring on competitors that can keep these boys, more than their machines, in check.

The post A few unpopular opinions about AI appeared first on BuzzMachine.

