Adam Thierer's Blog
July 26, 2018
The FCC can increase 5G deployment by empowering homeowners
The move to small cells and fixed wireless broadband means states, cities, and the FCC are changing their regulatory approaches. For decades, wireless providers have competed primarily on coverage, which meant building large cell towers all over the country, each one serving hundreds of people. That’s changing. As Commissioner Carr noted,
5G networks will look very different from today’s 4G deployments. 5G will involve the addition of hundreds of thousands of new, small-scale facilities with antennas no larger than a small backpack.
Currently, wireless companies don’t have many good options when it comes to placing these lower-power, higher-bandwidth “small cells.” They typically install small cells and 5G transmitters on public rights-of-way and on utility poles, but there may not be room on poles and attachment fees might be high.
One thing the FCC might consider to stimulate 5G and small cell investment is to dust off its 20-year-old over-the-air-reception-device (OTARD) rules. These little-known rules protect homeowners and renters from unwarranted regulation of TV and broadband antennas placed on their property. If liberalized, the OTARD rules would open up tens of millions of other potential small cell sites–on rooftops, on balconies, and in open fields and backyards around the country.
Background
In the early 1990s, cities and homeowner associations would sometimes prohibit, charge for, or regulate satellite dishes that homeowners or renters installed on their rooftops or balconies. Lawmakers saw a problem and wanted to jumpstart competition in television (cities had authorized cable TV monopolies for decades and cable had over 95% of the pay-TV market).
In the 1996 Telecom Act, then, Congress instructed the FCC to increase TV competition by regulating the regulators. Congress said that state, local, and HOA rules cannot
impair a viewer’s ability to receive video programming services through devices designed for over-the-air reception of television broadcast signals, multichannel multipoint distribution service [MMDS], or direct broadcast satellite services.
With these congressional instructions, the FCC created its OTARD rules, informally known as the “pizza box rule.” Briefly stated, if your TV antenna, satellite TV receiver, or “fixed wireless” antenna is smaller than a large pizza (1 meter diameter–no cell towers in front yards), you are free to install the necessary equipment on property you control, like a yard or balcony. (There are some exceptions for safety issues and historical buildings.) The 1996 law expressly protects MMDS (now called “broadband radio service”), which includes spectrum in the 2.1 GHz, 2.5 GHz, 2.6 GHz, 28 GHz, 29 GHz, and 31 GHz bands. The Clinton FCC expanded the rules to protect, broadly, any antennas that “receive or transmit fixed wireless signals.” You can even install a mast with an antenna that extends up to 12 feet above your roofline.
OTARD reform
The rules protect fixed wireless antennas and could see new life in the 5G world. Carriers are building small cells and fixed wireless primarily to provide faster broadband and “mobile TV” services. Millions of Americans now view their cable and Netflix content on mobile devices and carriers are starting to test mobile-focused pay-TV services. AT&T has Watch TV, T-Mobile is expected to deploy a mobile TV service soon because of its Layer3 acquisition, and reporting suggests that Verizon is approaching YouTube TV and Apple to supply TV for its 5G service.
The FCC’s current interpretation of its OTARD rules doesn’t help 5G and small cell deployment all that much, even though the antennas are small and they transmit TV services. The text of the rules doesn’t say this, but the FCC’s interpretation is that OTARD protections don’t extend to antenna “hubs” (one-to-many transmitters like small cells). The FCC liberalized this interpretation in its Massport proceeding and allowed hub antennas on commercial property, but it did not extend the same treatment to homeowners’ antennas. In short, under the current interpretation, cities and HOAs can regulate, charge for, and prohibit the installation of 5G and small cells on private property.
The FCC should expand its rules to protect the installation of low-power 5G and small cell hubs on private property. This would directly improve, per the statute, “viewers’ ability to receive video programming services” via wireless. It would have the ancillary effect of improving other wireless services. The prospect of installing small cells on private property, even temporarily, should temper the fees carriers are charged to use the public rights-of-way and poles.
In rural areas, the FCC might also consider modifying the rules to allow masts that extend beyond 12 feet above the roofline. Transmitters even a few feet taller would improve wireless backhaul and coverage to nearby homes, thus increasing rural broadband deployment and IP-based television services.
Wireless trends
OTARD reform is especially timely today because the Wheeler and Pai FCCs have freed up several bands of spectrum and fixed wireless is surging. Fixed wireless and mesh network providers using CBRS and other spectrum bands could benefit from more installation sites, particularly in rural areas. C Spire, for instance, is creating “hub homes” for fixed wireless, and Starry and Rise Broadband are expanding their service areas. CableLabs is working on upgrading its network for mobile and 5G backhaul and cable operators might benefit from OTARD reform and more outside infrastructure.
Modifying the OTARD rules might be controversial, but doing so would directly give consumers and homeowners more control over improving broadband service in their neighborhoods, just as the rules improved TV competition in the past. Courts are fairly deferential when agencies change an interpretation of an existing rule. Further, as the agency said years ago:
The Federal Communications Commission has consistently maintained that it has the ultimate responsibility to determine whether the public interest would be served by construction of any specific antenna tower.
The future of wireless services is densification–putting fiber and small cells all over downtowns and neighborhoods in order to increase broadband capacity for cutting-edge services, like smart glasses for the blind and remote-controlled passenger cars. The OTARD rules and the FCC’s authority over wireless antennas provide another tool to improve wireless coverage and services.
July 19, 2018
Who cares about utility poles? Broadband users should.
Though ubiquitous in urban and rural landscapes, utility poles go largely unnoticed. Nevertheless, they play a large role in national broadband policy. Improving pole access won’t generate headlines like billion-dollar spectrum auctions and the repeal of Title II Internet regulations, but it’s just as important for improving broadband competition and investment. To that end, the FCC is proposing to create “one-touch-make-ready” rules for FCC-regulated utility poles across the country. I was pleased to see that the FCC will likely implement this and other policy recommendations from the FCC’s Broadband Deployment Advisory Committee.*
To me, one-touch-make-ready is an example of useful access regulation, and I think it’s likely to succeed at its aims: more broadband competition and investment. Pole access appears to be, to use former FCC chief economist Jerry Faulhaber’s phrase, an efficient market boundary. FCC pole access mandates are feasible because the “interface”–physical wires and poles–is relatively simple, and regulatory compliance–did the entrant damage existing equipment? did it provide notice?–is pretty easy to ascertain. Typically, visual inspection will reveal damage and the liable party is usually obvious.
As the FCC says in the proposed order, these modifications and one-touch-make-ready,
put[] the parties most interested in efficient broadband deployment—new attachers—in a position to control the survey and make-ready processes.
Reasonable people (even on the free-market side) will disagree about how to regulate utility pole access. One-touch-make-ready was a controversial proposal and commercial operators have been divided on the issue. In the end, the vote was not unanimous, but the BDAC reached broad consensus on the issue. In my view, the FCC struck the right balance between protecting existing companies’ equipment and promoting infrastructure construction and competitive entry.
Some utility pole basics: Utility poles are often owned by a phone company, a utility company, or a city. At the top of utility poles are electric lines. (The FCC is not talking about doing work near the electric lines on top, which is trickier and more dangerous for obvious reasons.) The rule changes here affect the “communications space,” which is midway up the poles and typically has one or several copper, coaxial, or fiber lines strung across.
For decades, the “market” for communications space access was highly regulated but stable. National and local policy encouraged monopoly phone service and cable TV provision and, therefore, entrants rarely sought access to string up lines on utility poles. In the 1990s, however, phone and cable were deregulated and competition became national policy. In the last ten years, as the price of fiber broadband provision has fallen and consumer demand for competitive broadband options has increased, new companies–notably Google Fiber–have needed access to utility poles. The FCC notes in its proposed order that, going forward, “small cell” and 5G deployments will benefit from competitive, lower-cost fiber providers.
The pre-2018 approach to pole attachments, wherein many parties had effective veto rights over new entrants, created backlogs and discouraged competitive providers from making the necessary investments. The FCC’s proposed rules streamline the process by creating tighter deadlines for other parties to respond to new entrants. The rules also give new entrants new privileges and greater control in constructing new lines and equipment, so long as they notify existing users and don’t damage existing lines.
I’m pleased to see that the Broadband Deployment Advisory Committee’s recommendations are proving useful to the agency. It’s encouraging that this FCC, by taking a weed-whacker to legacy policies regarding spectrum, pole access, and net neutrality, is taking steps to improve broadband in America.
*I’m the vice chair of the Competitive Access working group.
Related research and commentary:
The Importance of Spectrum Access to the Future of Innovation (pdf)
A Truly ‘Open Internet’ Would Be Free of Burdensome FCC Regulation (NRO)
July 18, 2018
The Challenge of Retraining Workers for an Uncertain Future
The White House has announced a new effort to help prepare workers for the challenges they will face in the future. While it’s a well-intentioned effort, and one that I hope succeeds, I’m skeptical about it for a simple reason: It’s just really hard to plan for the workforce needs of the future and train people for jobs that we cannot possibly envision today.
Writing in the Wall Street Journal today, Ivanka Trump, senior adviser to the president, outlines the elements of a new Executive Order that President Trump is issuing “to prioritize and expand workforce development so that we can create and fill American jobs with American workers.” Toward that end, the Administration plans on:
establishing a National Council for the American Worker, “composed of senior administration officials, who will develop a national strategy for training and retraining workers for high-demand industries.” This is meant to bring more efficiency and effectiveness to federal efforts; as she notes, there are currently “more than 40 workforce-training programs in more than a dozen agencies, and too many have produced meager results.”
“facilitat[ing] the use of data to connect American businesses, workers and educational institutions.” This is meant to help workers find “what jobs are available, where they are, what skills are required to fill them, and where the best training is available.”
launching a nationwide campaign “to highlight the growing vocational crisis and promote careers in the skilled trades, technology and manufacturing.”
The Administration also plans on creating a new advisory board of experts to address these issues, and the administration is also “asking companies and trade groups throughout the country to sign our new Pledge to America’s Workers—a commitment to invest in the current and future workforce.” They hope to encourage companies to take additional steps “to educate, train and reskill American students and workers.”
Perhaps some of these steps make sense, and perhaps a few will even help workers deal with the challenges of our more complex, fast-evolving, global economy. But I doubt it.
The reality is, most worker retraining plans are little better than a dice-roll on the professions and job needs of the future. As I noted in my last book as well as in a paper with Andrea O’Sullivan and Raymond Russell, concerns about automation, AI, and robots taking all our jobs have put worker retraining concerns back in the spotlight in a major way. That has led many scholars, pundits, and policymakers to suggest that more needs to be done to address the skills workers will need going forward.
That impulse is completely understandable. But it doesn’t mean we can magically predict the jobs of the future or what skills workers will need to fill them. It’s not that I am opposed to efforts to try to figure out answers to those questions, or perhaps even craft some programs to try to address them (although I agree with my colleague Matt Mitchell that many past worker training programs “seem indistinguishable from corporate welfare.”) But worker retraining or reskilling usually fails because it’s like trying to centrally plan the economy of the future. It’s a fool’s errand.
In my book, I pointed out that, when you look back at past predictions regarding the job needs of the future we now live in, those predictions were off the mark. The fact is, an “expert” writing in the early 1980s about the job needs of the future didn’t even have the vocabulary to describe or understand the jobs of the technological era we now inhabit. Here’s how I put it in my book:
It’s also worth noting how difficult it is to predict future labor market trends. In early 2015, Glassdoor, an online jobs and recruiting site, published a report on the 25 highest paying jobs in demand today. Many of the job titles identified in the report probably weren’t considered a top priority 40 years ago, and some of these job descriptions wouldn’t even have made sense to an observer from the past. For example, some of those hotly demanded jobs on Glassdoor’s list include software architect (#3), software development manager (#4), solutions architect (#6), analytics manager (#8), IT manager (#9), data scientist (#15), security engineer (#16), quality assurance manager (#17), computer hardware engineer (#18), database administrator (#20), UX designer (#21), and software engineer (#23).
Looking back at reports from the 1970s and ’80s published by the US Bureau of Labor Statistics, the federal agency that monitors labor market trends, one finds no mention of these computing and information technology–related professions because they had not yet been created or even envisioned. So, what will the most important and well-paying jobs be 30 to 40 years from now? If history is any guide, we probably can’t even imagine many of them right now.
Of course, as with previous periods of turbulent technological change, many of today’s jobs and business models will be rendered obsolete, and workers and businesses will need to adjust to new marketplace realities. That transition takes time, but as James Bessen points out in his book Learning by Doing, for technological revolutions to take hold and have a meaningful impact on economic growth and worker conditions, large numbers of ordinary workers must acquire new knowledge and skills. But “that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture.” Luckily, however, history also suggests that, time and time again, society has adjusted to technological change, and the standard of living for workers and average citizens alike has improved at the same time.
Bessen’s point is really important, and too often forgotten in discussions about reskilling for the future. When I think about the sort of skills that I picked up in the early 1980s as a teenager using a clunky old Commodore 128 computer, or that my own teenage kids pick up today just by tinkering with their gadgets (computers, smartphones, gaming consoles, etc.), I think about how those skills were not centrally planned by anyone. It was mostly just learning by doing. A lot of the coding skills people use today were learned by trial and error, without taking any course.
In his book, Bessen uses the example of bank tellers to illustrate how conventional wisdom about future trends is often wildly off the mark. With the rise of ATMs a few decades ago, many thought the days of bank tellers were numbered. But Bessen’s research shows that we have more bank tellers today than we did 40 years ago, because once ATMs could handle the menial tasks of counting and distributing money, the tellers were freed up to do other things.
I’m not saying we can just leave the future of workers to chance and hope everyone can learn on the fly like that. Some government programs will be needed, and many could even help. But let’s not kid ourselves into thinking that we somehow have a crystal ball that we can stare into and, like a technological Nostradamus, somehow divine the jobs and skills of a radically uncertain future.
Our better hope lies in creating an innovation culture that is open to new types of ideas, jobs, and entrepreneurialism. We might better serve the workers of the future by ensuring that they are not encumbered by mountains of accumulated red tape in the form of archaic rules, licenses, permitting schemes, and other obstacles to progress. My colleague Michael Farren also testified last year and offered some concrete near-term reform proposals to help bridge the skills gap by “revis[ing] the federal tax code to allow tax deductions for all forms of productivity-enhancing investments, including investment in training workers to perform new jobs,” and also addressing government aid programs “that might be lowering the supply of workers, thereby contributing to the lack of skilled workers available.”
Glassdoor, “25 Highest Paying Jobs In Demand,” Glassdoor Blog, February 17, 2015, http://www.glassdoor.com/blog/highest....
John Tschetter, “An Evaluation of BLS’ Projections of 1980 Industry Employment,” Monthly Labor Review, August 1984, http://www.bls.gov/opub/mlr/1984/08/a....
James Bessen, Learning by Doing: The Real Connection between Innovation, Wages, and Wealth (New Haven, CT: Yale University Press, 2015), p. 223.
July 12, 2018
AT&T Gets What It Deserves
A government appeal of a court decision approving AT&T’s acquisition of Time Warner is a joke. But maybe it is not surprising when you consider what AT&T management has been up to.
AT&T used to be a powerhouse in Washington. It now can’t seem to lobby its way out of a brown paper bag.
AT&T’s longtime chief representative in Washington—Jim Ciccone—was brilliant. AT&T’s managers and investors have no idea how much Ciccone accomplished on their behalf. His successor—Pat Quinn—was a brilliant regulatory lawyer. Quinn was absolutely the best person who could possibly represent you before the Federal Communications Commission. Unfortunately, Quinn couldn’t see the big picture, and he flamed out as Ciccone’s successor.
I have no idea who represents AT&T in Washington at this time. As a shareholder, I believe AT&T management is negligent.
It is no surprise to me that the Department of Justice is appealing the court decision approving the AT&T/Time Warner merger—because AT&T is AWOL in Washington.
P.S. I want to credit my former boss, former Senator Bob Packwood of Oregon–chairman of the Senate Commerce Committee in the early ’80s–for the brown paper bag metaphor. He didn’t apply it to AT&T, but I think it fits now.
July 11, 2018
The Online Public Sphere or: Facebook, Google, Reddit, and Twitter also support positive communities
In cleaning up my desk this weekend, I chanced upon an old notebook and, as many times before, I began to transcribe the notes. It was short, so I got to the end within a couple of minutes. The last page was scribbled with the German term Öffentlichkeit (public sphere), a couple of sentences on Hannah Arendt, and a paragraph about Norberto Bobbio’s view of public and private.
Then I remembered. Yep. This is the missing notebook from a class on democracy in the digital age.
Serendipitously, a couple of hours later, William Freeland alerted me to Franklin Foer’s newest piece in The Atlantic titled “The Death of the Public Square.” Foer is the author of “World Without Mind: The Existential Threat of Big Tech,” and if you want a good take on that book, check out Adam Thierer’s review in Reason.
Much like the book, this Atlantic piece wades into techno ruin porn but focuses instead on the public sphere:
Nobody designed the public sphere from a dorm room or a Silicon Valley garage. It just started to organically accrete, as printed volumes began to pile up, as liberal ideas gained currency and made space for even more liberal ideas. Institutions grew, and then over the centuries acquired prestige and authority. Newspapers and journals evolved into what we call media. Book publishing emerged from the printing guilds, and eventually became taste-making, discourse-shaping enterprises.
Foer continues, arguing that in recent years this public sphere has been eviscerated by Facebook and Google:
It took centuries for the public sphere to develop—and the technology companies have eviscerated it in a flash. By radically remaking the advertising business and commandeering news distribution, Google and Facebook have damaged the economics of journalism. Amazon has thrashed the bookselling business in the U.S. They have shredded old ideas about intellectual property—which had provided the economic and philosophical basis for authorship.
Philosopher Jürgen Habermas, who is cited throughout the piece, popularized the term Öffentlichkeit, which has been translated into English as public sphere. Habermas used the term to describe not only the “process by which people articulate the needs of society with the state” but also the “public opinion needed to legitimate authority in any functioning democracy.” The public sphere, then, bridges the practices of democracy with mass communication methods like broadcast television, newspapers, and magazines.
While Foer doesn’t explore it fully, the public sphere forms a basis for legitimate authority, which in turn implicates political power.
Nancy Fraser provided the classic critique of the public sphere: even in Habermas’ own conception of the term, countless voices were excluded. “This network of clubs and associations – philanthropic, civic, professional, and cultural – was anything but accessible to everyone,” Fraser explained. “On the contrary, it was the arena, the training ground and eventually the power base of a stratum of bourgeois men who were coming to see themselves as a ‘universal class’ and preparing to assert their fitness to govern.”
In parallel to the public sphere, Fraser observed that numerous counterpublics formed “where members of subordinated social groups invent and circulate counter discourses to formulate oppositional interpretations of their identities, interests, and needs.” And it is through these oppositional interpretations that the public conversation around politics changed. Think about civil rights and the environmental movement, and even deregulation as examples.
Foer might be right to focus on the public sphere, but I’m not sure his analysis goes far enough. He explains:
This assault on the public sphere is an assault on free expression. In the West, free expression is a transcendent right only in theory—in practice its survival is contingent and tenuous. We’re witnessing the way in which public conversation is subverted by name-calling and harassment. We can convince ourselves that these are fringe characteristics of social media, but social media has implanted such tendencies at the core of the culture. They are in fact practiced by mainstream journalists, mobs of the well meaning, and the president of the United States. The toxicity of the environment shreds the quality of conversation and deters meaningful participation in it. In such an environment, it becomes harder and harder to cling to the idea of the rational individual, formulating opinions on the basis of conscience. And as we lose faith in that principle, the public will lose faith in the necessity of preserving the protections of free speech.
But Foer’s lament, if it is about the public sphere, is ultimately about the old friction between the public sphere and counterpublics, in new form. Foer’s worries about theological zealots, demagogic populists, avowed racists, trollish misogynists, filter bubbles, the false prophets of disruption, and invisible manipulation, just to name a few techno-golems, echo the “counter discourses [that] formulate oppositional interpretations” of Fraser.
It is all quite inhumane, yes.
But let’s also remember that Facebook and Google and Reddit and Twitter also support humane counterpublics. Like when chronic pain sufferers find solace on Facebook. Or when widows vent, rage, laugh, and cry without judgment through the Hot Young Widows Club. Let’s also not forget that Reddit, while sometimes a place of rage and spite, is also where a weight lifter with cerebral palsy became a hero and where those with addiction can find healing.
Let’s also not forget that most Americans think these companies have, on the whole, been beneficial in their lives. And that most of us don’t post political content on either Facebook or Twitter. And that, compared to every other source, people are least likely to get their news from social networking sites.
Focusing on democracy and on politics tightens the critical vision, causing us to miss the multiplicities of experiences online. Yet those experiences, those counterpublics are just as representative. They constitute a reality far more real than those constructed by critics.
July 10, 2018
Evasive Entrepreneurialism and Technological Civil Disobedience: Basic Definitions
I’ve been working on a new book that explores the rise of evasive entrepreneurialism and technological civil disobedience in our modern world. Following the publication of my last book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, people started bringing examples of evasive entrepreneurialism and technological civil disobedience to my attention and asked how they were related to the concept of permissionless innovation. As I started exploring and cataloging these cases studies, I realized I could probably write an entire book about these developments and their consequences.
Hopefully that book will be wrapped up shortly. In the meantime, I am going to start rolling out some short essays based on content from the book. To begin, I will state the general purpose of the book and define the key concepts discussed therein. In coming weeks and months, I’ll build on these themes, explain why they are on the rise, explore the effect they are having on society and technological governance efforts, and more fully develop some relevant case studies.
Key Concepts Defined
Evasive entrepreneurs – Innovators who don’t always conform to social or legal norms.
Regulatory entrepreneurs – Innovators who “are in the business of trying to change or shape the law” and are “strategically operating in a zone of questionable legality or breaking the law until they can (hopefully) change it.” (Pollman & Barry)
Technologies of freedom – Devices and platforms that let citizens openly defy (or perhaps just ignore) public policies that limit their liberty or freedom to innovate.
The “pacing problem” – The gap between the ever-expanding frontier of technological possibilities and the ability of governments to keep up with the pace of those changes.
Technological civil disobedience – The technologically-enabled refusal of individuals, groups, or businesses to obey certain laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant.
Innovation arbitrage – The movement of ideas, innovations, or operations to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. It can also be thought of as a form of “jurisdictional shopping” and can be facilitated by “competitive federalism.”
Permissionless innovation – As a general concept, it refers to Rear Admiral Grace Hopper’s notion that quite often, “It’s easier to ask forgiveness than it is to get permission.” As a policy vision, it refers to the idea that experimentation with new technologies and business models should generally be permitted by default. Permissionless innovation comes down to a general acceptance of change and risk-taking.
Themes of the Book
The book documents how evasive entrepreneurs are using new technological capabilities to circumvent traditional regulatory systems, or at least to put pressure on public policymakers to reform or selectively enforce laws and regulations that are outmoded, inefficient, or illogical. Evasive entrepreneurs pursue a strategy of “permissionless innovation” in both the business world and the political arena. In essence, they live out the adage that “it is easier to ask forgiveness than it is to get permission” by creating new products and services without necessarily receiving the blessing of public officials before doing so.
Evasive entrepreneurs are taking advantage of the growth of various technologies of freedom and the corresponding “pacing problem” to create new goods and services or just decide how to live a life of their own choosing. We can think of this phenomenon as “technological civil disobedience.” The technologies of freedom that facilitate this sort of civil disobedience include common tools like smartphones, ubiquitous computing, and various new media platforms, as well as more specialized technologies like cryptocurrencies and blockchain-based services, private drones, immersive tech (like virtual reality), 3D printers, the “Internet of Things,” and sharing economy platforms and services. But that list just scratches the surface.
When innovators and consumers use new tools and technological capabilities to pursue a living, enjoy new experiences, or enhance their lives and the lives of others, they often disrupt legal or social norms in the process. While that can raise serious legal and ethical concerns, evasive entrepreneurialism and technological civil disobedience can have positive upsides for society by:
expanding the range of life-enriching—and even life-saving—innovations available to society;
helping citizens pursue a life of their own choosing—both as creators looking for the freedom to earn a living, and as consumers looking to discover and enjoy important new goods and services; and,
providing a meaningful, ongoing check on government policies and programs that all too often have outlived their usefulness or simply defy common sense.
For those reasons, my book will argue that we should accept—and often even embrace—a certain amount of evasive entrepreneurialism and technological civil disobedience. I am particularly excited by the last point. In an age when many of the constitutional limitations on government power are being ignored or unenforced, innovation itself can act as a powerful check on the power of the state and help serve as a protector of important human liberties. Over the past century, both legislative and judicial “checks and balances” in the United States have been eroded to the point where they now exist mostly in name only. While we should never abandon efforts to use democratic and constitutional means of limiting state power—especially in the courts, where meaningful reforms are still possible—the ongoing evolution of technology can provide another way of keeping governments in line by forcing public officials to constrain their worst tendencies and undo past mistakes. If they fail to, they risk losing the allegiance of their more technologically-empowered citizenry.
But evasive entrepreneurialism and technological civil disobedience can have serious downsides, too. We should explore how to address the challenges associated with this more turbulent and sometimes dangerous world. In doing so, however, technological critics and public policymakers should also appreciate how once any particular innovation genie is out of its bottle, it will be increasingly difficult to stuff it back in. Worse yet, attempts to do so can often result in a “compliance paradox,” in which tighter rules lead to increased legal evasion and intractable enforcement challenges. Thus, more flexible and adaptive technological governance mechanisms will be needed.
In coming essays, I will discuss some prominent examples of these trends that are developed at length in my book. I will also do a deeper dive into some of the interesting ways governments are responding to these developments using what Phil Weiser refers to as “entrepreneurial administration,” or what others call “soft law” mechanisms. As Weiser notes, “[t]he traditional model of regulation is coming under strain in the face of increasing globalization and technological change,” and, therefore, governments must think and act differently than they did in the past. And they are already doing so. Even in an age of expanding evasive entrepreneurialism and technological civil disobedience, governments can shape the evolution of technology. But that cannot be done using the previous era’s technocratic, overly-bureaucratic, and top-down regulatory playbook. New policies and procedures will be needed for a new era.
July 9, 2018
GDPR Compliance: The Price of Privacy Protections
In preparation for a Federalist Society teleforum call that I participated in today about the compliance costs of the EU’s General Data Protection Regulation (GDPR), I gathered together some helpful recent articles on the topic and put together some talking points. I thought I would post them here and try to update this list in coming months as I find new material. (My thanks to Andrea O’Sullivan for a major assist on coming up with all this.)
Key Points:
GDPR is no free lunch; compliance is very costly
All regulation entails trade-offs, no matter how well-intentioned rules are
$7.8 billion estimated compliance cost for U.S. firms already
Punitive fines can reach €20 million or 4 percent of a firm’s global annual revenue, whichever is greater
Vagueness of language leads to considerable regulatory uncertainty — no one knows what “compliance” looks like
Even EU member states do not know what compliance looks like: 17 of 24 regulatory bodies polled by Reuters said they were unprepared for GDPR
GDPR will hurt competition & innovation; favors big players over small
Google, Facebook & others beefing up compliance departments. (EU official Vera Jourova: “They have the money, an army of lawyers, an army of technicians and so on.”)
Smaller firms exiting or dumping data that could be used to provide better, more tailored services
PwC survey found that 88% of companies surveyed spent more than $1 million on GDPR preparations, and 40% more than $10 million.
Before GDPR, half of all EU ad spend went to Google. The first day after it took effect, an astounding 95 percent went to Google.
In essence, with the GDPR, the EU is surrendering on the idea of competition being possible going forward
The law will actually benefit the same big companies that the EU has been going after on antitrust grounds. Meanwhile, the smaller innovators and innovations will suffer.
GDPR likely to raise costs to consumers, or diminish choice/quality
Consumers care about privacy, but they also care about choice, convenience, and low-cost services
The modern data-driven economy has given consumers access to an unparalleled cornucopia of information and services and it is remarkable how much of that content and how many of those services are offered to the public at no charge to them. That’s a real benefit.
But if you take all the data out of the Data Economy, you won’t have much of an economy left
“Many organizations will pass these costs on to consumers either by erecting paywalls or forcing users to view more ads.”
Websites blacked out post GDPR: Instapaper, Los Angeles Times, Chicago Tribune (all Tronc- and Lee Enterprises-owned media platforms), A&E Networks websites.
“EU-only” web experience: stripped-down websites without illustrations or images (NPR and USA Today).
Washington Post is charging for a more expensive GDPR-compliant subscription.
GDPR hurts global flow of information; worsens problem of data localization
Rules only allow data to move to jurisdictions that offer an adequate level of protection
Cloud computing? Cloud architects are building costly new infrastructure that can isolate and inspect EU data to ensure it is not “sent” to the wrong jurisdiction.
Another step toward a more “bordered” Internet
Likely to just create more walled gardens
Max Schrems: “Unfortunately data localization is probably the best solution right now. It’s not really a solution that appeals to me a lot, but I think we need data localization for other reasons anyways, like load times and so on.”
Roundabout way to impose tariffs? Data-based firms are largely external to the EU.
GDPR doesn’t solve bigger problem of government access to data
EU Data Retention Directive: third parties must keep data for law enforcement for up to two years (passed after terrorist attacks).
EU member states often have no FISA-like body overseeing government wiretap requests. France and the UK have no court apparatus governing surveillance — orders are instead issued directly by administrative bodies. In Germany, the FBI equivalent can install a “Federal Trojan” virus directly into third-party platforms without their knowledge.
GDPR doesn’t really move the needle much in terms of real privacy protection
Heavy-handed, top-down regulatory regimes don’t always accomplish their goals when it comes to privacy
What consumers need are new competitive options and privacy innovations
Unfortunately, the world won’t get the new choices it needs if regulations like the GDPR punish new entrants with regulatory compliance costs that only the largest incumbents can possibly absorb
Related Research & Articles:
Oliver Smith, “The GDPR Racket: Who’s Making Money From This $9bn Business Shakedown,” Forbes, May 2, 2018, https://www.forbes.com/sites/oliversmith/2018/05/02/the-gdpr-racket-whos-making-money-from-this-9bn-business-shakedown/#44d6f57b34a2
John Battelle, “How GDPR Kills The Innovation Economy,” NewCoShift, May 25, 2018, https://shift.newco.co/amp/p/844570b70a7a.
Daniel Lyons, “GDPR: Privacy as Europe’s tariff by other means?” AEIdeas, July 3, 2018, https://www.aei.org/publication/gdpr-privacy-as-europes-tariff-by-other-means/
Will Rinehart, “The Law & Economics of ‘Owning Your Data,’” American Action Forum, Insight, April 10, 2018, https://www.americanactionforum.org/insight/law-economics-owning-data/
Adam Thierer, “How Well-Intentioned Privacy Regulation Could Boost Market Power of Facebook & Google,” Technology Liberation Front, April 25, 2018, https://techliberation.com/2018/04/25/how-well-intentioned-privacy-regulation-could-boost-market-power-of-facebook-google/
Andrea O’Sullivan, “The EU’s New Privacy Rules Are Already Causing International Headaches,” Reason, June 12, 2018, https://reason.com/archives/2018/06/12/the-eus-new-privacy-rules-are-already-ca
Alice Calder & Anne Hobson, “Data Privacy at a Price,” Medium, May 25, 2018, https://readplaintext.com/data-privacy-at-a-price-398634622f8b
Daisuke Wakabayashi and Adam Satariano, “How Looming Privacy Regulations May Strengthen Facebook and Google,” New York Times, April 23, 2018,
Sam Schechner and Nick Kostov, “Google and Facebook Likely to Benefit From Europe’s Privacy Crackdown,” Wall Street Journal, April 23, 2018, https://www.wsj.com/articles/how-euro...
Nick Kostov and Sam Schechner, “Google Emerges as Early Winner From Europe’s New Data Privacy Law,” Wall Street Journal, May 31, 2018, https://www.wsj.com/articles/eus-strict-new-privacy-law-is-sending-more-ad-money-to-google-1527759001
Daniel Castro & Michael McLaughlin, “Why the GDPR Will Make Your Online Experience Worse,” Fortune, May 23, 2018, http://fortune.com/2018/05/23/gdpr-co...
Brent Skorup & Jennifer Huddleston Skees, “It’s Not About Facebook; It’s about the Next Facebook,” Real Clear Policy, June 01, 2018, https://www.realclearpolicy.com/articles/2018/06/01/its_not_about_facebook_its_about_the_next_facebook_110654.html
Jennifer Skees & Jordan Reimschisel, “GDPR and Me: how the EU data rules could impact genetic testing,” Medium, May 18, 2018, https://readplaintext.com/gdpr-and-me-how-the-eu-data-rules-could-impact-genetic-testing-851494e55dd3
Kid Vid Rules Ripe for Review
A group of lawmakers is asking the Federal Communications Commission to maintain the agency’s 27-year-old “Kid Vid” rules in their “current form,” rather than open a proceeding to evaluate whether the rules can be improved or are even still necessary.
The rules were enacted by the FCC pursuant to the Children’s Television Act of 1990—in the analog era, when digital technologies were just starting to be deployed, and the same year that initial steps were being taken to privatize the Internet and open it for commercial use. A lot has changed since the Act was passed.
The Act set limits on advertising that exceeded what the networks had been running in the absence of regulation, and it led to “unintended consequences,” including a decline in locally produced children’s programming and an increase in “educationally weaker” network programming, according to a 1998 study.
There are some proposals for common sense reforms, including allowing multicasting stations to satisfy their obligation to air three hours of children’s programming per week either on their main program stream or on one of their newer program streams (which are just as easy to access). Broadcasters currently get no credit for children’s programming that doesn’t run on the main stream, so they have no regulatory incentive to expand their children’s programming.
Another common sense proposal would be to allow regularly scheduled non-weekly series, short series, specials, programs and segments shorter than 30 minutes, and PSAs to count toward the three-hour requirement.
Broadcasters could also be given more scheduling flexibility. Right now, with rare exception, the 30-minute programs that qualify have to run on a weekly basis in the same time slot.
The lawmakers object to these common sense reforms, arguing that low-income families that lack access to pay-TV and online streaming options would be left with fewer opportunities to provide their kids with educational programming. (According to Nielsen data for April to May, households with a child between the ages of two and 17 but without cable or Internet access amounted to just one half of one percent.) But an FCC review of the Kid Vid rules isn’t about reducing opportunities; it’s about adapting regulation to the realities of both the marketplace (more sources of video content) and viewing habits (it’s not just about appointment viewing these days; on-demand viewing and binge watching are also popular) as they have evolved in the 27 years since the rules were adopted.
The FCC is scheduled to consider opening a proceeding to review the Kid Vid rules at its July 12th open meeting later this week.
July 6, 2018
Did The Supreme Court Get The Market Definition Correct In The Amex Case?
The Supreme Court is winding down for the year and last week put out a much-awaited decision in Ohio v. American Express. Some have sounded the alarm over this case, but I think caution is worthwhile. In short, the Court’s analysis wasn’t expansive, as some have claimed, but incomplete. There are a lot of important details to this case, and the guideposts it provides will likely be fought over in future litigation over platform regulation. To narrow the scope of this post, I am going to focus on the market definition question and the issue of two-sided platforms in light of developments in the industrial organization (IO) literature over the past two decades.
Just to review: Amex centers on what are known as anti-steering provisions. These provisions prevent merchants who accept Amex cards from implying a preference for non-Amex cards; dissuading customers from using Amex cards; persuading customers to use other cards; imposing any special restrictions, conditions, disadvantages, or fees on Amex cards; or promoting other cards more than Amex. Importantly, these provisions never limited merchants from steering customers toward debit cards, checks, or cash.
In October 2010, the Department of Justice (DoJ) and several states sued Amex, Visa, and Mastercard over these contract provisions, and Amex was the only one among the three to take it to court. Initially, the District Court ruled in favor of the DoJ and the states, explaining that the credit card platforms should be treated as two separate markets, one for merchants and one for cardholders. In that analysis, the court cleaved off the merchant side and declared the anti-steering provisions anticompetitive under Section 1 of the Sherman Act.
On appeal, the Court of Appeals for the Second Circuit reversed that decision because “without evidence of the [anti-steering provisions’] net effect on both merchants and cardholders, the District Court could not have properly concluded that the [provisions] unreasonably restrain trade in violation” of Section 1 of the Sherman Act. The Department of Justice petitioned the Appeals Court to reconsider the case en banc, but that request was rejected, and the case then headed to the Supreme Court.
The Supreme Court agreed with this two-sided theory, as “credit-card networks are best understood as supplying only one product—the transaction—that is jointly consumed by a cardholder and a merchant.” Even though the DoJ was able to show that the provisions did increase merchant fees, “evidence of a price increase on one side of a two-sided transaction platform cannot, by itself, demonstrate an anticompetitive exercise of market power.” To prevail, the DoJ would have to prove that Amex increased the cost of credit-card transactions above a competitive level, reduced the number of credit-card transactions, or otherwise stifled competition in the two-sided credit-card market.
The decision only briefly mentions why this is important, so consider a platform with two sides, users and advertisers. If users experience an increase in price or a reduction in quality, they are likely to exit or use the platform less. Advertisers are on the platform’s other side because it lets them reach users, so in response to the decline in user participation, advertiser demand will drop even if ad prices stay constant. The result echoes back: when advertisers drop out, the total amount of content also recedes, and user demand falls because the platform is less valuable to them. Demand is tightly integrated between the two sides of the platform. Changes in user and advertiser preferences have outsized effects because each side responds to the other. In other words, small changes in price or quality tend to be far more effective at chasing off both groups from a platform than from a one-sided good. In the economic parlance, these are called demand interdependencies: the demand on one side of the market is interdependent with the demand on the other. Research on magazine price changes confirms this theory.
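To make the feedback loop concrete, here is a minimal formal sketch of demand interdependence (my own illustration, not anything drawn from the opinion), assuming linear demand on each side:

$$D_U = a_U - b_U p_U + \gamma_U D_A, \qquad D_A = a_A - b_A p_A + \gamma_A D_U$$

Here $D_U$ and $D_A$ are user and advertiser demand, $p_U$ and $p_A$ are the prices each side faces, and the $\gamma$ terms capture how much each side values the other side’s participation. Substituting one equation into the other gives

$$D_U = \frac{a_U - b_U p_U + \gamma_U (a_A - b_A p_A)}{1 - \gamma_U \gamma_A}$$

so a shock to either side is amplified by the multiplier $1/(1 - \gamma_U \gamma_A)$, which exceeds one whenever both cross-side effects are positive. When one $\gamma$ is near zero, the multiplier collapses toward one and the market behaves like an ordinary one-sided market, which is essentially the newspaper scenario the Court describes below.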
Over the last two decades, economics has been adapting to the insights and challenges of two-sided markets. In the case of a one-sided business, like a laundromat or a mining company, there is a single set of downstream or upstream customers, so demand is fairly straightforward. But platforms are more complex: value must be balanced across the different participants, which leads to demand interdependencies.
In an article cited in the decision, economists David Evans and Richard Schmalensee explained the importance of integrating these interdependencies into competition analysis: “The key point is that it is wrong as a matter of economics to ignore significant demand interdependencies among the multiple platform sides” when defining markets. If they are ignored, the typical analytical tools will yield incorrect assessments.
While it didn’t employ the language of demand interdependencies, the Court did agree with that general assessment:
To be sure, it is not always necessary to consider both sides of a two-sided platform. A market should be treated as one sided when the impacts of indirect network effects and relative pricing in that market are minor. Newspapers that sell advertisements, for example, arguably operate a two-sided platform because the value of an advertisement increases as more people read the newspaper. But in the newspaper-advertisement market, the indirect networks effects operate in only one direction; newspaper readers are largely indifferent to the amount of advertising that a newspaper contains. Because of these weak indirect network effects, the market for newspaper advertising behaves much like a one-sided market and should be analyzed as such.
Why does this bit matter?
In a piece in the New York Times in April, law scholar Lina Khan worried that this case would “effectively [shield] big tech platforms from serious antitrust scrutiny.” Law professor Tim Wu followed up with an op-ed just this past week in the Times expressing a similar concern:
To reach this strained conclusion, the court deployed some advanced economics that it seemed not to fully understand, nor did it apply the economics in a manner consistent with the goals of the antitrust laws. Justice Stephen Breyer’s dissent mocks the majority’s economic reasoning, as will most economists, including the creators of the “two-sided markets” theory on which the court relied. The court used academic citations in the worst way possible — to take a pass on reality.
Respectfully, I have to disagree with Wu’s assessment and Khan’s worries. Both Google and Facebook more evidently fall into the newspaper category than the payments category under the majority’s opinion. Moreover, the opinion didn’t define what “weak indirect network effects” actually means in practice, so this case doesn’t leave Google and Facebook off the hook by any means.
How the Court reached that conclusion is worth exploring, however.
In contrast to newspapers, credit card payment platforms “cannot make a sale unless both sides of the platform simultaneously agree to use their services,” so “two-sided transaction platforms exhibit more pronounced indirect network effects and interconnected pricing and demand.” The Court seems to connect two-sidedness with this simultaneity requirement. On this front, Wu is correct: the justices didn’t seem to fully understand the economic reasoning. It isn’t the simultaneous nature of credit cards that makes them two-sided markets, but their demand interdependencies. Newspapers also have strong demand interdependencies even though they lack the simultaneity of credit cards. Yet the Court was correct in defining the market as a transactional one, where cardholders and merchants are intimately connected.
That being said, Breyer’s economic reasoning isn’t any sharper than the majority’s:
But while the market includes substitutes, it does not include what economists call complements: goods or services that are used together with the restrained product, but that cannot be substituted for that product. See id., ¶565a, at 429; Eastman Kodak Co. v. Image Technical Services, Inc., 504 U. S. 451, 463 (1992). An example of complements is gasoline and tires. A driver needs both gasoline and tires to drive, but they are not substitutes for each other, and so the sale price of tires does not check the ability of a gasoline firm (say a gasoline monopolist) to raise the price of gasoline above competitive levels. As a treatise on the subject states: “Grouping complementary goods into the same market” is “economic nonsense,” and would “undermin[e] the rationale for the policy against monopolization or collusion in the first place.” 2B Areeda & Hovenkamp ¶565a, at 431.
Here, the relationship between merchant-related card services and shopper-related card services is primarily that of complements, not substitutes. Like gasoline and tires, both must be purchased for either to have value. Merchants upset about a price increase for merchant related services cannot avoid that price increase by becoming cardholders, in the way that, say, a buyer of newspaper advertising can switch to television advertising or direct mail in response to a newspaper’s advertising price increase.
Breyer makes a bit of a mess of demand complementarity. It isn’t the case that “both must be purchased for either to have value.” That is perfect complementarity, which is rare. Rather, when the price of gasoline increases, the demand for tires is likely to decrease as well. But the relationship doesn’t need to run the other way: when the price of tires decreases, the demand for gasoline doesn’t typically inch up. This kind of asymmetric demand relationship is counter to the kind of relationship on platforms, where demand is linked on both sides.
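Put in cross-price terms (my gloss on the paragraph above, not Breyer’s formulation), the asymmetry looks like this:

$$\frac{\partial D_{\text{tires}}}{\partial p_{\text{gas}}} < 0, \qquad \frac{\partial D_{\text{gas}}}{\partial p_{\text{tires}}} \approx 0$$

On a transaction platform, by contrast, both cross-effects are substantial: raising the price charged to merchants suppresses cardholder participation, and raising the price charged to cardholders suppresses merchant participation. Treating the two sides as ordinary complements misses that two-way linkage.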
Still, Breyer buries the lede. Attributing a price increase to firms in the tire market might be wrong if demand fluctuations in the adjacent gasoline market partially caused those price changes. In other words, the reason complementary demand matters in the first place is to ensure that the court’s analysis is correct. Going back to Evans and Schmalensee, “The key point is that it is wrong as a matter of economics to ignore significant demand interdependencies among the multiple platform sides” when defining markets. Ignore them and you get the assessments wrong.
To his credit, Breyer does rightly point out the thin definition offered by the majority:
I take from that definition that there are four relevant features of such businesses on the majority’s account: they (1) offer different products or services, (2) to different groups of customers, (3) whom the “platform” connects, (4) in simultaneous transactions.
Having simultaneous transactions isn’t the defining feature of two-sidedness, and if the lower courts come to rely on this feature to define platforms, then some assessments of competitive effects are likely to be wrong.
Amex offers up a lot for the antitrust community to consider, but in key ways the decision is incomplete. Importantly, the Court didn’t address the validity of many new analytical tools that have popped up in the past decade to understand platform market power. Take a quick glance at the papers cited in the majority opinion and you will notice how many of the references date from after 2010, when this case was first brought. In other words, Amex hardly shuts the door on future litigation.
June 28, 2018
What We Learn From Past Government-Imposed Corporate Breakups Is That They Don’t Work
Voices from all over the political and professional spectrum have been clamoring for tech companies to be broken up. Machine learning pioneer Yoshua Bengio, NYU professor Scott Galloway, and even Marco Rubio’s 2016 presidential digital director have all suggested that tech companies should be forcibly separated. So I took a look at some of the past efforts in a new survey of corporate breakups and found that they really weren’t all that effective at creating competitive markets.
Although many consider Standard Oil and AT&T as classic cases, I think United States v. American Tobacco Company is far more instructive.
Like Standard Oil, the American Tobacco Company was organized as a trust and came to acquire nearly 75 percent of the total market by buying both the Union Tobacco Company and the Continental Tobacco Company. But unlike Standard Oil, as soon as these companies were bought, they were integrated within American Tobacco. In 1908 the federal government filed, and eventually won, a lawsuit under the Sherman Act that dissolved the trust into three companies which, in theory, matched the originals.
Yet the breakup wasn’t as easy as simply splitting the larger company back into its original three companies, since the successor companies had intertwined processes. A single purchasing department managed all leaf purchasing. Processing plants had been assigned to specific products without any concern for their previous ownership. Over eight months of tense negotiations, the government pulled apart factories, distribution and storage facilities, and name brands. Office by office, the company was taken apart by government fiat.
Historian Allan M. Brandt had this to say in The Cigarette Century:
It was one thing to identify monopolistic practices and activities in restraint of trade, and quite another to figure out how to return the tobacco industry to some form of regulated competition. Even those who applauded the breakup of American Tobacco soon found themselves critics of the negotiated decree restructuring the industry. This would not be the last time that the tobacco industry would successfully turn a regulatory intervention to its own advantage.
While some might think that breaking up companies would be a clean operation, American Tobacco suggests the opposite. And I’m not alone in this assessment. Here is what Robert Crandall had to say a couple of years back in a piece for the Brookings Institution:
[W]ith one exception, the breakup of AT&T in 1984, there is very little evidence that such relief is successful in increasing competition, raising industry output, and reducing prices to consumers. The exception turns out to be a case of overkill because the same results could have been obtained through a simple regulatory rule, obviating the need for vertical divestiture of AT&T.
In other words, this method simply does not achieve competitive markets.
If you’re interested in the longer piece, you can find it over at American Action Forum.