Adam Thierer's Blog, page 28

March 5, 2018

Governing Virtual Reality Social Spaces

“You don’t gank the noobs,” my friend’s brother explained to me, growing angrier as he watched a high-level player repeatedly stalk and then cut down my feeble, low-level night elf cleric in the massively multiplayer online roleplaying game World of Warcraft. He logged on to the server as his “main,” a high-level gnome mage, and went in search of my killer, carrying out two-dimensional justice. What he meant by his exclamation was that players have developed a social norm banning the “ganking,” or killing, of low-level “noobs” just starting out in the game. He reinforced that norm by punishing the overzealous player with premature annihilation.


Ganking noobs is an example of undesirable social behavior in a virtual space, on par with cutting people off in traffic or jumping ahead in line. Punishments for these behaviors take a variety of forms, from honking, to verbal confrontation, to virtual manslaughter. Virtual reality social spaces, defined as fully artificial digital environments, are the newest medium for social interaction. Increased agency and a sense of physical presence within a VR social world like VRChat allow users to more intensely experience both positive and negative situations, thus reopening the discussion of how best to govern these spaces.



When the late John Perry Barlow, a co-founder of the Electronic Frontier Foundation, published “A Declaration of the Independence of Cyberspace” in 1996, humanity stood on the frontier of an online world bereft of physical borders and open to new emergent codes of conduct. He wrote, “I declare the global social space we are building to be naturally independent of the tyrannies [governments] seek to impose on us.” He also stressed the role of “culture, ethics and unwritten codes” in governing this new society, where the First Amendment served as the law of the virtual land. Yet Barlow’s optimism about the capacity of users to build a better society online stands in stark contrast to current criticisms of social platforms as cesspools of misinformation, extremism, and other forms of undesirable behavior.


As a result of VRChat’s largely open-ended design and its wide user base drawn from the PC and headset gaming communities, there is a broad spectrum of user behavior. On one hand, users have experienced virtual sexual harassment and incessant trolling by mobs of poorly rendered “echidnas” consistent with the Ugandan Knuckles meme. On the other, VRChat is also a source of creativity and positive experiences, including collective concerts and dance parties. When a player suffered a seizure in VRChat, players stopped and waited to make sure he was okay and sanctioned other players who tried to make fun of the situation. VRChat’s response to social discord provides a good example of governance in virtual spaces and of how layers of governance interact to improve user experiences.


Governance is the process of decision-making among stakeholders involved in a collective problem that leads to the production of social norms and institutions. In virtual social spaces such as VRChat, layers of formal and informal governance are setting the stage for norms of behavior to emerge. The work of political scientist Elinor Ostrom provides a framework through which to understand the evolution of rules to solve social problems. In her research on governing common resources, she emphasized the importance of including multiple stakeholders in the governing process, instituting mechanisms for dispute resolution and sanctioning, and making sure the rules and norms that emerge are tailored to the community of users. She wrote, “building trust in one another and developing institutional rules that are well matched to the ecological systems being used are of central importance for solving social dilemmas.” Likewise, the governance structures that emerge in VRChat are game-specific and depend on the enforcement of explicit formal and informal laws, physical game design characteristics, and the social norms of users. I delve into each layer of governance in turn.


At the highest level, the U.S. government has passed formal laws and policies that affect virtual social spaces. For example, the Computer Fraud and Abuse Act governs computer-related crimes and prohibits unauthorized access to users’ accounts. Certain types of content, such as child pornography, are illegal under federal law. At the intersection of VR video games and intellectual property law, publicity rights govern the permissions process for using celebrities’ likenesses in an avatar. Trademark and copyright laws determine limitations on what words, phrases, symbols, logos, videos, or music can be reproduced in VR and what is considered “fair use.”


Game designers and gaming platforms can also employ an explicit code of conduct that goes beyond formal federal laws and policies. For example, VRChat’s code of conduct details proper mic etiquette and includes rules about profanity, sexual conduct, self-promotion, and discrimination. Social platforms rely on a team of enforcers: VRChat has a moderation team that monitors virtual worlds constantly, external reviewers look at flagged content, and in-game bouncers monitor behavior in real time and remove the bad eggs.


By virtue of their technical decisions, game designers also govern the virtual spaces they create. For example, the design decision to put a knife or a banana in a VR social space will affect how users behave. VRChat has virtual presentation rooms, courtrooms, and stages that prompt users to do anything from singing to stand-up comedy to prosecuting other users in mock trials. Furthermore, game designers can include in-game mechanisms that empower users to flag inappropriate behavior or mute obnoxious players, functions that exist in VRChat, as the sketch below illustrates.
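To make that kind of mechanism concrete, here is a minimal sketch of a mute-and-flag system in Python. It is purely illustrative: the class names, threshold, and escalation logic are my assumptions, not VRChat’s actual implementation.

```python
# Hypothetical sketch of an in-game self-moderation mechanic.
# All names and thresholds are illustrative, not VRChat's real design.

REPORT_THRESHOLD = 3  # distinct reporters needed to escalate to human moderators

class Player:
    def __init__(self, name):
        self.name = name
        self.muted = set()        # players this user has silenced locally
        self.reported_by = set()  # distinct users who have flagged this player

    def mute(self, other):
        """Local sanction: only this user stops hearing `other`."""
        self.muted.add(other.name)

    def report(self, other, moderation_queue):
        """Community sanction: enough distinct reports escalate to review."""
        other.reported_by.add(self.name)
        if len(other.reported_by) >= REPORT_THRESHOLD:
            moderation_queue.append(other.name)

# Usage: three users mute and report a troll; the third report escalates it.
queue = []
troll = Player("troll")
for reporter in (Player("a"), Player("b"), Player("c")):
    reporter.mute(troll)
    reporter.report(troll, queue)
print(queue)  # ['troll'] is now in the moderation team's review queue
```

The design point is that muting is a bottom-up, user-level sanction, while the report threshold hands persistent problems up to the platform’s formal moderation layer, mirroring the layered governance described above.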


Earning a reputation for malfeasance and poor user experience is bad business for VRChat, so the company recently re-envisioned its governance approach. It acknowledged the task in an open letter to its users: “One of the biggest challenges with rapid growth is trying to maintain and shape a community that is fun and safe for everyone. We’re aware there’s a percentage of users that choose to engage in disrespectful or harmful behavior…we’re working on new systems to allow the community to better self-moderate and for our moderation team to be more effective.” The memo detailed where users could provide feedback and ideas to improve VRChat, suggesting that users can be actively involved in the rule-making process.


In her Nobel Prize lecture, Elinor Ostrom criticized the oft-made assumption that enlightened policymakers or external designers should be the ones “to impose an optimal set of rules on individuals involved.” Instead, she argued that the self-reflection and creativity of the users within a game could serve “to restructure their own patterns of interaction.” The resulting social norms are a form of governance at the most local level.


Ostrom’s framework demonstrates that good social outcomes emerge through our collective actions, which are influenced by top-down formal rules from platforms and bottom-up norms from users. The goal of stakeholders involved in social VR should be to foster the development of codes of conduct that bring out the best in humanity. Governance in virtual worlds is a process, and players in social spaces have a large role to play. Are you ready for that responsibility, player one?


February 20, 2018

Doomed to fail: “net neutrality” state laws

Internet regulation advocates lost their fight at the FCC, which voted in December 2017 to rescind the 2015 Open Internet Order. Regulation advocates have now taken their “net neutrality” regulations to the states.


Some state officials–via procurement contracts, executive order, or legislation–are attempting to monitor and regulate traffic management techniques and Internet service provider business models in the name of net neutrality. No one, apparently, told these officials that government-mandated net neutrality principles are dead in the US.


As the litigation over the 2015 rules showed, our national laissez-faire policy toward the Internet and our First Amendment gut any attempt to enforce net neutrality. Recall that the 1996 amendments to the Communications Act announce a clear national policy about the Internet:


It is the policy of the United States . . . to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.


In fact, that 1996 law was passed in order to encourage ISPs to filter objectionable content.


Further, regulators cannot prevent ISPs from exercising their First Amendment rights to curate the Internet. As Prof. Stuart Minor Benjamin wrote for the Harvard Law Review Forum in 2014,


If we really want to prevent Internet access providers from being speakers, we are going to have to radically reshape the Supreme Court’s First Amendment jurisprudence and understandings.


No radical reshaping of the First Amendment has occurred. For all these reasons, the Obama FCC’s attorney was forced to concede that


If they [that is, ISPs] filter the Internet . . . the [2015 Open Internet] rules don’t apply to them. 


Even Title II supporters EFF and the ACLU acknowledge in their FCC joint filing that ISPs are speakers who can filter content and escape Title II regulation.



ACLU & EFF throw in the towel—ISPs may block content under the 1st Amend. & are exempt from Title II. #netneutrality https://t.co/xU7uWfPIn4 pic.twitter.com/ZrXjs8bUtS


— Brent Skorup (@bskorup) September 1, 2017



At the end of the day, net neutrality, having lost its original definition, is simply a re-branding of Internet regulation.


State Internet regulations, therefore, are at odds with federal law and policy. Let’s set aside federal preemption for the moment (Seth Cooper explained why preemption likely kills most of these state Internet regulations). There are other arguments for why states can’t pass baby “net neutrality” laws.


Net neutrality bills likely violate the law


The state “net neutrality” bills and executive orders represent common carriage regulation. State officials make no attempt to hide this since they largely copy-and-paste the nondiscrimination obligations directly from the 2015 Open Internet Order. Here’s the problem for states: regulators can’t impose common carrier obligations on non-common carriers.


When nondiscrimination principles deprive operators of control of content, that amounts to common carriage. This was established in a 1979 Supreme Court case, Midwest Video II. In that case, the Supreme Court struck down common carriage obligations on cable operators, who are non-common carriers. The Court said,


With its access rules, however, the Commission has transferred control of the content of access cable channels from cable operators to members of the public who wish to communicate by the cable medium. …The access rules plainly impose common-carrier obligations on cable operators.


The FCC, the Court said, had no authority to transform them into common carriers.


In fact, this is why the 2010 Open Internet Order was struck down in Verizon v. FCC. There, relying on Midwest Video II, the DC Circuit held that the net neutrality principles couldn’t be enforced on non-common carriers.


State “net neutrality” regulations will likely fail for the same reason. The 2015 rules were upheld because “broadband Internet access service” was classified as a Title II common carrier service. “Broadband Internet access service” providers will no longer be common carriers once the 2017 Restoring Internet Freedom Order takes effect. By imposing common carrier rules on non-common carriers, states run afoul of Midwest Video II and Verizon.


Net neutrality bills balkanize the Internet


State-based Internet regulation is also bad policy, and many who support net neutrality principles–like Google–oppose this legal regime. Internet regulation advocates, by encouraging regulation state-by-state and city-by-city, have finally dispensed with the fiction that “net neutrality” is about the “open Internet.” In their eagerness to have someone, anyone regulate the Internet, these advocates are willing to balkanize the US Internet into dozens, or even hundreds, of splinternets, each with a different local or state regulator.


The Montana governor, for instance, encouraged every state and city to regulate the Internet, even providing a customizable template:



Any city or state can do this. We made you a template: https://t.co/yYgQTWdat1 #NetNeutrality https://t.co/dz3BFiN0uN


— Steve Bullock (@GovernorBullock) January 22, 2018



Further, net neutrality rules are not easy to apply and interpret, particularly the “catch-all” Internet conduct standard. Net neutrality supporters take vastly different stances on identical ISP conduct.


One illustration: the common practice of zero rating by mobile providers. One prominent net neutrality supporter (then-FCC chairman Tom Wheeler) said T-Mobile’s zero rating was “highly innovative and highly competitive.” Another (Prof. Susan Crawford) said it is “anti-competitive,” “dangerous,” and “malignant” and should be ended immediately. There were many advocates in both camps and everywhere in between.


Given the wide divergence of views on a single issue, dozens of “net neutrality” laws would create innumerable contradictions about what is allowed and disallowed online. The fragmented Internet and legal uncertainty would be particularly damaging to small app companies and competitive ISPs, who don’t have hallways of lawyers to ensure compliance, and who use or plan to use traffic priority techniques for gaming, disability services, VoIP, and driverless cars.


For the global, stateless Internet, having state and city CIOs create their own custom Internet regulation interpretations would destroy what made the Internet transformative–a permissionless, global network free of legacy regulations. State legislatures and governors, by ramming through “net neutrality,” are committing to waste countless taxpayer dollars in battling the federal government and telecom companies in (probably unwinnable) litigation. Their “best-case” scenario: a few states win in court and splinter the Internet.


Hopefully cooler heads will prevail and put state energies and treasure into doing something constructive about broadband, like urging reform of the $8.8 billion universal service fund or improving permitting processes and competition.


February 13, 2018

Autonomous Vehicles Aren’t Just Driverless Cars: 5 Thoughts About the Future of Autonomous Buses

Autonomous cars have been discussed rather thoroughly recently, and at this point it seems a question of when and how, rather than if, they will become standard. But as this issue starts to settle, new questions about the application of autonomous technology to other types of transportation are becoming ripe for policy debates. While a great deal of attention has focused on the potential to revolutionize the trucking and shipping industries, not as much attention has been paid to how automation may help improve both intercity and intracity bus travel or other public and private transit like trains. Recent requests for comment from the Federal Transit Administration show that policymakers are starting to consider these other modes of transit in preparing their next recommendations for autonomous vehicles. Here are 5 issues that will need to be considered for an autonomous transit system.




1. Establish what agency or sub-agency, if any, has the authority to regulate or guide development of autonomous buses.

Currently, the National Highway Traffic Safety Administration (NHTSA) has provided the most thorough guidance on autonomous vehicles, but it has focused almost exclusively on privately owned, individual transport rather than buses or trucking. Buses, meanwhile, are regulated by the Federal Transit Administration (FTA), NHTSA, the Federal Motor Carrier Safety Administration, the Transportation Security Administration (TSA), and various other agencies, depending on what particular regulation is being addressed. With the growth of soft law, particularly for autonomous vehicles, this overlapping jurisdiction becomes even more complicated for those hoping to start a new driverless bus system.


For example, an innovator hoping to start a driverless bus system from Washington, DC to New York City could have approval from NHTSA for the vehicle’s safety standards through an informal regulatory sandbox, but then find himself or herself fighting the TSA over the intercity travel, or state regulations in either location, once the system was ready. This overlapping jurisdiction at the federal level results in further delay for innovators who may think they have properly consulted the necessary agencies or are not required to seek approval.


While evasive entrepreneurs have been able to work within and around such regulations, at other times they have had to engage in innovation arbitrage in order to continue such projects, or stop development before it is fully realized. Yes, Elon Musk might be willing to flip the switch on the Hyperloop with a verbal yes, but other innovators and investors are less likely to pursue costly projects that regularly face regulatory rejection.





2. Vertical Take-Off and Landing (VTOL) aircraft may be more transformational than autonomous buses

It’s possible that at some point multi-passenger VTOL aircraft may actually prove more disruptive and take the place of standard buses. These devices are basically drones that can carry human passengers.


Uber, for example, has already announced its plans to test such technology in the relatively near future. Just as we may skip some levels of automation for particular technologies, we may find that we are better off skipping autonomous buses in favor of other technology altogether.





3. State and local governments also have a significant impact on buses, and that’s not necessarily bad.

Right now a great deal of the regulation of both autonomous vehicles and intracity transit is done at the state or local level, through restrictions on operations, noise control, and locally sanctioned monopolies. Some of this is because of the increasing difficulty of creating formalized rules or legislation to address disruptive technology at a pace sufficient to keep up with innovation. As Adam Thierer, Ryan Hagemann, and I discuss in our forthcoming paper, this has led to an increased use of soft law at the federal level. It has also opened a window for state and local governments to try new policy solutions to determine what (if any) form of regulation might best encourage a disruptive technology like autonomous vehicles. While some economists might argue that every new government regulation is a barrier to efficiency, allowing such local regulation is not in and of itself bad.


If the federal government were to become the new bus czar, it would not likely end well. Not only would cities and states protest the usurpation of their traditional role, but a federal regulator would also lack the local knowledge to determine which tradeoffs to make. Transit best serves citizens when they are making the decisions that most directly impact them. While the future may bring less strict routes, schedules, and stops through services like Lyft Shuttle, even these will require some knowledge of local needs to determine the hours and areas for the most profitable operation.


At the same time, there are real risks that a few powerful states or cities, like California or New York City, could prevent life-saving innovations like autonomous transit from reaching smaller markets. This could happen in a variety of ways, from permitting to lane restrictions to funding. Still, when examined as a national or even international market, it is likely that innovators would choose to take their technology elsewhere, to wherever a market did exist. For example, following increased regulations related to autonomous vehicle testing in California, Uber moved its autonomous vehicle testing to a more welcoming regulatory environment in Arizona. While engaging in such innovation arbitrage is not as easy for an entire transit system, states and cities that are more welcoming, or at least willing to work with technological disruptors, are more likely to see innovators flock to those areas, as well as the tangential benefits of allowing such new technology.





4. Smart cities v. dumb choices

In general, it should be applauded that many states and cities are trying to take proactive action to prepare for the potentially transformative changes of driverless cars. However, many of these actions are dumb choices that neither prepare for the change nor promote innovation. As Emily Hamilton has written, “Self-interested incentives may lead policymakers to implement new technologies without making real changes in the quality of service delivery.”


Some of the investment in technological infrastructure has its benefits, such as providing data for infrastructure decisions and increasing safety and connectedness by enabling more direct communication with citizens. At the same time, many of these projects have been little more than novelties and suffer from the same cronyism issues as other government-funded projects. With autonomous vehicles, cities and states risk betting on the wrong horse and investing in technology that will later be incompatible with the most common product on the market. As Michael and Emily Hamilton have written, given the gap between the proposal of legislation and its actual implementation, it is easy for “smart” technologies to be outdated by the time they actually reach citizens.


Still, there are general policy changes that can prepare cities for a smart future. Adam Thierer has written about three policy proposals (an Innovator’s Presumption, a Sunsetting Imperative, and a Parity Provision) that would enable policymakers to create cities that embrace innovation. These proposals, rather than targeting specific technologies, would create a regulatory environment that encourages experimentation and innovation in a variety of industries.





5. Concerns about the impact of autonomous buses are well-intentioned, but typically more about incumbents maintaining their market share.

As Michael Farren and I wrote about the collective freakout in Oregon over having to pump your own gas, technopanics often overlap with embedded cronyism or with incumbents trying to keep out new entrants through the bootleggers-and-Baptists phenomenon.


Sadly, this phenomenon is starting to emerge in discussions about autonomous buses. Unions in some cities, like Columbus, Ohio, have publicly voiced their opposition should the jobs of current operators be impacted. While job loss is a sad event, new technologies do not merely appear overnight, and they bring with them new job opportunities. Attempts by unions and other advocates to prevent any potential job losses from autonomous vehicles could cost hundreds of thousands of lives, including those of bus and truck drivers. Delaying a life-saving technology that could benefit many because it may negatively impact a few is, in most cases, not a desirable tradeoff. Policymakers and advocates must realize that there will always be tradeoffs and recognize that often a small loss is necessary for a larger gain.


Technology does not just destroy jobs; it also creates them. A 2015 Deloitte study found that in the 140 years since the industrial revolution, new technology has created more jobs than it has destroyed, and not just in areas directly related to the technology. As individuals gained more free time because technology made things like agriculture and manufacturing easier, significant growth occurred not only in jobs related directly to technology but also in service and creative industries. While cars may have put blacksmiths out of work, they provided new opportunities for many others by creating and expanding new industries. It is likely that the current crop of disruptive technologies will do the same.




As both the technology and the policy surrounding autonomous vehicles evolve, these and many other issues will have to be discussed and decided. It is a welcome development that such conversations are beginning to embrace the broader applications of the technology rather than solely focusing on “driverless cars,” and hopefully this expanded focus will allow for even greater innovation and benefits.


February 9, 2018

The FCC’s new Office of Economics and Analytics and the public interest

Last week the FCC commissioners voted to restructure the agency and create an Office of Economics and Analytics. Hopefully the new Office will give some rigor to the “public interest standard” that guides most FCC decisions. It’s important that the FCC formally inject economics into public interest determinations, perhaps much like the Australian telecom regulator’s “total welfare standard,” which is basically a social welfare calculation plus consideration of “broader social impacts.”
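To make the idea concrete, here is a minimal sketch of what a total-welfare test might look like in code. The structure (summing consumer and producer surplus, then adjusting for signed estimates of broader social impacts) is my assumption for illustration, not the Australian regulator’s actual formula, and the numbers are hypothetical.

```python
# Illustrative total-welfare calculation. The function, inputs, and numbers
# are hypothetical; only the surplus-plus-impacts structure is implied by the
# "total welfare standard" described above.

def total_welfare(consumer_surplus, producer_surplus, social_impacts):
    """Sum market surpluses, then add signed estimates of broader impacts
    (e.g., privacy harms as negatives, universal-access gains as positives)."""
    return consumer_surplus + producer_surplus + sum(social_impacts.values())

welfare = total_welfare(
    consumer_surplus=120.0,   # hypothetical $M gain to consumers from a rule
    producer_surplus=-40.0,   # hypothetical $M compliance cost to providers
    social_impacts={"rural_access": 15.0, "privacy_risk": -5.0},
)
print(welfare)  # 90.0: a positive, robust net gain would support approval
```

However one fills in the numbers, the virtue of such a standard is that it forces the agency to state its assumptions and show its arithmetic, which the amorphous public interest standard does not.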


In contrast, the existing “standard” has several components and subcomponents (some of them contradictory) depending on the circumstances; that is, it’s no standard at all. As the first general counsel of the Federal Radio Commission, Louis Caldwell, said of the public interest standard, it means


as little as any phrase that the drafters of the Act could have used and still comply with the constitutional requirement that there be some standard to guide the administrative wisdom of the licensing authority.


Unfortunately, this means public interest determinations are largely shielded from serious court scrutiny. As Judge Posner said of the standard in Schurz Communications v. FCC,


So nebulous a mandate invests the Commission with an enormous discretion and correspondingly limits the practical scope of responsible judicial review.


Posner colorfully characterized FCC public interest analysis in that case:


The Commission’s majority opinion … is long, but much of it consists of boilerplate, the recitation of the multitudinous parties’ multifarious contentions, and self-congratulatory rhetoric about how careful and thoughtful and measured and balanced the majority has been in evaluating those contentions and carrying out its responsibilities. Stripped of verbiage, the opinion, like a Persian cat with its fur shaved, is alarmingly pale and thin.


Every party who does significant work before the FCC has agreed with Judge Posner’s sentiments at one time or another.


Which brings us to the Office of Economics and Analytics. Cost-benefit analysis has its limits, but economic rigor is increasingly important as the FCC turns its attention away from media regulation and towards spectrum assignment and broadband subsidies.


The worst excesses of FCC regulation are in the past, when, for instance, one broadcaster’s staff in 1989 “was required to review 14,000 pages of records to compile information for one [FCC] interrogatory alone out of 299.” Or when, say, FCC staff had to sift through and consider 60,000 TV and radio “fairness” complaints in 1970. These regulatory excesses were corrected by economists (namely, by Ronald Coase’s recommendation that spectrum licenses be auctioned rather than given away for free by the FCC after a broadcast “beauty contest” hearing), but history shows that FCC proceedings spiral out of control without the agency intending it.


Because Congress provided such a nebulous standard, the FCC is always at risk of regressing. Look no further than the FCC’s meaningless “Internet conduct standard” from its 2015 Open Internet Order. This “net neutrality” regulation is a throwback to the bad old days, an unpredictable conduct standard that–like the Fairness Doctrine–would constantly draw the FCC into social policy activism and distract companies with interminable FCC investigations and unknowable compliance requirements.


In the OIO’s mercifully short life, we saw glimpses of the nonsense that would’ve distracted the agency and regulated companies. For instance, prominent net neutrality supporters had wildly different views about whether a common practice, T-Mobile’s “zero rating” of IP content, violated the Internet conduct standard. Chairman Tom Wheeler initially called it “highly innovative and highly competitive,” while Harvard professor Susan Crawford said it was “dangerous” and “malignant” and should be outlawed “immediately.” The nearly year-long FCC investigations into zero rating and the equivocal report they produced sent a clear, chilling message to ISPs and app companies: 20 years of permissionless innovation for the Internet was long enough. Submit your new technologies and business plans to us or face the consequences.


Fortunately, by rescinding the 2015 Order and creating the new economics Office, Chairman Pai and his Republican colleagues are improving the outlook for the development of the Internet. Hopefully the Office will make social welfare calculations a critical part of the public interest standard.


February 6, 2018

What Do We Mean by Technological “Moonshots”? And Why Should We Care about Them?

We hear a lot these days about “technological moonshots.” It’s an interesting phrase because the meanings of both words in it are often left undefined. I won’t belabor the point about how people define–or, rather, fail to define–“technology” when they use it. I’ve already spent a lot of time writing about that problem. See, for example, this constantly updated essay about “Defining ‘Technology.’” It’s a compendium I began curating years ago that collects what dozens of others have had to say on the matter. I’m always struck by how many different definitions I keep unearthing.


The term “moonshots” has a similar problem. The first meaning is the literal one that hearkens back to President Kennedy’s famous 1962 “we choose to go to the moon” speech. That use of the term implies large government programs and agencies, centralized control, and top-down planning with a very specific political objective in mind. Increasingly, however, the term “moonshot” is used more generally, as I note in this new Mercatus essay about “Making the World Safe for More Moonshots.” My Mercatus Center colleague Donald Boudreaux has referred to moonshots as “radical but feasible solutions to important problems,” and Mike Cushing of Enterprise Innovation defines a moonshot as an “innovation that achieves the previously unthinkable.” I like that more generic use of the term and think it could be used appropriately when discussing the big innovations many of us hope to see in fields as diverse as quantum computing, genetic editing, AI and autonomous systems, supersonic transport, and much more. I still have some reservations about the term, but I think it’s definitely a better term than “disruptive innovation,” which is also used differently by various scholars and pundits.




Regardless of what we call them, “We Need Large Innovations,” as entrepreneurship zealot Vinod Khosla argues in a recent essay. Why? Because, as I point out in my new essay:


we should push for more moonshots because there is a profoundly positive correlation between innovation and human prosperity. Countless economic studies and historical surveys have documented the symbiotic relationship among technological progress, economic growth, and improvement of overall social welfare. Big innovations spawn big gains for society in the form of more choices, greater mobility, increased wealth, better health, and longer lifespans.


I hope to build on this point in a forthcoming paper and eventually in a new book. Big innovations–whether we call them “moonshots” or whatever else–pay big dividends for society.


Consequently, getting innovation policy right is essential because, as the great economic historian Joel Mokyr has shown, technological innovation and economic progress must be viewed as “a fragile and vulnerable plant, whose flourishing is not only dependent on the appropriate surroundings and climate, but whose life is almost always short. It is highly sensitive to the social and economic environment and can easily be arrested by relatively small external changes.” Thus, like a plant we wish to grow, we must constantly nurture our innovation policy environment if we hope to grow and prosper as a society. We cannot rest on our past successes. “What matters is the successful striving for what at each moment seems unattainable,” said F. A. Hayek in The Constitution of Liberty. “It is not the fruits of past success but the living in and for the future in which human intelligence proves itself,” he rightly concluded.


January 29, 2018

Nationalizing 5G networks? Why that’s a bad idea.

Yesterday Axios published a bold, bizarre proposal: leaked documents from a “senior National Security Council official” for accelerating 5G deployment in the US. “5G” refers to the latest generation of wireless technologies, whose evolving specifications are being standardized by global telecommunications companies as we speak. The proposal highlights some reasonable concerns–the need for secure networks, the deleterious slowness in getting wireless infrastructure permits from thousands of municipalities and counties–but recommends an unreasonable solution: a government-operated, nationwide wireless network.


The proposal to nationalize some 5G equipment and network components needs to be nipped in the bud. It relies on the dated notion that centralized government management outperforms “wasteful competition.” It’s infeasible and would severely damage the US telecom and Internet sector, one of the brightest spots in the US economy. The plan will likely go nowhere, but the fact that it’s being circulated by administration officials is alarming.


First, a little context. In 1927, the US nationalized all radiofrequency spectrum, and for decades the government rationed out dribbles of spectrum for commercial use (though much has improved since liberalization began in the 1990s). To this day all spectrum is nationalized, and wireless companies operate at sufferance. What this new document proposes would make a poor situation worse.


In particular, the presentation proposes to re-nationalize 500 MHz of spectrum (the 3.7 GHz to 4.2 GHz band, which contains mostly satellite and government incumbents) and build wireless equipment and infrastructure across the country to transmit on this band. The federal government would act as a wholesaler to the commercial networks (AT&T, Verizon, T-Mobile, Sprint, etc.), who would sell retail wireless plans to consumers and businesses.


The justification for nationalizing a portion of 5G networks has a national security component and an economic component: prevent Chinese spying and beat China in the “5G race.”


The announced goals are simultaneously broad and narrow, and in severe tension.


The plan is broad in that it contemplates nationalizing part of the 5G equipment and network. However, it’s narrow in that it would nationalize only a portion of the 5G network (3.7 GHz to 4.2 GHz) and not other portions (like 600 MHz and 28 GHz). This undermines the national security purpose (assuming it’s even feasible to protect the nationalized portion) since 5G networks interconnect. It’d be like having government checkpoints on Interstate 95 but leaving all other interstates checkpoint-free.


Further, the document’s author misunderstands the evolutionary nature of 5G networks. For a while, 5G will be an overlay on the existing 4G LTE network, not a brand-new parallel network, as the NSC document assumes. 5G equipment will be installed on 4G LTE infrastructure in neighborhoods where capacity is strained. In fact, as Sherif Hanna, director of the 5G team at Qualcomm, noted on Twitter, “the first version of the 5G [standard]…by definition requires an existing 4G radio and core network.”



Just to be completely clear: the first version of the 5G NR standard, which is called "NSA" (Non-Standalone), by definition requires an existing 4G radio and core network. One cannot simply build out a complete NSA 5G network without having a 4G network already in place.


— Sherif Hanna


January 25, 2018

Clinton’s “Progressive” Tech Policy Still Wise Today

Co-authored with Adam Thierer


Why would progressives abandon the most successful progressive technology policy ever formulated?


In a recent piece in The Washington Spectator, Marc Rotenberg and Larry Irving have some harsh words for progressives’ supposed starry-eyed treatment of Internet firms and the Clinton Administration policies that helped give rise to the modern digital economy. They argue that the Internet has failed to live up to its promise in part because “[p]rogressive leaders moved away from progressive values on tech issues, and now we live with the consequences.”


But if the modern Internet we know today is truly the result of progressives’ self-repudiation, then we owe them and the Clinton Administration a debt of gratitude, not a lecture.


Unfortunately, Rotenberg and Irving take a different perspective. They criticize progressives for standing aside while “a new mantra of ‘multistakeholder engagement’” replaced traditional regulatory governance structures, unleashing a Pandora’s Box of “self-regulatory processes” that failed to keep the private sector accountable to the public.


Rotenberg and Irving are also upset that the First Amendment rights of Internet companies have received stronger support following the implementation of Section 230 of the Communications Decency Act, which was enacted by Congress in 1996 and signed into law by President Clinton as part of the Telecommunications Act of 1996.


All of this could have been avoided, they argue, if the Clinton Administration had instead embraced the creation of a National Information Infrastructure (NII) to govern the Internet. As part of its 1993 proposed “Agenda for Action,” the Clinton White House toyed with the idea that “[d]evelopment of the NII can help unleash an information revolution that will change forever the way people live, work, and interact with each other,” citing specific examples of how it would: empower people to “live almost anywhere they wanted, without foregoing opportunities for useful and fulfilling employment”; make education “available to all students, without regard to geography, distance, resources, or disability”; and permit healthcare and other social needs to be delivered “on-line, without waiting in line, when and where you needed them.” Luckily, all these things came to pass precisely because the Clinton Administration went a different route, ignoring the heavy-handed regulatory approach offered by early tech policy wonks and opting instead to embrace a different governance framework: The Framework for Global Electronic Commerce.


The 1997 Framework outlined a succinct, market-oriented vision for the Internet and the emerging digital economy. It envisioned a model of cyberspace governance that relied on multistakeholder collaboration and ongoing voluntary negotiations and agreements to find consensus on the new challenges of the information age. Policy was to be formulated in an organic, bottom-up, and fluid fashion. This was a stark and welcome break from the failed top-down technocratic regulatory regimes of the analog era, which had long held back innovation and choice in traditional communications and media sectors.


“Where governmental involvement is needed,” The Framework advised, “its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.” The result was one of the most amazing explosions in innovation our nation and, indeed, the entire world had ever witnessed. It was precisely the flexibility of multistakeholder governance—as well as the strong support for the free flow of speech and commerce—that unleashed this tsunami of technological progress.  


It’s strange, then, that Rotenberg and Irving decry the era of “multistakeholder engagement” that the Clinton Administration Framework presaged, especially because they included similar provisions in their own frameworks. For example, in “A Public-Interest Vision of the National Information Infrastructure,” the authors specifically called for “democratic policy-making” in the governance of the emerging Internet, arguing that “[t]he public should be fully involved in policy-making for the information infrastructure.” They go even further by citing the value of “participatory design,” which emphasized iterative experimentation and information feedback loops (learning by doing) in the process of designing network standards and systems. These “[n]ew approaches,” Rotenberg and Irving argue, “combine the centralized and decentralized models, obtaining the benefits of each while avoiding their deficiencies.” Embracing “[b]oth participatory design and the experimental approach to standardization,” they concluded, would “achieve the benefits of democratic input to design and policy-making without sacrificing the technical advantages of consistency and elegance of design.”


On this point, Rotenberg and Irving are correct. Unfortunately, it seems their appreciation of such processes does not extend to the regulatory structures overseeing these technologies. This is despite the “Agenda for Action” explicitly calling for the NII to “complement … the efforts of the private sector” by “work[ing] in close partnership with business, labor, academia, the public, Congress, and state and local government.” What’s more “multistakeholder” than that?


For all their lamentations of the multistakeholder process, Rotenberg and Irving engaged in that very process in the 1990s. Their proposals had their shot at convincing the Clinton Administration that a national regulatory agency governing the Internet was necessary to usher in the digital age. And in one of those ironic twists of history, they failed to get their agency, but nevertheless bore witness to the emergence of a free and open Internet where innovation and progress still flourish.


We shouldn’t lose sight of this miraculous achievement and the public policies that made it all possible. There’s nothing “progressive” about rolling back the clock in the way Rotenberg and Irving recommend. Instead, America should double-down on the Clinton Administration’s vision for innovation policy by embracing permissionless innovation, collaborative multistakeholderism, and strong support for freedom of speech as the cornerstones of public policy toward other emerging technologies and sectors.


January 9, 2018

A welcome restructuring at the FCC

The FCC released a proposed Order today that would create an Office of Economics and Analytics. Last April, Chairman Pai proposed this data-centric office. There are about a dozen bureaus and offices within the FCC and this proposed change in the FCC’s organizational structure would consolidate a few offices and many FCC economists and experts into a single office.


This is welcome news. Several years ago, when I was in law school, I was a legal clerk for the FCC Wireless Bureau and for the FCC Office of General Counsel. During that ten-month stint, I was surprised at the number of economists at the FCC, all of them excellent. I assisted several of them closely (and helped organize what one FCC official unofficially dubbed “The Economists’ Cage Match” for outside experts sparring over the competitive effects of the proposed AT&T-T-Mobile merger). However, my impression even during my limited time at the FCC was well stated by Chairman Pai in April:


[E]conomists are not systematically incorporated into policy work at the FCC. Instead, their expertise is typically applied in an ad hoc fashion, often late in the process. There is no consistent approach to their use.


And since the economists are sprinkled about the agency, their work is often “siloed” within their respective bureaus. Economics as an afterthought in telecom is not good for the development of US tech industries, nor for consumers.


As Geoffrey Manne and Allen Gibby said recently, “the future of telecom regulation is antitrust,” and the creation of the OEA is a good step in line with global trends. Many nations–like the Netherlands, Denmark, Spain, Japan, South Korea, and New Zealand–are restructuring their legacy telecom regulators. The days of public and private telecom monopolies and of discrete, separate communications, computer, and media industries (and thus bureaus) are past. Convergence, driven by IP networks and deregulation, has created these trends and resulted in sometimes dramatic restructuring of agencies.


In Denmark, for instance, as Roslyn Layton and Joe Kane have written, national parties and regulators took inspiration from the deregulatory plans of the Clinton FCC. The Social Democrats, the Radical Left, the Left, the Conservative People’s Party, the Socialist People’s Party, and the Center Democrats agreed in 1999:


The 1990s were focused on breaking down old monopoly; now it is important to make the frameworks for telecom, IT, radio, TV meld together—convergence. We believe that new technologies will create competition.


It is important to ensure that regulation does not create a barrier for the possibility of new converged products; for example, telecom operators should be able to offer content if they so choose. It is also important to ensure digital signature capability, digital payment, consumer protection, and digital rights. Regulation must be technologically neutral, and technology choices are to be handled by the market. The goal is to move away from sector-specific regulation toward competition-oriented regulation. We would prefer to handle telecom with competition laws, but some special regulation may be needed in certain cases—for example, regulation for access to copper and universal service.


This agreement was followed up by the quiet shuttering of NITA, the Danish telecom agency, in 2011.


Bringing economic rigor to the FCC’s notoriously vague “public interest” standard seemed to be occurring (slowly) during the Clinton and Bush administrations. During the Obama years, however, this progress was derailed, largely by the net neutrality silliness, which not only distracted US regulators from actual problems like rural broadband expansion but also reinvigorated the media-access movement, whose followers believe the FCC should have a major role in shaping US culture, media, and technologies.


Fortunately, those days are in the rearview mirror. The proposed creation of the OEA represents another pivot toward the likely future of US telecom regulation: a focus on consumer welfare, competition, and data-driven policy.


January 2, 2018

The Top 10 Tech Liberation Posts in 2017

Technology policy has made major inroads into a growing number of fields in recent years, including health care, labor, and transportation, and we at the Technology Liberation Front have brought a free-market lens to these issues for over a decade. As is our annual tradition, below are the most popular posts* from the past year, as well as key excerpts.


Enjoy, and Happy New Year.


10. Thoughts on “Demand” for Unlicensed Spectrum


Unlicensed spectrum is a contentious issue because the FCC gives out this valuable spectrum for free to device companies. The No. 10 most-read piece in 2017 was my January commentary on the proposed Mobile Now Act. In particular, I was alarmed at some of the vague language encouraging unlicensed spectrum.


Note that we have language about supply and demand here [in the bill]. But unlicensed spectrum is free to all users using an approved device (that is, nearly everyone in the US). Quantity demanded will always outstrip quantity supplied when a valuable asset (like spectrum or real estate) is handed out when price = 0. By removing a valuable asset from the price system, large allocation distortions are likely.


Any policy originating from Congress or the FCC to satisfy “demand” for unlicensed spectrum biases the agency towards parceling out an excessive amount of unlicensed spectrum.


9. The FCC’s Misguided Paid Priority Ban


Net neutrality has been generating clicks for over a decade and there was plenty of net neutrality news in 2017. In April, I explained why regulating and banning “paid priority” agreements online is damaging to the Internet.


The notion that there’s a level playing field online needing preservation is a fantasy. Non-real-time services like Netflix streaming, YouTube, Facebook pages, and major websites can mostly be “cached” on servers scattered around the US. Major web companies have their own form of paid prioritization–they spend millions annually, including large payments to ISPs, on transit agreements, CDNs, and interconnection in order to avoid congested Internet links.


The problem with a blanket paid priority ban is that it biases the evolution of the Internet in favor of these cache-able services and against real-time or interactive services like teleconferencing, live TV, and gaming. Caching doesn’t work for these services because there’s nothing to cache beforehand.


Happily, a few months after this post was published, the Trump FCC, led by Chairman Pai, eliminated the intrusive 2015 Internet regulations, including the “paid priority ban.”


8. Who needs a telecom regulator? Denmark doesn’t.


In March, the Mercatus Center published a case study by Roslyn Layton, a Trump transition team member, and Joe Kane about Denmark’s successful telecom reform since the 1990s. I summarized the paper for readers after it was published.


Layton and Kane explore Denmark’s relatively free-market telecom policies. They explain how Denmark modernized its telecom laws over time as technology and competition evolved. Critically, the center-left government eliminated Denmark’s telecom regulator in 2011 in light of the “convergence” of services to the Internet. Scholars noted,


“Nobody seemed to care much—except for the staff who needed to move to other authorities and a few people especially interested in IT and telecom regulation.”


Even-handed, light telecom regulation performs pretty well. Denmark, along with South Korea, leads the world in terms of broadband access. The country also has a modest universal service program that depends primarily on the market. Further, similar to other Nordic countries, Denmark permitted a voluntary forum, including consumer groups, ISPs, and Google, to determine best practices and resolve “net neutrality” controversies.


This fascinating Layton-Kane case study inspired a November event in DC about the future of US telecom law featuring FCC Chairman Ajit Pai and former Danish regulator Jakob Willer.


7. Shouldn’t the Robots Have Eaten All the Jobs at Amazon By Now?


Artificial intelligence and robotics are advancing rapidly but no one is certain what the effects will be for American labor markets. In July, Adam looked at Amazon’s incorporation of robots and urged scholars and policymakers to resist the doomsayers who predict crushing unemployment.


The reality is that we suffer from a serious poverty of imagination when it comes to thinking about the future, and future job opportunities in particular. …Old jobs and skills are indeed often replaced by mechanization and new technological processes. But that in turn opens the door to people to take on new opportunities — often in new sectors and new firms, but sometimes even within the same industries and companies. And because human needs and wants are essentially infinite, this process just goes on and on and on as we search for new and better ways of doing things. And that’s how, in the long run, robots and automation are actually employment-enhancing rather than employment-reducing.


6. Does “Permissionless Innovation” Even Mean Anything?


Adam spoke at an Arizona State University conference in May about emerging technologies and published his remarks at Tech Liberation. He commented on the rise of “soft law” for government oversight of tech-infused, fast-moving industries.


That is, there seemed to be some grudging acceptance on both our parts that “soft law” systems, multistakeholder processes, and various other informal governance mechanisms will need to fill the governance gap left by the gradual erosion of hard law.


Many other scholars, including many of you in this room, have discussed the growth of soft law mechanisms in specific contexts, but I believe we have probably failed to acknowledge the extent to which these informal governance models have already become the dominant form of technological governance, at least in the United States.


5. Book Review: Garry Kasparov’s “Deep Thinking”


In May, Adam reviewed Garry Kasparov’s new book about AI, describing it as a “welcome breath of fresh air” in a genre often devoted to generating technopanics.


Kasparov’s book serves as the perfect antidote to the prevailing gloom-and-doom narrative in modern writing about artificial intelligence (AI) and smart machines. His message is one of hope and rational optimism about a future in which we won’t be racing against the machines but rather running alongside them and benefiting in the process.


…Kasparov suggests that there are lessons for us in the history of chess as well as from his own experience competing against Deep Blue. He notes that his match against IBM’s supercomputer, “was symbolic of how we are in a strange competition both with and against our creation in more ways every day.”


Instead of just throwing our hands up in the air in frustration, we must be willing to embrace the new and unknown — especially AI and machine-learning.


4. Remember What the Experts Said about the Apple iPhone 10 Years Ago?


2017 marked the ten-year anniversary of the release of the first iPhone. Adam took a look back at some of the predictions made when the groundbreaking device first hit stores.


A decade after these predictions were made, Motorola, Nokia, Palm, and Blackberry have been decimated by the rise of Apple as well as Google (which actually purchased Motorola in the midst of it all). And Microsoft still struggles with mobile, even though it remains a player in the field. Rarely have Joseph Schumpeter’s “perennial gales of creative destruction” blown harder than they have in the mobile sector over this 10-year period.


3. 4 Ways Technology Helped During Hurricanes Harvey and Irma (and 1 more it could have)


Jennifer Huddleston Skees joined our team in 2017, and in September she wrote the No. 3 most-popular post of the year, about how technology is aiding disaster relief.


Technology is changing the way we respond to disasters and assisting with relief efforts. As Allison Griswold writes at Quartz, this technology-enabled response has redefined how people provide assistance in the wake of disaster. We cannot plan how such technology will react to difficult situations or the actions of such platforms’ users, but the recent events in Florida and Texas show it can enable us to help one another even more. The more technology is allowed to participate in a response, the better it enables people to connect to those in need in the wake of disaster.


2. Some background on broadband privacy changes


Hyperbole, misinformation, and worse are amplified in too many news stories and Facebook feeds whenever Republicans undo an Obama FCC priority. Early in 2017, Congress and President Trump decided to use the rarely used Congressional Review Act process to repeal broad Internet privacy regulations passed by the Obama FCC in 2016. My explainer about what was really going on (no, ISPs are not selling your SSNs and location information without your permission) was the No. 2 story of the year.


Considering that these notice and choice rules have not even gone into effect, the rehearsed outrage from advocates demands explanation: The theatrics this week are not really about congressional repeal of the (inoperative) privacy rules. Two years ago the FCC decided to regulate the Internet in order to shape Internet services and content. The leading advocates are outraged because FCC control of the Internet is slipping away. Hopefully Congress and the FCC will eliminate the rest of the Title II baggage this year.


1. Here’s why the Obama FCC Internet regulations don’t protect net neutrality


There are plenty of myths about the 2015 “net neutrality” Order. Fortunately, many people out there are skeptical of the conventional narrative surrounding net neutrality. My post from July about the paper-thin net neutrality protections in the 2015 Order saw new life in November and December, when the Trump FCC released a proposal to repeal the 2015 Order. Driven by the theatrics of those opposing the December 2017 Restoring Internet Freedom Order (and a Mark Cuban retweet), this post came from behind to be the most-read Technology Liberation post of the year.


The 2016 court decision upholding the rules was a Pyrrhic victory for the net neutrality movement. In short, the decision revealed that the 2015 Open Internet Order provides no meaningful net neutrality protections–it allows ISPs to block and throttle content. As the judges who upheld the Order said, “The Order…specifies that an ISP remains ‘free to offer ‘edited’ services’ without becoming subject to the rule’s requirements.”


No one knows what 2018 has in store for technology policy, but your loyal TLF bloggers are preparing for driverless car technology, cybersecurity, spectrum policy, and more.


Stay tuned, and thanks for reading.




*Excepting the most-read post, which was a 2017 update to a 2014 post from Adam about the definition of technology.


How to Sell a Book about Tech Policy: Turn the Technopanic Dial Up to 11

Reason magazine recently published my review of Franklin Foer’s new book, World Without Mind: The Existential Threat of Big Tech. My review begins as follows:


If you want to sell a book about tech policy these days, there’s an easy formula to follow.


First you need a villain. Google and Facebook should suffice, but if you can throw in Apple, Amazon, or Twitter, that’s even better. Paint their CEOs as either James Bond baddies bent on world domination or naive do-gooders obsessed with the quixotic promise of innovation.



Then you repackage some old chestnuts about commercialism or false consciousness. Add a dash of pop psychology and behavioral economics. Be sure to include a litany of woes about cognitive overload and social isolation.

Finally, come up with a juicy Chicken Little title. Maybe something like World Without Mind: The Existential Threat of Big Tech. Wait—that one’s taken. It’s the title of Franklin Foer’s latest book, which follows this familiar techno-panic template almost perfectly.


The book doesn’t break a lot of new ground; it serves up the same old technopanicky tales of the gloom and doom that many others have said will befall us unless something is done to save us. But Foer’s unique contribution is to unify many diverse strands of modern tech criticism in one tome, and then amp up the volume of panic about it all. Hence the “existential” threat in the book’s title. I bet you didn’t know the End Times were so near!


Read the rest of my review over at Reason. And, if you care to read some of my other essays on technopanics through the ages, here’s a compendium of them.
