Adam Thierer's Blog

August 16, 2018

The Pacing Problem, the Collingridge Dilemma & Technological Determinism

I recently posted an essay over at The Bridge about “The Pacing Problem and the Future of Technology Regulation.” In it, I explain why the pacing problem—the notion that technological innovation is increasingly outpacing the ability of laws and regulations to keep up—“is becoming the great equalizer in debates over technological governance because it forces governments to rethink their approach to the regulation of many sectors and technologies.”


In this follow-up article, I wanted to expand upon some of the themes developed in that essay and discuss how they relate to two other important concepts: the “Collingridge Dilemma” and technological determinism. In doing so, I will build on material that is included in a forthcoming law review article I have co-authored with Jennifer Skees, Ryan Hagemann (“Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future”) as well as a book I am finishing up on the growth of “evasive entrepreneurialism” and “technological civil disobedience.”


Recapping the Nature of the Pacing Problem

First, let us quickly recap the nature of “the pacing problem.” I believe Larry Downes did the best job explaining the “problem” in his 2009 book, The Laws of Disruption. Downes argued that “technology changes exponentially, but social, economic, and legal systems change incrementally” and that this “law” was becoming “a simple but unavoidable principle of modern life.”


Downes was generally a cheerleader for such developments. For him, the pacing problem is more like the pacing benefit. But Downes is in the minority among tech policy scholars in this regard. In the field of Science and Technology Studies (STS), discussions about the pacing problem and what to do about it are omnipresent and full of foreboding gloominess.


STS is a broad field of interdisciplinary studies unified by a concern with “the impacts and control of science and technology, with particular focus on the risks, benefits and opportunities that S&T may pose” to a wide range of values. STS incorporates many disciplines: legal and philosophical studies, sociology, anthropology, engineering, and others. In countless essays, papers, journal articles, and books, STS scholars lament the pacing problem and insist that something must be done, often without ever getting around to explaining what that something is.



Regardless of their field of study, there is broad recognition among these scholars that new technological, social, and political realities make the pacing problem a phenomenon worth studying.  In my Bridge essay, I identified three primary drivers of the pacing problem:



Technological driver: The power of “combinatorial innovation,” which is driven by “Moore’s Law,” fuels a constant expansion of technological capabilities.
Social driver: Citizens quickly assimilate new tools into their daily lives and then expect that even more and better tools will be delivered tomorrow.
Political driver: Government has grown increasingly dysfunctional and unable to adapt to those technological and social changes.

The “Collingridge Dilemma”

Although they do not always refer to it by name, STS scholars regularly stress the so-called “Collingridge dilemma” in their work. The Collingridge dilemma refers to the extreme difficulty of putting proverbial genies back in their bottles once a given technology has reached a certain inflection point in society. The concept is named after David Collingridge, who wrote about the challenges of governing emerging technologies in his 1980 book, The Social Control of Technology.


“The social consequences of a technology cannot be predicted early in the life of the technology,” Collingridge argued. “By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economics and social fabric that its control is extremely difficult.” He called this the “dilemma of control,” and asserted that, “When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time-consuming.”


In a sense, the “Collingridge dilemma” is simply a restatement of the pacing problem but with (1) greater stress on the social drivers behind the pacing problem and (2) an implicit solution to “the problem” in the form of preemptive control of new technologies while they are still young and more manageable.


Specifically, for many STS scholars, Collingridge’s “dilemma” is preferably solved through the application of the Precautionary Principle. The contours of the Precautionary Principle are notoriously murky and ill-defined. Nonetheless, as I discussed at great length in my last book on the subject, the Precautionary Principle generally refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.


You can see the logic of the Collingridge dilemma and the Precautionary Principle at work everywhere in STS scholarship today. Few scholars want to admit they favor the Precautionary Principle, however, so they often use different terminology. “Anticipatory governance” or “upstream governance” are the preferred terms of art these days.


For example, in a recent law review article about “Regulating Disruptive Innovation,” Nathan Cortez argues that “new technologies can benefit from decisive, well-timed regulation” or even “early regulatory interventions.” Similarly, writing in Slate in 2014, John Frank Weaver insisted we should regulate emerging tech like artificial intelligence “early and often” to “get out ahead of” various social and economic concerns.


In his most recent book, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control, bioethicist Wendell Wallach also argued for new forms of upstream governance, defining it as a system that allows for “more control over the way that potentially harmful technologies are developed or introduced into the larger society.” “Upstream management is certainly better than introducing regulations downstream, after a technology is deeply entrenched, or something major has already gone wrong,” he argued. Wallach is basically just restating the Collingridge dilemma in this regard.


The problem with all these calls for anticipatory or upstream governance solutions to the pacing problem and the Collingridge dilemma is that, like the Precautionary Principle more generally, the specific solutions are often incoherent or completely lacking. STS scholars almost always leave the reader hanging without offering a conclusion to their gloomy, pessimistic narratives about whatever technology or technological process they are critiquing. Critics are quick to issue bold calls to action, but rarely provide a detailed blueprint.


There are some exceptions. Some STS scholars have advocated Precautionary Principle-minded legislation or agencies, like an “Artificial Intelligence Development Act,” a “National Algorithmic Technology Safety Administration,” or a “Federal Robotics Commission.” Meanwhile, over the past decade, many STS scholars have pushed for national privacy and cybersecurity legislation, or expansive new forms of liability for technology companies. The regulatory authority sought in these cases would be squarely precautionary in character, aimed at addressing a wide array of hypothetical harms through permission-based rulemaking before those problems even materialize.


Technological Determinism?

Discussions about the pacing problem and the Collingridge dilemma have an air of technological determinism to them. Technological determinism generally refers to the notion that technology almost has a mind of its own and that it will plow forward without much resistance from society or governments. Here is a more scholarly definition from Sally Wyatt, who has explained how technological determinism is generally defined in a two-part fashion:


The first part is that technological developments take place outside society, independently of social, economic, and political forces. New or improved products or ways of making things arise from the activities of inventors, engineers, and designers following an internal, technical logic that has nothing to do with social relationships. The more crucial second part is that technological change causes or determines social change.


The opposite of technological determinism is usually referred to as “social constructivism,” which as Thomas Hughes notes, “presumes that social and cultural forces determine technical change.”


Ironically, among STS scholars, technological determinist reasoning is both (a) regularly on display, and (b) generally reviled. That is, many STS scholars speak in deterministic tones about the inevitability of certain technological developments, but then effortlessly shift into social constructivist mode when commenting on what they hope to do about it.


One of the most well-known technology critics of the past century was French philosopher Jacques Ellul. It is impossible to read his tracts and not find deterministic reasoning flying off every other page. He argued, for example, that technology is “self-perpetuating, all-pervasive, and inescapable,” and that it represents “an autonomous and uncontrollable force that dehumanized all that it touches.” Moreover, within the field of Marxist studies, technological determinism is ubiquitous. That goes back to Marx himself and his many ideological descendants, who held strongly deterministic views about the role technology played in shaping history and socio-political systems. Plenty of other STS scholars remain hard-core social constructivists, however, and insist that dealing with the pacing problem and the Collingridge dilemma really just comes down to a matter of sheer social and political willpower.


Techno-determinist thinking is usually on display in more vivid terms among technological optimists. Reading the writings of futurists like Ray Kurzweil and Kevin Kelly, one cannot help but get the sense that they are pining for the day when we are all just assimilated into The Matrix. There is an air of utter futility associated with humanity’s efforts to resist the spread of various technological systems and processes. Philosopher Michael Sacasas refers to this mentality as “the Borg Complex,” which, he says, is often “exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile.”


The point I am trying to make here is that technological determinism is at work in all sorts of scholarship and punditry. Regardless of whether one subscribes to what Ian Barbour has labelled the warring viewpoints of “Technology as Liberator” or “Technology as a Threat,” very different people can hold strongly deterministic viewpoints.


Soft Determinism

The problem with all this talk about determinism—technological, social, political, or whatever—is that the lines are never quite as bright as some suggest. “Hard” determinism of any of these varieties simply cannot be correct. We have too many historical examples that run counter to both narratives.


Personally, I’ve always subscribed to what some refer to as “soft technological determinism.” Technological historian Merritt Roe Smith defines “soft determinism” as the view “which holds that technological change drives social change but at the same time responds discriminatingly to social pressures,” as compared to “hard determinism,” which “perceives technological development as an autonomous force, completely independent of social constraints.”


Konstantinos Stylianou has offered a variant of soft determinism that zeroes in on better understanding the unique attributes of specific technologies and political systems when considering how difficult they may be to control. He argues that “there are indeed technologies so disruptive that by their very nature they cause a certain change regardless of other factors,” such as the Internet. Stylianou concludes that:


It seems reasonable to infer that the thrust behind technological progress is so powerful that it is almost impossible for traditional legislation to catch up. While designing flexible rules may be of help, it also appears that technology has already advanced to the degree that is able to bypass or manipulate legislation. As a result, the cat-and-mouse chase game between the law and technology will probably always tip in favor of technology. It may thus be a wise choice for the law to stop underestimating the dynamics of technology, and instead adapt to embrace it.


That may sound like just more hard deterministic thinking, but it represents a softer variety that holds that the special characteristics of some technologies are indeed altering our capacity to govern many newer sectors using traditional regulatory mechanisms. In my new law review article with Jennifer Skees and Ryan Hagemann, we conclude that this is the key factor motivating the gradual move away from “hard law” and toward “soft law” governance tools for a great many emerging technologies.


To be clear, this does not mean we are going to soon reach the proverbial “end of politics” or the “death of the nation-state” due to technology, or anything like that. As I point out in my forthcoming book, that sort of talk is silly. Some technology enthusiasts or libertarians use techno-determinist talk as if they are preaching a gospel of liberation theology—liberation from the state through technology emancipation, that is.


In reality, technology giveth and technology taketh away. Technology can empower people and institutions and help them challenge laws, regulations, and entire political systems. My forthcoming book documents how many “evasive entrepreneurs” are doing just that today, and with increasing regularity. But technology empowers government actors, too. In an unpublished 2009 manuscript entitled “Does Technology Drive the Growth of Government?” my Mercatus Center colleague Tyler Cowen noted how the growth of big government in the 20th century was greatly facilitated by various modern technologies (advanced transportation and communications networks, in particular). “Future technologies may either increase or decrease the role of government in society,” he noted, “but if history shows one thing, it is that we should not neglect technology in understanding the shift from an old political equilibrium to a new one.”


Thus, those who think that the pacing problem is a one-way ratchet to emancipation from state control need to realize that technology can be used for good and bad ends, and it can be used (and abused) by governments to expand their powers and limit our liberties. Similarly, those tech critics and STS scholars who lament how the pacing problem will undermine governments, democracy, or other institutions or values without radical interventions also are going too far. They need to recognize that while it is true many new technologies will march forward at a steady clip, it does not mean that society is powerless to bring some order to technological processes. We shape our tools and then our tools shape us. And then we create still more tools to improve upon previous tools, and the process goes on and on.


John Seely Brown and Paul Duguid put it best in a 2001 essay responding to “doom-and-gloom technofuturists”:


[T]echnological and social systems shape each other. The same is true on a larger scale. . . . Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge . . . is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.


So yes, the pacing problem is real, and it will continue to raise problems for social and political systems. But as Brown and Duguid suggest, we’ll constantly adapt, form and reform new dynamic equilibriums, and then “muddle through,” just as we have so many times before.


___________________


Related Reading



The Pacing Problem and the Future of Technology Regulation
Muddling Through: How We Learn to Cope with Technological Change
A Short Response to Michael Sacasas on Advice for Tech Writers
Are “Permissionless Innovation” and “Responsible Innovation” Compatible?
Wendell Wallach on the Challenge of Engineering Better Technology Ethics
Book Review: Calestous Juma’s “Innovation and Its Enemies”
Evasive Entrepreneurialism and Technological Civil Disobedience: Basic Definitions

 


 

Published on August 16, 2018 15:41

On cable operators’ junior varsity First Amendment rights

For decades, cities, the FCC, and Congress have mandated that cable TV operators carry certain types of TV programming, including public access channels, local broadcast channels, local public television, and children’s programming. These carriage mandates have generated several First Amendment lawsuits, but cable operators have generally lost. Cable operators have junior varsity First Amendment rights, and the content they distribute is more regulated than, say, newspapers, Internet service providers, search engines, and Netflix. I submitted public interest comments (with JP Mohler) to the FCC this week explaining why cable operators would likely win today if they litigated these cable carriage regulations.


Regulations requiring newspapers, book publishers, or Internet service providers to carry the government’s preferred types of content are subject to strict scrutiny, which means such regulations typically don’t survive. However, cable is different, the Supreme Court held in the 1994 Turner case. The Supreme Court said regulations about what cable operators must carry are subject to intermediate–not strict–scrutiny because cable operators (in 1994) possessed about 95% of the subscription TV market and nearly every household had a single choice for subscription TV–their local cable monopoly. In the words of the Supreme Court, cable’s content regulations “are justified by the special characteristics of the cable medium: the bottleneck monopoly power exercised by cable operators.”


As a result, the FCC enforces “leased access” regulations that require cable operators to leave blank certain TV channels and give non-affiliated programmers a chance to use that channel capacity and gain viewership. Cable operators in the 1990s sued the FCC for enforcing these regulations in a 1996 case called Time Warner v. FCC. The DC Circuit relied on the 1994 Turner case and upheld the leased access rules.


Recently, however, the FCC asked whether First Amendment interests or TV competition requires giving these regulations another look. In our public interest comment, JP and I say that these rules have outlived their usefulness and cable operators would likely win a First Amendment lawsuit against the FCC today.


Two things have changed. First, cable operators have lost their “bottleneck monopoly power” that justified, in the eyes of the Supreme Court in 1994, giving cable operators weakened First Amendment protection.


Unlike in the 1990s, cable operators face significant competition in most local markets from satellite and telco TV providers. Over 99 percent of US households have at least three pay-TV options, and cable has lost over 15 million subscriber households since 2002. In 1997, when Turner II was decided, cable had over 90 percent of the pay-TV market. Cable operators’ market share has shrunk nearly every year since, and in 2015 cable had around 54 percent market share.


This competitive marketplace has stimulated massive investment and choice in TV programming. The typical household has access to far more channels than in the past. Independent researchers found that a typical US household in 1999 received about 50 TV channels. By 2014, the typical household received over 200 TV channels. In 2018, there will be an estimated 520 scripted TV series available, which is up nearly 50 percent from just five years ago.


This emergence of TV competition and its beneficial effects in programming and consumer choice undermines the justification for upholding cable content regulations like leased access.


Second, courts are more likely to view the Supreme Court’s Denver decision about leased access regulations in a new light.  In Denver, the Supreme Court divided into concurrences as to the proper First Amendment category of cable operators, and whether intermediate or strict scrutiny should apply to the leased access laws at issue. The “Marks test” is the test lower courts use for determining the holding of a Supreme Court decision where there is no majority supporting the rationale of any opinion. Viewed through the lens of the prevailing Marks test, cable operators are entitled to “bookstore owner” status for First Amendment purposes:


Given that four justices in Denver concur that one of the potential bases for deciding cable’s First Amendment status is the classification of cable operators as bookstores and three justices concur that this classification is the definitive justification for the judgment, the narrowest grounds for resolving the issue is simply this latter justification. Under the prevailing Marks test, then, lower courts will apply strict scrutiny to the leased access rules in light of the Denver decision.


For these reasons, and the need to conserve agency resources for more pressing matters, like rural broadband deployment and spectrum auctions, we encourage the FCC to discontinue these regulations.


You can read our public interest comment about the leased access regulations at the Mercatus Center website.


Leased Access Mandates Infringe on the First Amendment Rights of Cable Operators, and the FCC Should Decline to Enforce the Regulations

Published on August 16, 2018 11:27

August 15, 2018

The Problem of Patchwork Privacy

There are a growing number of voices raising concerns about privacy rights and data security in the wake of news of data breaches and the potential misuse of personal data. The European Union (EU) recently adopted the heavily restrictive General Data Protection Regulation (GDPR), which favors individual privacy over innovation or the right to speak. While there has been some discussion of potential federal legislation related to data privacy, none of these attempts has truly gained traction beyond existing special protections for vulnerable users (like children) or specific information (like that of healthcare and finances). Some states, notably including California, are attempting to solve this perceived problem of data privacy on their own, but they are often creating bigger problems by passing potentially unconstitutional and poorly drafted solutions.



All states have at least minimal data breach laws, though the quality of such laws, in both effectiveness and impact on innovation, varies. Normally states work as “laboratories of democracy” and are able to test out different regulatory schemes for new technologies with less demosclerosis than the federal process. Similarly, they are better able to account for different preferences in tradeoffs, and in some cases they are more able to remove barriers to entry by reforming existing areas of law, like licensure or products liability, to accommodate a new technology. In areas like autonomous vehicles, telemedicine, and drone policy, states are often leading the way in embracing these new technologies. However, a new trend in some states, formally regulating the Internet through laws aimed at data privacy or net neutrality to correct what they perceive as failures of the federal government to act, ignores the potential damage to the permissionless federal policy that made the Internet what it is today.


California has passed the California Consumer Privacy Act (CCPA), and other states are likely to follow suit. Unfortunately, these types of statutes are likely to harm innovation in a misguided attempt to correct issues with data privacy. Moreover, these statutes could reach far beyond state borders, illustrating the potential risks of a fifty-state privacy patchwork.


These laws will also likely create problems in identifying which entities are covered by the privacy legislation. California’s recent CCPA defines those required to comply so ambiguously that a reasonable interpretation would imply the law applies so long as a single user is a resident of California, whether that user is accessing the website from California or not, and whether or not the website purposefully avails itself of California.


State laws also unintentionally make it more difficult for small, local companies to compete with Internet giants. Large companies like Google and Facebook can afford the cost of additional compliance, but it is more difficult for smaller and mid-size companies to cover such costs. Even when they can comply, they are often left with fewer resources to fund future innovation because those resources go to compliance instead. In a world of state-based privacy laws, it is inevitable that some states would impose contradictory standards, which might actually make matters worse rather than better as companies pick and choose which states to comply with. What is already playing out in Europe, where small and mid-size companies are choosing to exit the market rather than bear the cost of complying with new restrictions, could play out in states with more restrictive data requirements. And it is not just fledgling startups that have difficulty: the L.A. Times and Chicago Tribune have been unavailable to Europeans since the GDPR became effective because they had not completed compliance by the May deadline. In some cases companies have found it easier to block or exclude affected users than to comply with onerous data restrictions.


In some cases, state exemptions for companies below a certain number of users may also discourage growth at the margin. For example, the CCPA kicks in at 50,000 users. As a result, there is a large marginal cost to gaining the 50,001st user, because compliance with the standards is immediately required. This might lead certain newer platforms to cap their user bases or encourage innovators to look for loopholes to avoid the high cost of compliance early on.
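The threshold effect described above can be made concrete with a toy calculation. This is only a sketch: the 50,000-user trigger comes from the discussion of the CCPA, while the dollar figure for annual compliance cost is purely hypothetical.

```python
# Toy illustration of a user-count compliance threshold.
# The 50,000-user trigger reflects the CCPA discussion above;
# the compliance cost figure is a purely hypothetical assumption.
THRESHOLD = 50_000
HYPOTHETICAL_ANNUAL_COMPLIANCE_COST = 2_000_000  # assumed, in dollars

def compliance_cost(users: int) -> int:
    """Total annual compliance cost once a platform passes the threshold."""
    return HYPOTHETICAL_ANNUAL_COMPLIANCE_COST if users > THRESHOLD else 0

# The marginal cost of each additional user is zero everywhere except at
# the threshold crossing, where the entire fixed cost lands at once.
marginal_at_crossing = compliance_cost(50_001) - compliance_cost(50_000)
print(marginal_at_crossing)  # 2000000
```

Under these assumptions, the 50,001st user is astronomically more expensive than the 50,000th, which is exactly the incentive to cap growth or hunt for loopholes that the paragraph above describes.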


But even if states were able to create a sort of interstate compact that created an effectively uniform state level set of privacy laws, it would still be an inappropriate use of federalism for the state to govern data privacy due to its de facto impact on interstate commerce and the First Amendment.


The Internet by its very nature transcends state borders, and any state law aimed at privacy is likely to have national and global impact. This is not what federalism intends, and it is not just an issue for states like California that host a significant number of tech companies. If there are 50 different state laws, then new online intermediaries will have to develop 50 different compliance policies, or the most restrictive state will become the de facto standard for everyone left in the industry. As Jeff Kosseff points out, a world of 50 variations of the same privacy law, each keyed to users’ residence, would require out-of-state content creators to make significant changes to their existing systems and would place an undue burden on content creators and users alike.


Additionally, there are legitimate concerns that First Amendment rights to share information may conflict with the way privacy rights are enforced under proposed laws. Requiring otherwise lawful content to be removed silences the speaker. For example, if a friend posts a picture from a party that includes you and you ask that all your data be removed, is that data yours or your friend’s? To remove the data would silence a speaker and value one individual’s right to privacy over another’s right to speak. In some cases such tradeoffs may be reasonable, as with speech that is not merely offensive but causes clear harm to the person it is about, like revenge porn, but in many cases the answer is far less clear. Unfortunately, when faced with the crippling potential sanctions of such laws, many companies take a remove-first, ask-questions-later approach, as has been seen with copyright under the Digital Millennium Copyright Act (DMCA).


While there is a growing call for data privacy, there seems to be little willingness on the part of consumers or regulators to make the necessary tradeoffs. The so-called “privacy paradox” describes how people do not take the actions needed to match their stated desire for increased data privacy, and many willingly admit they prefer the convenience they receive in exchange for their data. If action on data privacy is necessary, it should occur at the federal level to avoid the patchwork problems that would result from inconsistent state laws. Any law must be narrowly tailored to respect the First Amendment rights of both users and platforms. We also must be aware of the tradeoffs between innovation and privacy when we see calls for a US GDPR. At the same time, we should be concerned that under the heavy burden of GDPR compliance, a more regulated Internet where only those who can afford to comply survive may replace the permissionless, startup-driven American version.


While federal preemption may be needed to address a patchwork of state privacy laws, we should be cautious and seek to avoid the mistakes of GDPR-style privacy laws that value individual privacy above innovation and knowledge sharing. Simple steps, such as providing more transparent information and notification requirements, are more likely to allow individuals to make the privacy choices that best fit their needs.


A privacy patchwork of state-based “solutions” is likely to create more problems than it solves. The real solutions to our current dilemmas will come from conversations about how we balance the rewards of innovation with individual preferences for privacy.

Published on August 15, 2018 08:43

August 10, 2018

Infrastructure Control as Innovation Regulation

The ongoing ride-sharing wars in New York City are interesting to watch because they signal the potential move by state and local officials to use infrastructure management as an indirect form of innovation control or competition suppression. It is getting harder for state and local officials to defend barriers to entry and innovation using traditional regulatory rationales and methods, which are usually little more than a front for cronyist protectionism schemes. Now that the public has increasingly enjoyed new choices and better services in this and other fields thanks to technological innovation, it is very hard to convince citizens they would be better off without more of the same.


If, however, policymakers claim that they are limiting entry or innovation based on concerns about how disruptive actors supposedly negatively affect local infrastructure (in the form of traffic or sidewalk congestion, aesthetic nuisance, deteriorating infrastructure, etc.), that narrative can perhaps make it easier to sell the resulting regulations to the public or, more importantly, the courts. Going forward, I suspect that this will become a commonly-used playbook for many state and local officials looking to limit the reach of new technologies, including ride-sharing companies, electric scooters, driverless cars, drones, and many others.


To be clear, infrastructure control is both (a) a legitimate state and local prerogative; and (b) something that has been used in the past to control innovation and entry in other sectors. But I suspect that this approach is about to become far more prevalent because a full-frontal defense of barriers to innovation is far more likely to face serious public and legal challenges. For example, limiting ride-sharing competition in NYC on the grounds that it hurts local taxi cartels is unappealing to citizens and the courts alike. So, NYC is now making it all about traffic congestion. Even if that regulatory rationale is bunk, it is a much harder narrative to counter in the court of public opinion or the courts of law. For that reason, we can expect more and more state and local governments to just flip the narrative about innovation regulation going forward in this fashion.


How should defenders of innovation and competition respond to state and local efforts to use infrastructure control as an indirect form of innovation regulation? First, call them out on it if it really is just naked protectionism by another name. Second, to the extent there may be something to their asserted concerns about infrastructure problems, propose alternative solutions that do not freeze innovation and new entry outright. The best approach is to borrow a page out of Coase’s playbook and use smarter pricing and property rights solutions. Or perhaps use unique funding mechanisms for new and better infrastructure that could accommodate ongoing entry and innovation.


For example, my Mercatus colleague Salim Furth recently penned a column (“Let Private Companies Pay for More Bike Lanes”) in which he noted how the electric scooter company Bird has offered cities a dollar a day per scooter to help build protected bike lanes. In doing so, Furth notes, Bird is:


offering to enter the long tradition of private provision of public goods. The original subway lines were private. Private institutions have frequently built or maintained public parks. Radio broadcasts, a textbook example of a public good, are largely private in the US. Companies often provide public entertainment because they benefit from the attraction.
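To put a rough number on what an offer like Bird’s could mean for a city, here is a back-of-the-envelope sketch in Python (the fleet size here is a hypothetical illustration, not a figure from Furth’s column):

```python
# Back-of-the-envelope: annual bike-lane funding from a per-scooter fee.
# Assumed (hypothetical) inputs: a 2,000-scooter fleet at $1 per scooter per day.
fleet_size = 2000
fee_per_scooter_per_day = 1.00

annual_funding = fleet_size * fee_per_scooter_per_day * 365
print(f"${annual_funding:,.0f} per year")  # $730,000 per year
```

Even a modest fleet, in other words, could fund a meaningful amount of protected bike-lane construction each year.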


In a similar way, Uber has already supported usage-based road pricing to alleviate congestion. We could imagine still other examples like this for emerging technology companies. Drone manufacturers could help create or pay for “aerial sidewalks” or easements so they can deliver goods more efficiently. Scooter and dockless bike companies could help pay for bike and scooter paths either directly or through promotional efforts. Driverless car fleet providers could help build or cover the cost of new parking garages or road improvements that would help make autonomous systems work better in local communities.


That is the pro-consumer, pro-innovation path forward. Hopefully, state and local officials will embrace such forward-looking reform ideas instead of seeking to indirectly control new entry and competition under the guise of infrastructure management.

Published on August 10, 2018 13:28

The Pacing Problem and the Future of Technology Regulation

[first published at The Bridge on August 9, 2018]


What happens when technological innovation outpaces the ability of laws and regulations to keep up?


This phenomenon is known as “the pacing problem,” and it has profound ramifications for the governance of emerging technologies. Indeed, the pacing problem is becoming the great equalizer in debates over technological governance because it forces governments to rethink their approach to the regulation of many sectors and technologies.


The Innovation Cornucopia

Had Rip Van Winkle woken up from his famous nap today, he’d be shocked by all the changes around him. At-home genetics tests, personal drones, driverless cars, lab-grown meats, and 3D-printed prosthetic limbs are just some of the amazing innovations that would boggle his mind. New devices and services are flying at us so rapidly that we sometimes forget that most did not even exist a short time ago. At this point, it feels like our smartphones have been in our lives forever, but even just a decade ago, very few of us had one. Likewise, plenty of people now regularly enjoy the benefits of the sharing economy, but ten years ago, Uber, Lyft, and Airbnb did not even exist. Most of the social networking platforms or online video and audio streaming services that we use today had not even been created 15 years ago. Back then, Netflix’s DVD mail subscription service seemed downright revolutionary.


With every innovation comes more questions about how the law should keep pace, or whether it even can. “There has always been a pacing problem,” observes Yale University bioethicist Wendell Wallach, author of A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. But what Wallach and many other scholars worry about today is that the pace of change has been kicked into overdrive, making it more difficult than ever for traditional legal schemes and regulatory mechanisms to stay relevant. Larry Downes refers to this as “The Law of Disruption.” In his 2009 book on this “law,” Downes showed how “technology changes exponentially, but social, economic, and legal systems change incrementally” and that this law was becoming “a simple but unavoidable principle of modern life.”


Moore’s Law Quickens the Pace

There are three primary reasons the pacing problem is such a force in our modern world. The root cause lies in the power of “combinatorial innovation,” which is driven by “Moore’s Law.”  The Information Revolution spawned a stunning array of new technological capabilities that build on top of one another in a symbiotic fashion. Think about the shared foundational elements of most modern inventions: microchips, sensors, digital code, big data, cloud computing, remote data storage, wireless networking and geolocation capabilities, machine-learning, cryptography, and more. Each of these underlying capabilities is becoming faster, cheaper, smaller, more powerful, and easier to find and use. Innovators are combining them as part of their ongoing search for new and better ways of doing things.


Moore’s Law powers these developments. Moore’s Law is the principle named after Intel co-founder Gordon E. Moore, who first observed in 1965 that “computing would dramatically increase in power, and decrease in relative cost, at an exponential pace” in coming years. Indeed, it has continued to do so for the past half century for many information technologies. A recent Technology Policy Institute white paper noted that “data transit prices fell from about $1200 per Mbps in 1998 to $0.02 per Mbps in 2017.”
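A quick calculation shows just how steep that decline is. The start and end prices come from the white paper quoted above; the implied annual rate is my own arithmetic:

```python
# Implied compound annual decline in data transit prices,
# from roughly $1,200 per Mbps (1998) to $0.02 per Mbps (2017).
price_1998 = 1200.0
price_2017 = 0.02
years = 2017 - 1998  # 19 years

annual_factor = (price_2017 / price_1998) ** (1 / years)
annual_decline_pct = (1 - annual_factor) * 100
print(f"Prices fell roughly {annual_decline_pct:.0f}% per year")  # roughly 44% per year
```

That is, prices were nearly cut in half every single year for almost two decades.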


These forces are now revolutionizing other sectors as “software eats the world” and innovators utilize these new technologies to address nearly every conceivable need and want. In the field of genetics, the biological equivalent of Moore’s Law is known as the “Carlson curve.” The past two decades have seen the cost of sequencing a human genome fall from over $100 million to under $1,000, a rate nearly three times faster than Moore’s Law.


What the Public Wants, the Public Gets

The second reason the pacing problem is accelerating is that the public wants it to! It is true that many people say they are uneasy with many emerging technologies. When new gadgets and services first gain attention, a “technopanic” attitude often ensues. That is unsurprising because, as others have noted, “fear has gone hand in hand with technological advancements throughout history.”


But societal attitudes toward technological change often shift rapidly. They do so even faster today as citizens quickly assimilate new tools into their daily lives and then expect that even more and better tools will be delivered tomorrow. As more people begin to realize how new technologies improve our lives in meaningful ways, it becomes extremely hard for policymakers to take those innovations away or even tell us not to expect better ones. This relationship between technological change and societal expectations acts as an extraordinarily powerful check on the ability of regulators to “roll back the clock” on innovative activities.


Broken Government Exacerbates the Problem

Finally, the pacing problem is becoming more acute because “demosclerosis” and “kludgeocracy” have taken hold within American government. Jonathan Rauch coined the term demosclerosis in his 1999 book Government’s End: Why Washington Stopped Working to describe “government’s progressive loss of the ability to adapt.” “[A]s layer is dropped upon layer,” he argued, “the accumulated mass becomes gradually less rational and less flexible.”


Instead of cleaning up old legalistic messes and adapting to the times, government solutions are more often clumsily cobbled together to patch past problems and create temporary solutions. Steven Teles refers to this as kludgeocracy. “The complexity and incoherence of our government often make it difficult for us to understand just what that government is doing,” Teles says. Kludgeocracy creates serious costs for individual citizens, governments themselves, and to our democratic systems more generally, he argues. Taken together, demosclerosis and kludgeocracy breed highly dysfunctional governments and make it even easier for the pacing problem to speed ahead.


Can Policymakers Adapt?

Regulators are not oblivious to the challenges posed by the pacing problem. “I have said more than once that innovation moves at the speed of imagination and that government has traditionally moved at, well, the speed of government,” remarked Michael Huerta, head of the Federal Aviation Administration, in a 2016 speech regarding drones. Shortly after Huerta made those comments, the Department of Transportation released a report on the regulation of driverless car technology which noted that “The speed with which [driverless cars] are advancing, combined with the complexity and novelty of these innovations, threatens to outpace the Agency’s conventional regulatory processes and capabilities.”


Food and Drug Administration (FDA) regulators have increasingly referenced the pacing problem when discussing the challenge of keeping up with new medical innovations. The New York Times recently asked Dr. Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research, how the agency planned to deal with hundreds of “rogue” stem cell treatment clinics. “There are hundreds and hundreds of these clinics,” he said. “We simply don’t have the bandwidth to go after all of them at once.”


The pacing problem has even crept into antitrust enforcement. The US Department of Justice (DOJ) sought to break up Microsoft in the late 1990s, but as the legal proceedings dragged on through the early 2000s, the market moved and made the DOJ’s case moot. Google Chrome and Mozilla Firefox emerged as legitimate competitors to Microsoft’s Internet Explorer without regulatory remedy. In the end, Microsoft reached a settlement with the DOJ that fell far short of the government’s original ambitions to bust up the firm, all because the market moved much faster than the regulators did. More recent antitrust actions in the US and EU also suffer from the pacing problem. Multi-year antitrust investigations reach conclusions that don’t reflect market trends in the intervening years and offer remedies that may be “too little, too late,” especially in the information technology sector.


Is the Pacing Problem Really the Pacing Benefit?

What should policymakers do in light of these new challenges? The extremes will not work. Lawmakers or regulators cannot simply double-down on the lethargic and unwieldy technocratic regulatory schemes of the past. Command-and-control tactics are not going to be effective in an age when technology evolves in a quicksilver fashion. In a world where “innovation arbitrage” is easier than ever, repressive crackdowns on new tech will often backfire. Evasive entrepreneurs will often move to those jurisdictions where their innovative acts are treated more hospitably. That, too, exacerbates the pacing problem.


From the perspective of many innovation advocates, this will make it seem like the pacing problem is more like the pacing benefit. Generally speaking, that intuition is sound. Innovation is the fundamental driver of human betterment. We need more “moonshots”—“radical but feasible solutions to important problems”—to ensure that current and future generations enjoy more choices, greater mobility, increased wealth, better health, and longer lifespans. We don’t want archaic regulatory schemes and regimes holding that back.


Constructive Solutions

But policymakers will not abandon oversight of emerging technologies altogether, nor should we want them to. The potential harms associated with some new technologies could be significant enough that a certain degree of regulatory oversight will be required. But the pacing problem means the old, inflexible, top-down approaches will need to be discarded and that the administrative state itself must become more entrepreneurial.


In a forthcoming law review article entitled, “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future,” Jennifer Skees, Ryan Hagemann, and I discuss how “soft law” mechanisms—multi-stakeholder processes, industry best practices and standards, workshops, agency guidance, and more—can help fill the governance gap as the pacing problem accelerates. Many agencies are already tapping soft law tools to help guide the development of new technologies such as driverless cars, drones, the Internet of Things, mobile medical applications, artificial intelligence, and others. In fact, we argue that soft law has already become the dominant form of technological governance for emerging tech in the US.


Critics might decry soft law as either being too lax (and open to private abuse) or too informal (and open to government abuse), but the pacing problem makes both arguments increasingly irrelevant. We need a new governance vision for the technological age. Our new governance systems must be more flexible and adaptive than the heavy-handed regulatory regimes that preceded them.


___________________


Related Reading



Adam Thierer, “Evasive Entrepreneurialism and Technological Civil Disobedience: Basic Definitions,” The Bridge, July 20, 2018.
Adam Thierer, “Making the World Safe for More Moonshots,” The Bridge, February 5, 2018.
Jennifer Skees, “Do You Need a License to Innovate?” The Bridge, June 29, 2018.
Andrea O’Sullivan & Adam Thierer, “3D Printers, Evasive Entrepreneurs and the Future of Tech Regulation,” The Bridge, August 1, 2018.
Adam Thierer, “Does ‘Permissionless Innovation’ Even Mean Anything?” Technology Liberation Front, May 18, 2017.
Adam Thierer, “Wendell Wallach on the Challenge of Engineering Better Technology Ethics,” Technology Liberation Front, April 20, 2016.
Published on August 10, 2018 05:48

August 7, 2018

FCC’s Ajit Pai on Importance of Permissionless Innovation Vision

FCC Chairman Ajit Pai recently delivered an excellent speech at the Resurgent Conference in Austin, TX. In it, he stressed the importance of adopting a permissionless innovation policy vision to ensure a bright future for technology, economic growth, and consumer welfare. The whole thing is worth your time, but the last two paragraphs make two essential points worth highlighting.


Pai correctly notes that we should reject the sort of knee-jerk hysteria or technopanic mentality that sometimes accompanies new technologies. Instead, we should have some patience and humility in the face of uncertainty and be open to new ideas and technological creations.


“Here’s the bottom line,” Pai concludes:


Whenever a technological innovation creates uncertainty, some will always have the knee-jerk reaction to presume it’s bad. They’ll demand that we do whatever’s necessary to maintain the status quo. Strangle it with a study. Call for a commission. Bemoan those supposedly left behind. Stipulate absolute certainty. Regulate new services with the paradigms of old.


But we should resist that temptation. “Guilty until proven innocent” is not a recipe for innovation, and it doesn’t make consumers better off. History tells us that it is not preemptive regulation, but permissionless innovation made possible by competitive free markets that best guarantees consumer welfare. A future enabled by the next generation of technology can be bright, if only we choose to let the light in.


Read the whole thing here. Good stuff. I also appreciate him citing my work on the topic, which you can find in my last book and other publications.

Published on August 07, 2018 10:34

August 6, 2018

How Should Privacy Be Defined? A Roadmap

Privacy is an essentially contested concept. It evades a clear definition, and when it is defined, scholars do so inconsistently. So, what are we to do now with this fractured term? Ryan Hagemann suggests a bottom-up approach. Instead of beginning from definitions, we should be building a folksonomy of privacy harms:


By recognizing those areas in which we have an interest in privacy, we can better formalize an understanding of when and how it should be prioritized in relation to other values. By differentiating the harms that can materialize when it is violated by government as opposed to private actors, we can more appropriately understand the costs and benefits in different situations.


Hagemann aims to route around definitional problems by exploring the spaces where our interests intersect with the concept of privacy, in our relations to government, to private firms, and to other people. It is a subtle but important shift in outlook that is worth exploring.


Hagemann’s colleague Will Wilkinson laid out the benefits of this kind of philosophical exercise, which comes to me via Paul Crider. Wilkinson traces it back to the very beginnings of liberal thought, which takes a bit to wind up:


Thomas Reid, the Scottish Enlightenment philosopher, pointed out that there are two ways to construct an account of what it means to really know something, rather than just believing it to be true. The first way is to develop an abstract theory of knowledge—a general criterion that separates the wheat of knowledge from the chaff of mere opinion—and then see which of our opinions qualify as true knowledge. Reid noted that this method tends to lead to skepticism, because it’s hard, if not impossible, to definitively show that any of our opinions check off all the boxes these sort of general criteria tend to set out.


That’s why Descartes ends up in a pickle and Hume leaves us in a haze of uncertainty. It’s all a big mistake, Reid said, because the belief that I have hands, for example, is on much firmer ground than any abstract notions about the nature of true knowledge that I might dream up. If my theory implies that I don’t really know that I have hands, that’s a reason to reject the theory, not a reason to be skeptical about the existence of my appendages.


According to Reid, a better way to come up with a theory of knowledge is to make a list of the things we’re very sure that we really know. Then, we see if we can devise a coherent theory that explains how we know them.


The 20th century philosopher Roderick Chisholm called these two ways of theorizing about knowledge “methodism”—start with a general theory, apply it, and see what, if anything, counts as knowledge according to the theory—and “particularism”—start with an inventory of things that we’re sure we know and then build a theory of knowledge on top of it.


Hagemann is right to build privacy on the particularism of Wilkinson, Reid and Chisholm. Given the changing nature of technology, we should take a regular “inventory of things that we’re sure we know” about privacy and then build theories on top of it.


Indeed, privacy scholarship finds its genesis in this method. While many have gotten hung up on the rights talk in the “Right to Privacy”, Warren and Brandeis actually aim “to consider whether the existing law affords a principle which can properly be invoked to protect the privacy of the individual; and, if it does, what the nature and extent of such protection is.” The article looks to previous law to construct a principle for “recent inventions and business methods.” This is particularism applied to privacy.


Only a handful of court cases are actually reviewed in the article, the most important of which is Marian Manola v. Stevens & Myers. Marian Manola was a classically trained comic opera prima donna who had a string of altercations with her company, where Stevens was the manager. About a year before the case, the New York Times carried a story describing a dispute between Manola and another actor in the McCaull Opera Company. She refused to go on stage after the actor pushed her, and Benjamin Stevens apparently “ignored her until she returned to her duty.” About a year later, Stevens set up the photographer Myers in a box as a stunt to boost sales. Manola sued both of them. Today, the case would be cited in the right-of-publicity literature.


Still, Warren and Brandeis were trying to survey the land of privacy harms and then build a principle on top of it.


Whether by particularism or methodism, these ways of constructing knowledge frame the moral ground, creating a field where privacy advocates and privacy scholars can converse. What unites these two groups, then, is their common rhetoric about the contours of privacy harms. And so, what constitutes a harm is still the central question in privacy policy.

Published on August 06, 2018 05:00

August 3, 2018

3D Printers, Evasive Entrepreneurs and the Future of Tech Regulation

By Andrea O’Sullivan and Adam Thierer (First published at The Bridge on August 1, 2018.)


Technology is changing the ways that entrepreneurs interact with, and increasingly get away from, existing government regulations. The ongoing legal battles surrounding 3D-printed weapons provide yet another timely example.


For years, a consortium of techies called Defense Distributed has sought to secure more protections for gun owners by making the code allowing someone to print their own guns available online. Rather than taking their fight to Capitol Hill and spending billions of dollars lobbying in potentially fruitless pursuits of marginal legislative victories, Defense Distributed ties their fortunes to the mast of technological determinism and blurs the lines between regulated physical reality and the open world of cyberspace.


The federal government moved fast, with gun control advocates like Senator Chuck Schumer (D-NY) and former Representative Steve Israel (D-NY) proposing legislation to criminalize Defense Distributed’s activities. They failed.


Plan B in the efforts to quash these acts of 3D-printing disobedience was to classify the computer-aided design (CAD) files that Defense Distributed posted online as a kind of internationally controlled munition. The US State Department engaged in a years-long legal brawl over whether or not Defense Distributed violated established International Traffic in Arms Regulations (ITAR). The group pulled down the files while the issue was examined in court, but the code had long since been uploaded to sharing sites like The Pirate Bay. The files have also been available on the Internet Archive for many years. The CAD, if you will excuse the pun, is out of the bag.


In a surprising move, the Department of Justice suddenly moved to drop the suit and settle with Defense Distributed last month. It agreed to cover the group’s legal fees and cease its attempt to regulate code already easily accessible online. While no legal precedent was set, since this was merely a settlement, it is likely that the government realized that its case would be unwinnable.


Gun control advocates did not react well to this legal retreat. This week, a group of eight state attorneys general (AGs) filed a lawsuit against the Trump administration and Defense Distributed to undo the group’s freedom to distribute their code online. Part of their argument is that the administration violated the Administrative Procedure Act as well as the Tenth Amendment by “infringing on states’ rights to regulate firearms.” But the move looks more like a last-ditch effort by the AGs to exert control. Yesterday, a federal judge issued an injunction against Defense Distributed to prevent the files from being uploaded online. But as we mentioned, the files are and have been available across the internet for years now.


The case faces long odds. After all, they are essentially trying to regulate speech, which raises some clear First Amendment flags. This is precisely why the Department of Justice backed away from the case against Defense Distributed, and it echoes the federal government’s previous attempts to crack down on strong encryption practices more than two decades ago. Then, like now, a group of security-minded technologists wanted to bring defense technologies that were still controlled by ITAR regulations to the masses. And then, like now, activists correctly argued that any attempt to stop their online exchanges amounted to an illegal barrier to free speech in the United States. Besides, there wasn’t much that the government could do to turn back the tide of information that had already dispersed across the wide expanse of the web.


As Cody Wilson, the founder of Defense Distributed put it: “This has been a continuous process of different levels of authority figures trying to stop it from happening and thus allowing it to happen…Of course we are going to succeed—because you all are trying to stop me. That seemed natural and ended up being true.”


Cody Wilson and Defense Distributed are not the only ones using additive manufacturing to change the world and challenge public policy in the process. The “maker” revolution is a phenomenon that is widespread and growing. A 2016 Mercatus journal article on “Guns, Limbs, and Toys: What Future for 3D Printing?” discussed several examples of how additive manufacturing is making the governance of various emerging technologies quite challenging.


For example, “e-NABLE,” which is short for “Enabling the Future,” is a volunteer effort that brings together individuals from across the globe who design 3D-printed prosthetics for individuals (especially children) with limb deficiencies. Volunteers share open source blueprints and other information on various websites with others across the world. Then, they use their own printers to fabricate the limbs. Other entrepreneurs are creating custom 3D-printed orthoses to help children with cerebral palsy walk comfortably and without the aid of crutches. Off-the-shelf solutions were often ineffective and uncomfortable for many kids, which led some parents to craft custom-made orthoses for their own children to help them walk.


These “amateur” prosthetics are already being widely distributed today and are helping to save many individuals and families significant amounts of money, assuming they could have afforded “professional” prosthetics at all. While prosthetics are medical devices in a traditional regulatory sense, no one making their own is going to the FDA to ask for permission or a right to try new 3D-printed limbs. Instead, they are just going ahead and making new prosthetics for people in need. How should we regulate all this bottom-up innovation by average citizens (especially considering how much of it is non-commercial in character)?


Another interesting example from 2016 involved Amos Dudley, a 23-year-old college student with no prior dentistry experience who used a 3D printer and laser scanner at his university to make his own orthodontics for just $60. Dudley’s DIY plastic braces were a dangerous experiment that could have put him, or others, at risk if they followed his lead. But what should the law say about people like Dudley or the eNable innovators who are creating their own specialized medical devices in an open source, non-commercial fashion?


For a more radical example, we can look to the Four Thieves Vinegar Collective, a self-styled techno-anarchist collective dedicated to open sourcing and manufacturing alternatives to costly pharmaceutical medicines. Four Thieves harnesses the combined research output of distributed volunteer chemists, physicists, and programmers to compile and publish step-by-step instructions on how to reverse engineer treatments for maladies like AIDS and anaphylaxis. The group offers downloadable instructions on how to create what it calls the Apothecary MicroLab, a kind of hacked-together at-home compounding kit. The FDA is aware of, and unamused by, Four Thieves’ activities; yet it finds its hands tied by the fact that the group hasn’t actually done anything illegal in merely exercising its free speech rights.


These are examples of what MIT economist Eric von Hippel calls “free innovation,” or “innovations developed and given away by consumers as a ‘free good.’” Another term for this is “social entrepreneurialism.” As the name implies, an underlying social goal or mission drives social entrepreneurship.


For example, our Mercatus Center colleagues have written about how social entrepreneurs help others in need in their community following disasters. Social entrepreneurial activities are not typically in pursuit of compensation or profit, but that need not always be the case, and the distinction between social and economic entrepreneurialism is sometimes quite blurry.


A great deal of additive manufacturing innovation today springs from a multitude of such “grassroots” or “household” efforts. As this sort of “evasive entrepreneurialism” spreads, it will challenge regulatory regimes that are not equipped to cope with the astonishing pace of change occurring in many technology markets today.


This does not necessarily mean that governments will be completely powerless to stop highly decentralized, bottom-up innovation of this sort. For example, with firearms regulation, a gun is still a gun, regardless of how it is manufactured. Laws governing how and where firearms are carried and used will still be in effect. But “point-of-sale” type regulatory prohibitions will not work as well, obviously.


Likewise, efforts to limit the free flow of information about 3D-printed designs will be almost impossible to enforce once blueprints are available on the internet through peer-to-peer distribution mechanisms and platforms. Finally, it would not make sense for policymakers to affix liability on the makers or distributors of 3D printers because this is a general purpose technology with many other non-controversial uses.


This means that regulation should remain focused on the user and uses of firearms or other 3D-printed devices, regardless of how they are manufactured. There may also be some other steps that governments can take to educate the public about the potential risks associated with these and other examples of free innovation and social entrepreneurship.


But policymakers should also understand that many of these bottom-up innovations are being created or used by the average citizens because they fill a public need that many felt was going unmet. Entrepreneurial efforts tend to be hard to bottle up when enough demand exists for action, and the tools are becoming increasingly decentralized, low-cost, and easy to use. Instead of trying to put those technological genies back in their bottles, we are going to need to figure out how to coexist with them.

Published on August 03, 2018 06:06

August 2, 2018

The Definition of Technology Matters For Tech Policy And Growth

Dan Wang has a new post titled “How Technology Grows (a restatement of definite optimism)” and it is characteristically good. For tech policy wonks and policymakers, put it in your queue. The essay clocks in at 7500 words, but there’s a lot to glean from the piece. Indeed, he puts into words a number of ideas I’ve been wanting to write about. To set the stage, he begins first by defining what we mean by technology:


Technology should be understood in three distinct forms: as processes embedded into tools (like pots, pans, and stoves); explicit instructions (like recipes); and as process knowledge, or what we can also refer to as tacit knowledge, know-how, and technical experience. Process knowledge is the kind of knowledge that’s hard to write down as an instruction. You can give someone a well-equipped kitchen and an extraordinarily detailed recipe, but unless he already has some cooking experience, we shouldn’t expect him to prepare a great dish.


As he rightly points out, the United States has, for various reasons, set aside the focus on process knowledge. This is especially evident in our manufacturing base:


When firms and factories go away, the accumulated process knowledge disappears as well. Industrial experience, scaling expertise, and all the things that come with learning-by-doing will decay. I visited Germany earlier this year to talk to people in industry. One point Germans kept bringing up was that the US has de-industrialized itself and scattered its production networks. While Germany responded to globalization by moving up the value chain, the US manufacturing base mostly responded by abandoning production.


The US is an outlier among rich countries when it comes to manufacturing exports. It needs improvement.


Two comments on this.


First off, I couldn’t agree more with Dan’s emphasis on the localization of knowledge. Local knowledge networks made Silicon Valley what it is. By far the best dive into this topic is still AnnaLee Saxenian’s “Regional Advantage,” which charts the computer industry’s genesis in both Silicon Valley and along Boston’s Route 128. As she details throughout the book, the culture of work and the resulting firm structures in Silicon Valley differed significantly from those in Boston, giving the region critical advantages that made it the preeminent hub of technology development.


When I read it a couple of years back, I highlighted the importance of regional knowledge hubs:


As a side comment, Saxenian mentions that many Silicon Valley workers were far more rooted in the region than workers elsewhere. While the company man of the 1950s might move among the various arms of the firm to gain experience, often in different states, in the Valley you would just move down the street. To me, that speaks volumes about the importance of regional knowledge hubs.


Without them, an industry can lose dominance.


Green tech is the best and most recent example. Some have lamented that the US isn’t in the lead in producing photovoltaic technology and that we import too much of it from China. Yet China doesn’t have a labor or productivity advantage here. According to research, it comes down to scale and supply-chain management:


We find that the historical price advantage of a China-based factory relative to a U.S.-based factory is not driven by country-specific advantages, but instead by scale and supply-chain development. Looking forward, we calculate that technology innovations may result in effectively equivalent minimum sustainable manufacturing prices for the two locations. In this long-run scenario, the relative share of module shipping costs, as well as other factors, may promote regionalization of module-manufacturing operations to cost-effectively address local market demand. Our findings highlight the role of innovation, importance of manufacturing scale, and opportunity for global collaboration to increase the installed capacity of PV worldwide.


Second, Dan looks to Germany as a model of high-tech manufacturing, but there are some caveats. Most of Germany’s manufacturing prowess comes from the small- and medium-sized firms known as the Mittelstand. And the reason the Mittelstand dominates seems to come from the cozy relationship German manufacturing has with the Fraunhofer Society for the Advancement of Applied Research, often just called the Fraunhofer Institutes. Sixty-nine of these research institutes are scattered throughout Germany and work on applied optics, chemicals, high-speed dynamics, materials, and wind energy, to name just a few areas.


I haven’t done a deep dive yet into Dan’s writings to see if he has looked at this important link between research and output, but I hope he does. There is a lot to be learned from the German model and I am still hopeful that the lessons could be applied to US policy.

Published on August 02, 2018 12:22

July 31, 2018

Why Did The Facebook Stock Drop Last Week? Some Economics Of Decision-making

A curious thing happened last week. Facebook’s stock, which had seemed to weather the 2018 controversies, took a beating.


In the Washington Post, Craig Timberg and Elizabeth Dwoskin explained that the stock market drop was representative of a larger wave:


The cost of years of privacy missteps finally caught up with Facebook this week, sending its market value down more than $100 billion Thursday in the largest single-day drop in value in Wall Street history.


Jeff Chester of the Center for Digital Democracy piled on, describing the drop as “a privacy wake-up call that the markets are delivering to Mark Zuckerberg.”


But the downward pressure was driven by more fundamental changes. Simply put, Facebook missed its earnings target. It is important, though, to peer into why the company didn’t meet that target.


As Zuckerberg noted on the earnings call:


Now, perhaps one of the most important things we’ve done this year to bring people closer together is to shift News Feed to encourage connection with friends and family over passive consumption of content. We’ve launched multiple changes over the last half to News Feed that encourage more interaction and engagement between people, and we plan to keep launching more like this.


Later in the call, Facebook CFO David Wehner signaled that total revenue growth would decelerate due to the choices Zuckerberg made:


We plan to grow and promote certain engaging experiences like Stories that currently have lower levels of monetization, and we are also giving people who use our services more choices around data privacy, which may have an impact on our revenue growth.


Moreover, the costs would continue to rise as they also embedded more privacy and security features into the platform:


Turning now to expenses; we continue to expect that full-year 2018 total expenses will grow in the range of 50% to 60% compared to last year. In addition to increases in core product development and infrastructure, this growth is driven by increasing investment in areas like safety and security, AR/VR, marketing, and content acquisition. Looking beyond 2018, we anticipate that total expense growth will exceed revenue growth in 2019.


So, Facebook got hammered because it invested more in privacy and security, while also transitioning to less revenue-generating sources of content. At first glance, this might seem to be a signal from the market not to invest in these sorts of changes. Indeed, as Blake Reid noted,


They got punished by the market for investing in less-monetized content and spending more on privacy and security. Doesn’t that send a signal to not do that?


Yes and no.


It is widely accepted that corporations often adopt short-term strategies that attempt to maximize earnings. As one well-cited survey of financial executives explained, “Because of the severe market reaction to missing an earnings target, we find that firms are willing to sacrifice economic value in order to meet a short-run earnings target.”


This preference for near-term payoffs seems to be a common feature among humans. People tend to prefer small rewards that occur now over much larger rewards that come later. This is known as hyperbolic discounting, and it helps explain why households under-save, why smokers find it tough to quit, and why firms prefer near-term earnings.


Pulling together the insights from finance and behavioral psychology, two economists pointed out “that a firm exhibiting hyperbolic discounting preferences faces an underinvestment problem, i.e. there exists another feasible investment plan that improves all periods’ present values.” Conversely, a firm exhibiting time-invariant preferences would invest, even if it meant a short-term hit.
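To make the underinvestment point concrete, here is a minimal sketch of the standard quasi-hyperbolic (“beta-delta”) model economists often use for present bias. The project numbers and parameter values are purely illustrative, not drawn from the paper cited above:

```python
def exponential_npv(cashflows, delta=0.95):
    """Time-consistent discounting: value = sum of delta^t * c_t."""
    return sum(c * delta**t for t, c in enumerate(cashflows))

def quasi_hyperbolic_npv(cashflows, beta=0.6, delta=0.95):
    """Present-biased discounting: every future period's cashflow
    gets an extra discount factor beta, so the present looms large."""
    future = sum(c * delta**t for t, c in enumerate(cashflows[1:], start=1))
    return cashflows[0] + beta * future

# A project that costs 10 today and pays 15 next period.
project = [-10, 15]

print(exponential_npv(project))       # positive: a time-consistent firm invests
print(quasi_hyperbolic_npv(project))  # negative: a present-biased firm passes
```

The same project is profitable under time-consistent preferences (NPV of 4.25) but rejected under present-biased ones (NPV of about -1.45), which is exactly the kind of value-destroying underinvestment the survey of executives describes.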


Facebook is probably playing the long game. Zuckerberg has a controlling stake in the company and wants to build value over the long term. And if these changes lead to more durability, that is, if users stay on the site over the next 5 or 10 years, then it makes sense to take the short-term hit. Better that than a massive exodus at some point down the road.


In much the same way, Amazon has been criticized for years for spending too much on company investments to the detriment of returns. But Amazon’s Q2 2018 numbers came in this week, and they were double expectations. Bezos’s 1997 shareholder letter laid out the strategy: “We believe that a fundamental measure of our success will be the shareholder value we create over the long term.” Bezos, too, is more concerned with building for the long term.


I’m working on a more formal model of this, but I think there are reasons to believe that Facebook would be especially sensitive to privacy concerns. And Facebook’s earnings miss also points to a real concern about the long-term viability of the platform.

Published on July 31, 2018 15:15
