Adam Thierer's Blog, page 42

August 13, 2014

Study: No, US Broadband is not Falling Behind

There’s a small but influential number of tech reporters and scholars who seem to delight in making the US sound like a broadband and technology backwater. A new Mercatus working paper by Roslyn Layton, a PhD fellow at a research center at Aalborg University, and Michael Horney, a researcher at the Free State Foundation, counters that narrative and highlights data from several studies showing that the US is at or near the top in important broadband categories.


For example, per Pew and ITU data, the vast majority of Americans use the Internet, and the US is second in the world in data consumption per capita, trailing only South Korea. Pew finds that, for those who are not online, the leading reasons are lack of usability and the Internet’s perceived lack of benefits. High cost, notably, is not the primary reason people stay offline.


I’ve noted before some of the methodological problems in studies claiming the US has unusually high broadband prices. In what I consider their biggest contribution to the literature, Layton and Horney highlight another broadband cost frequently omitted in international comparisons: the mandatory media license fees many nations impose on broadband and television subscribers.


These fees can add as much as $44 to the monthly cost of broadband. When these fees are included in comparisons, American prices are frequently an even better value. In two-thirds of European countries and half of Asian countries, households pay a media license fee on top of the subscription fees to use devices such as connected computers and TVs.


…When calculating the real cost of international broadband prices, one needs to take into account media license fees, taxation, and subsidies. …[T]hese inputs can materially affect the cost of broadband, especially in countries where broadband is subject to value-added taxes as high as 27 percent, not to mention media license fees of hundreds of dollars per year.
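To make that arithmetic concrete, here is a minimal sketch of the kind of adjustment the authors have in mind. The advertised prices below are hypothetical; the $44 monthly license fee and the 27 percent VAT are the high-end figures cited above:

```python
# Hypothetical "advertised" vs. all-in broadband price comparison.
# Advertised prices are made up for illustration; the $44/month media
# license fee and the 27% VAT are the high-end figures from the paper.

def effective_monthly_cost(advertised, vat_rate=0.0, license_fee=0.0):
    """Advertised subscription price plus value-added tax and any
    mandatory media license fee."""
    return advertised * (1 + vat_rate) + license_fee

us_cost = effective_monthly_cost(45.00)  # no VAT, no license fee
eu_cost = effective_monthly_cost(30.00, vat_rate=0.27, license_fee=44.00)

print(f"US: ${us_cost:.2f}/mo   EU: ${eu_cost:.2f}/mo")
# US: $45.00/mo   EU: $82.10/mo -- the nominally cheaper subscription
# becomes the costlier service once the omitted charges are added back.
```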


US broadband providers, the authors point out, have priced broadband relatively efficiently for heterogeneous uses–there are low-cost, low-bandwidth connections available as well as more expensive, higher-quality connections for intensive users.


Further, the US is well-positioned for future broadband use. Unlike consumers in many wealthy countries, Americans typically have access to broadband both from a telephone company (like AT&T DSL or U-verse) and from a local cable provider. Competition between ISPs has meant steady investment in network upgrades, despite the 2008 global recession. The story is very different in much of Europe, where broadband investment, as a percentage of the global total, has fallen noticeably in recent years. US wireless broadband is also a bright spot: 97% of Americans can subscribe to 4G LTE while only 26% in the EU have access (which partially explains, by the way, why Europeans often pay less for mobile subscriptions–they’re using an inferior product).


There’s a lot to praise in the study, and it’s necessary reading for anyone looking to understand how US broadband policy compares to other nations’. The fashionable arguments that the US is at risk of falling behind technologically were never convincing–the US is THE place to be if you’re a tech company or startup, for one–but Layton and Horney expose the weakness of that narrative with data and rigor.



August 8, 2014

Is STELA the Vehicle for Video Reform?

Even though few things are getting passed by this Congress, the pressure is on to reauthorize the Satellite Television Extension and Localism Act (STELA) before it expires at the end of this year. Unsurprisingly, many hope this “must pass” bill will be the vehicle for broader video reform. Getting video law right is important for our content-rich world, but the discussion needs to expand much further than STELA.


Over at the American Action Forum, I explore a bit of what would be needed, and just how deeply rooted the problems are:


The Federal Communications Commission’s (FCC) efforts to spark localism and diversity of voices in broadcasting stand in stark contrast to the relative lack of regulation governing non-broadcast content providers like Netflix and HBO, which have revolutionized delivery and upped the demand for quality content. These amorphous social goals have also limited broadcasters. Without any consideration for the competitive balance in a local market, broadcasters are limited in what they can own, saddled with various programming restrictions, and subject to countless limitations on the use of their spectrum. Moreover, the FCC has sought to outlaw deals between broadcasters who negotiate jointly for services and ads.


In the effort to support specific “public interest” goals, the FCC has implemented regulations that have cabined both broadcasters and pay-TV distributors. These regulations forced companies to develop in prescribed ways, which in turn prompted further regulatory action when they tried to innovate. Speaking about this cat-and-mouse game in the financial sector, Professor Edward Kane termed the relationship the “regulatory dialectic.”


But unwrapping the regulatory dialectic in video law will require a vehicle far more expansive than STELA. Ultimately, I conclude,


Both the quality of programming and the means of accessing it have undergone dramatic changes in the past two decades, but the regulations have not. Consumer preferences and choices are shifting, and that shift needs to be met by alterations in the regulatory regime. STELA is one part of the puzzle, but as in so many other areas of telecommunications law, a comprehensive look at the body of laws governing video is needed. It is increasingly clear that the laws governing programming must be updated to meet the 21st-century marketplace.


On this site especially, there has been a vigorous debate on just what this framework would entail. For a more comprehensive look, check out:



Geoffrey Manne’s testimony on STELA before the House Energy and Commerce Committee;
Adam Thierer’s and Brent Skorup’s paper on video law entitled, “Video Marketplace Regulation: A Primer on the History of Television Regulation and Current Legislative Proposals”;
Ryan Radia’s blog post entitled, “A Free Market Defense of Retransmission Consent”;
Fred Campbell’s white paper on the “Future of Broadcast Television,” as well as his various posts on the subject;
And Hance Haney’s posts on video law.


August 6, 2014

You know how IP creates millions of jobs? That’s pseudoscientific baloney

In 2012, the US Chamber of Commerce put out a report claiming that intellectual property is responsible for 55 million US jobs—46 percent of private sector employment. This is a ridiculous statistic if you merely stop and think about it for a minute. But the fact that the statistic is ridiculous doesn’t mean it won’t continue to circulate around Washington. For example, last year Rep. Marsha Blackburn cited it uncritically in an op-ed in The Hill.


In a new paper from Mercatus (here’s the PDF), Ian Robinson and I expose this statistic, and others like it, as pseudoscience. They are based on incredibly shoddy and misleading reasoning. Here’s the abstract of the paper:


In the past two years, a spate of misleading reports on intellectual property has sought to convince policymakers and the public that implausibly high proportions of US output and employment depend on expansive intellectual property (IP) rights. These reports provide no theoretical or empirical evidence to support such a claim, but instead simply assume that the existence of intellectual property in an industry creates the jobs in that industry. We dispute the assumption that jobs in IP-intensive industries are necessarily IP-created jobs. We first explore issues regarding job creation and the economic efficiency of IP that cut across all kinds of intellectual property. We then take a closer look at these issues across three major forms of intellectual property: trademarks, patents, and copyrights.


As they say, read the whole thing, and please share with your favorite IP maximalist.



July 17, 2014

New York’s financial regulator releases a draft of ‘BitLicense’ for Bitcoin businesses. Here are my initial thoughts.

Today the New York Department of Financial Services released a proposed framework for licensing and regulating virtual currency businesses. Its “BitLicense” proposal [PDF] is the culmination of a yearlong process that included widely publicized hearings.


My initial reaction to the rules is that they are a step in the right direction. Whether one likes it or not, states will want to license and regulate Bitcoin-related businesses, so it’s good to see that New York engaged in a thoughtful process, and that the rules they have proposed are not out of the ordinary.


That said, I’m glad DFS will be accepting comments on the proposed framework because there are a few things that can probably be improved or clarified. For example:




Licensees would be required to maintain “the identity and physical addresses of the parties involved” in “all transactions involving the payment, receipt, exchange or conversion, purchase, sale, transfer, or transmission of Virtual Currency.” That seems a bit onerous and unworkable.


Today, if you have a wallet account with Coinbase, the company collects and keeps your identity information. Under New York’s proposal, however, they would also be required to collect the identity information of anyone you send bitcoins to, and anyone that sends bitcoins to you (which might be technically impossible). That means identifying every food truck you visit, and every alpaca sock merchant you buy from online.
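To see why this may be technically impossible, consider what a single transaction record would have to contain under the rule. This is a minimal sketch of the compliance problem, not the regulation’s actual text, and every class and field name here is hypothetical:

```python
# Sketch of the per-transaction record the proposed rule appears to
# demand: identity and physical address of *all* parties, not just the
# licensee's own customer. Names and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Party:
    name: str              # legal identity
    physical_address: str  # street address

@dataclass
class TransactionRecord:
    amount_btc: float
    sender: Party    # the licensee's own customer: identifiable
    receiver: Party  # often only a pseudonymous Bitcoin address

# A wallet service knows its own customer, but a raw Bitcoin address
# reveals neither a name nor a street address, so the receiver fields
# are generally unknowable for payments sent to arbitrary third parties.
record = TransactionRecord(
    amount_btc=0.05,
    sender=Party("Alice Example", "123 Main St, New York, NY"),
    receiver=Party(name="?", physical_address="?"),
)
```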


The same would apply to merchant service companies like BitPay. Today they identify their merchant account holders–say a coffee shop–but under the proposed framework they would also have to identify all of their merchants’ customers–i.e. everyone who buys a cup of coffee. Not only is this potentially unworkable, but it also would undermine some of Bitcoin’s most important benefits. For example, the ability to trade across borders, especially with those in developing countries who don’t have access to electronic payment systems, is one of Bitcoin’s greatest advantages and it could be seriously hampered by such a requirement.


The rationale for creating a new “BitLicense” specific to virtual currencies was to design something that took the special characteristics of virtual currencies into account (something existing money transmission rules didn’t do). I hope the rule can be modified so that it can come closer to that ideal.




The definition of who is engaged in “virtual currency business activity,” and thus subject to the licensing requirement, is quite broad. It has the potential to swallow up online wallet services, like Blockchain, that merely provide software to their customers rather than administering custodial accounts. It might also sweep in non-financial services like Proof of Existence, which provides a notary service on top of the Bitcoin block chain, as well as other services that use cryptocurrency tokens to track assets like domain names.




The rules would also require a license of anyone “controlling, administering, or issuing a Virtual Currency.” While I take this to apply to centralized virtual currencies, some might interpret it to also mean that you must acquire a license before you can deploy a new decentralized altcoin. That should be clarified.




In order to grow and reach its full potential, the Bitcoin ecosystem needs regulatory certainty from dozens of states. New York is taking a leading role in developing that regulatory structure, and the path it chooses will likely influence other states. This is why we have to make sure that New York gets it right. They are on the right track, and I look forward to engaging in the comment process to help them get all the way there.



June 26, 2014

SCOTUS Rules in Favor of Freedom and Privacy in Key Rulings

Yesterday, June 25, 2014, the U.S. Supreme Court issued two important opinions that advance free markets and free people: Riley v. California and ABC v. Aereo. I’ll soon have more to say about the latter case, Aereo, in which my organization filed an amicus brief along with the International Center for Law and Economics. But for now, I’d like to praise the Court for reaching the right result in a duo of cases involving police warrantlessly searching cell phones incident to lawful arrests.


Back in 2011, when I wrote a feature story in Ars Technica—which I discussed on these pages—police in many jurisdictions were free to search the cell phones of individuals incident to their arrest. If you were arrested for a minor traffic violation, for instance, the unencrypted contents of your cell phone were often fair game for searches by police officers.


Now, however, thanks to the Supreme Court, police may not search an arrestee’s cell phone incident to his or her arrest absent specific evidence giving rise to an exigency that justifies such a search. Given the broad scope of offenses for which police may arrest someone, this holding has important implications for individual liberty, especially in jurisdictions where police have exercised their search powers broadly.

June 17, 2014

Muddling Through: How We Learn to Cope with Technological Change

How is it that we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” so many well-established personal, social, cultural, and legal norms?


In recent years, I’ve spent a fair amount of time thinking through that question in a variety of blog posts (“Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society”), law review articles (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”), opeds (“Why Do We Always Sell the Next Generation Short?”), and books (See chapter 4 of my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom”).


It’s fair to say that this issue — how individuals, institutions, and cultures adjust to technological change — has become a personal obsession of mine and it is increasingly the unifying theme of much of my ongoing research agenda. The economic ramifications of technological change are part of this inquiry, of course, but those economic concerns have already been the subject of countless books and essays both today and throughout history. I find that the social issues associated with technological change — including safety, security, and privacy considerations — typically get somewhat less attention, but are equally interesting. That’s why my recent work and my new book narrow the focus to those issues.


Optimistic (“Heaven”) vs. Pessimistic (“Hell”) Scenarios


Modern thinking and scholarship on the impact of technological change on societies have been largely dominated by skeptics and critics.


In the past century, for example, French philosopher Jacques Ellul (The Technological Society), German historian Oswald Spengler (Man and Technics), and American historian Lewis Mumford (Technics and Civilization) penned critiques of modern technological processes that took a dour view of technological innovation and our collective ability to adapt positively to it. (Concise summaries of their thinking can be found in Christopher May’s edited collection of essays, Key Thinkers for the Information Society.)


These critics worried about the subjugation of humans to “technique” or “technics” and feared that technology and technological processes would come to control us before we learned how to control them. Media theorist Neil Postman was the most notable of the modern information technology critics and served as the bridge between the industrial era critics (like Ellul, Spengler, and Mumford) and some of today’s digital age skeptics (like Evgeny Morozov and Nick Carr). Postman decried the rise of a “technopoly” — “the submission of all forms of cultural life to the sovereignty of technique and technology” — that would destroy “the vital sources of our humanity” and lead to “a culture without a moral foundation” by undermining “certain mental processes and social relations that make human life worth living.” We see that attitude on display in countless works of technological criticism since then.


Of course, there’s been some pushback from some futurists and technological enthusiasts. But there’s often a fair amount of irrational exuberance at work in their tracts and punditry. Many self-proclaimed “futurists” have predicted that various new technologies would produce a nirvana that would overcome human want, suffering, ignorance, and more.


In a 2010 essay, I labeled these two camps technological “pessimists” and “optimists.” It was a crude and overly simplistic dichotomy, but it was an attempt to begin sketching out a rough taxonomy of the personalities and perspectives that we often see pitted against each other in debates about the impact of technology on culture and humanity.


Sadly, when I wrote that earlier piece, I was not aware of a similar (and much better) framing of this divide developed by science writer Joel Garreau in his terrific 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human. In that book, Garreau thinks in much grander terms about technology and the future than I did in my earlier essay. He focuses on how various emerging technologies might be changing our very humanity, and he notes that narratives about these issues are typically framed in “Heaven” versus “Hell” scenarios.


Under the “Heaven” scenario, technology drives history relentlessly, and in almost every way for the better. As Garreau describes the beliefs of the Heaven crowd, they believe that going forward, “almost unimaginably good things are happening, including the conquering of disease and poverty, but also an increase in beauty, wisdom, love, truth, and peace.” (p. 130) By contrast, under the “Hell” scenario, “technology is used for extreme evil, threatening humanity with extinction.” (p. 95) Garreau notes that what unifies the Hell scenario theorists is the sense that in “wresting power from the gods and seeking to transcend the human condition,” we end up instead creating a monster — or maybe many different monsters — that threatens our very existence. Garreau says this “Frankenstein Principle” can be seen in countless works of literature and technological criticism throughout history, and it is still very much with us today. (p. 108)


Theories of Collapse: Why Does Doomsaying Dominate Discussions about New Technologies?

Indeed, in examining the way new technologies and inventions have long divided philosophers, scientists, pundits, and the general public, one can find countless examples of that sort of fear and loathing at work. “Armageddon has a long and distinguished history,” Garreau notes. “Theories of progress are mirrored by theories of collapse.” (p. 149)


In that regard, Garreau rightly cites Arthur Herman’s magisterial history of apocalyptic theories, The Idea of Decline in Western History, which documents “declinism” over time. The irony of much of this pessimistic declinist thinking, Herman notes, is that:


In effect, the very things modern society does best — providing increasing economic affluence, equality of opportunity, and social and geographic mobility — are systematically deprecated and vilified by its direct beneficiaries. None of this is new or even remarkable. (p. 442)


Why is that? Why has the “Hell” scenario been such a dominant recurring theme in writing and commentary throughout history, even though the general trend has been steady improvement in human health, welfare, and convenience?


There must be something deeply rooted in the human psyche that accounts for this tendency. As I discuss in my new book as well as my big “Technopanics” law review article, our innate pessimism, combined with our desire for certainty about the future, means that “the gloom-mongers have it easy,” as author Dan Gardner argues in his book, Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better. He goes on to note of the techno-doomsday pundits:


Their predictions are supported by our intuitive pessimism, so they feel right to us. And that conclusion is bolstered by our attraction to certainty. As strange as it sounds, we want to believe the expert predicting a dark future is exactly right, because knowing that the future will be dark is less tormenting than suspecting it. Certainty is always preferable to uncertainty, even when what’s certain is disaster. (p. 140-1)


Similarly, in his new book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson notes that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.” (p. 283)


Another explanation is that humans are sometimes very poor judges of the relative risks to themselves or those close to them. Harvard University psychology professor Steven Pinker, author of The Blank Slate: The Modern Denial of Human Nature, notes:


The mind is more comfortable in reckoning probabilities in terms of the relative frequency of remembered or imagined events. That can make recent and memorable events—a plane crash, a shark attack, an anthrax infection—loom larger in one’s worry list than more frequent and boring events, such as the car crashes and ladder falls that get printed beneath the fold on page B14. And it can lead risk experts to speak one language and ordinary people to hear another. (p. 232)


Put simply, there exists a wide variety of explanations for why our collective first reaction to new technologies often is one of dystopian dread. In my work, I have identified several other factors, including: generational differences; hyper-nostalgia; media sensationalism; special interest pandering to stoke fears and sell products or services; elitist attitudes among intellectuals; and the so-called “third-person effect hypothesis,” which posits that when some people encounter perspectives or preferences at odds with their own, they are more likely to be concerned about the impact of those things on others throughout society and to call on government to “do something” to correct or counter those perspectives or preferences.


Some combination of these factors drives the initial resistance we have seen to new technologies that disrupt long-standing social norms, traditions, and institutions. In the extreme, it results in that gloom-and-doom, sky-is-falling disposition in which we are repeatedly told that humanity is about to be steamrolled by some new invention or technological development.


The “Prevail” (or “Muddling Through”) Scenario

“The good news is that end-of-the-world predictions have been around for a very long time, and none of them has yet borne fruit,” Garreau reminds us. (p. 148) Why not? Let’s get back to his framework for the answer. After discussing the “Heaven” (optimistic) and “Hell” (skeptical or pessimistic) scenarios cast about by countless tech writers throughout history, Garreau outlines a third, and more pragmatic “Prevail” option, which views history “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.”


That pretty much sums up my own perspective on things, and in the remainder of this essay I want to sketch out the reasons why I think the “prevail” or “muddling through” scenario offers the best explanation for how we learn to cope with technological disruption and prosper in the process.


As Garreau explains it, under the “Prevail” scenario, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he rightly notes. (p. 154) As John Seely Brown and Paul Duguid noted in their excellent 2001 essay, “Response to Bill Joy and the Doom-and-Gloom Technofuturists”:


technological and social systems shape each other. The same is true on a larger scale. […] Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge for futurology (and for all of us) is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.  Social and technological systems do not develop independently; the two evolve together in complex feedback loops, wherein each drives, restrains and accelerates change in the other.


It is this process of “constantly forming and reforming new dynamic equilibriums” that interests me most. In a recent exchange with Michael Sacasas — one of the most thoughtful modern technology critics I’ve come across — I noted that the nature of individual and societal acclimation to technological change is worthy of serious investigation, if for no other reason than that it has continuously happened! What I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies disrupted our personal, social, economic, cultural, and legal norms.


In a response to me, Sacasas put forth the following admonition: “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.” This is undoubtedly true, but it does not undermine the reality of societal adaptation. What can we learn from this? What were the mechanics of that adaptive process? As social norms, personal habits, and human relationships were disrupted, what helped us muddle through and find a way of coping with new technologies? Likewise, as existing markets and business models were disrupted, how were new ones formulated in response to the given technological disruption? Finally, how did legal norms and institutions adjust to those same changes?


Of course, this raises an entirely different issue: What metrics are we using to judge whether “the changes were inconsequential or benign”? As I noted in my exchange with Sacasas, at the end of the day, it may be that we won’t be able to even agree on a standard by which to make that judgment and will instead have to settle for a rough truce about what history has to teach us that might be summed up by the phrase: “something gained, something lost.”


Resiliency: Why Do the Skeptics Never Address It (and Its Benefits)?

Nonetheless, I believe that while technological change often brings sweeping and quite consequential disruption, there is great value in the very act of living through it.


In my work, including my latest little book, I argue that humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.


What we’re talking about here is resiliency. Andrew Zolli and Ann Marie Healy, authors of Resilience: Why Things Bounce Back, define resilience as “the capacity of a system, enterprise, or a person to maintain its core purpose and integrity in the face of dramatically changed circumstances.” (p. 7) “To improve your resilience,” they note, “is to enhance your ability to resist being pushed from your preferred valley, while expanding the range of alternatives that you can embrace if you need to. This is what researchers call preserving adaptive capacity—the ability to adapt to changed circumstances while fulfilling one’s core purpose—and it’s an essential skill in an age of unforeseeable disruption and volatility.” (p. 7-8, emphasis in original) Moreover, they note, “by encouraging adaptation, agility, cooperation, connectivity, and diversity, resilience-thinking can bring us to a different way of being in the world, and to a deeper engagement with it.” (p. 16)


Even if one doesn’t agree with all of that, again, I would think one would find great value in studying the process by which such adaptation happens precisely because it does happen so regularly. And then we could argue about whether it was all really worth it! Specifically, was it worth whatever we lost in the process (i.e., a change in our old moral norms, our old privacy norms, our old institutions, our old business models, our old laws, or whatever else)?


As Sacasas correctly argues, “That people before us experienced similar problems does not mean that they magically cease being problems today.” Again, quite right. On the other hand, the fact that people and institutions learned to cope with those concerns and become more resilient over time is worthy of serious investigation because somehow we “muddled through” before and we’ll have to muddle through again. And, again, what we learned from living through that process may be extremely valuable in its own right.


Of Course, Muddling Through Isn’t Always Easy

Now, let’s be honest about this process of “muddling through”: it isn’t always neat or pretty. To put it crudely, sometimes muddling through really sucks! Think about the modern technologies that violate our visceral sense of privacy and personal space today. I am an intensely private person and if I had a life motto it would probably be: “Leave Me Alone!” Yet, sometimes there’s just no escaping the pervasive reach of modern technologies and processes. On the other hand, I know that, like so many others, I derive amazing benefits from all these new technologies, too. So, like most everyone else I put up with the downsides because, on net, there are generally more upsides.


Almost every digital service that we use today presents us with these trade-offs. For example, email has allowed us to connect with a constantly growing universe of our fellow humans and organizations. Yet spam clutters our mailboxes, and the sheer volume of email we get sometimes overwhelms us. Likewise, in just the past five years, smartphones have transformed our lives in so many ways for the better, in terms of not just personal convenience but also personal safety. On the other hand, smartphones have become more than a bit of a nuisance in certain environments (theaters, restaurants, and other closed spaces). And they also put our safety at risk when we use them while driving automobiles.


But, again, we adjust to most of these new realities and then we find constructive solutions to the really hard problems – yes, and that sometimes includes legal remedies to rectify serious harms. But a certain amount of social adaptation will, nonetheless, be required. Law can only slightly slow that inevitability; it can’t stop it entirely. And as messy and uncomfortable as muddling through can be, we have to (a) be aware of what we gain in the process and (b) ask ourselves what the cost of taking the alternative path would be. Attempts to throw a wrench in the works and derail new innovations or delay various types of technological change will always be tempting, but such interventions come at a very steep cost: less entrepreneurialism, diminished competition, stagnant markets, higher prices, and fewer choices for citizens. As I note in my new book, if we spend all our time living in constant fear of worst-case scenarios — and premising public policy upon such fears — it means that many best-case scenarios will never come about.


Social Resistance / Pressure Dynamics

There’s another part to this story that often gets overlooked. “Muddling through” isn’t just some sort of passive process where individuals and institutions have to figure out how to cope with technological change. Rather, there is an active dynamic at work, too. Individuals and institutions push back and actively shape their tools and systems.


In a recent Wired essay on public attitudes about emerging technologies such as the controversial Google Glass, Issie Lapowsky noted that:


If the stigma surrounding Google Glass (or, perhaps more specifically, “Glassholes”) has taught us anything, it’s that no matter how revolutionary technology may be, ultimately its success or failure ride on public perception. Many promising technological developments have died because they were ahead of their times. During a cultural moment when the alleged arrogance of some tech companies is creating a serious image problem, the risk of pushing new tech on a public that isn’t ready could have real bottom-line consequences.


In my new book, I spend some time thinking about this process of “norm-shaping” through social pressure, activist efforts, educational steps, and even public shaming. A recent Ars Technica essay by Joe Silver offered some powerful examples of how, when “shamed on Twitter, corporations do an about-face.” Silver notes that “a few recent case-study examples of individuals who felt they were wronged by corporations and then took to the Twitterverse to air their grievances show how a properly placed tweet can be a powerful weapon for consumers to combat corporate malfeasance.” In my book and in recent law review articles, I have provided other examples of how this works at both the corporate and individual level to constrain improper behavior and protect various social norms.


Edmund Burke once noted that “Manners are of more importance than laws. Manners are what vex or soothe, corrupt or purify, exalt or debase, barbarize or refine us, by a constant, steady, uniform, insensible operation, like that of the air we breathe in.” In other words, more than just laws regulate behavior — whether organizational or individual. It’s another way we learn to cope and “muddle through.” Again, check out my book for several other examples.


A Case Study: The Long-Standing “Problem” of Photography

Let’s bring all this together and make it more concrete with a case study: photography. For all the talk of how unsettling various modern technological developments are, they pale in comparison to just how jarring the advent of widespread public photography must have been in the late 1800s and beyond.


Indeed, the camera was viewed as a highly disruptive force when photography became more widespread. In fact, the most important essay ever written on privacy law, Samuel D. Warren and Louis D. Brandeis’s famous 1890 Harvard Law Review essay on “The Right to Privacy,” decried the spread of public photography. The authors lamented that “instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life” and claimed that “numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’”


Warren and Brandeis weren’t alone. Plenty of other critics existed and many average citizens were probably outraged by the rise of cameras and public photography. Yet, personal norms and cultural attitudes toward cameras and public photography evolved quite rapidly and they became ingrained in human experience. At the same time, social norms and etiquette evolved to address those who would use cameras in inappropriate, privacy-invasive ways.


Again, we muddled through. And we’ve had to continuously muddle through in this regard because photography presents us with a seemingly endless set of new challenges. As cameras grow still smaller and get integrated into other technologies (most recently smartphones, wearable technologies, and private drones), we’ve had to learn to adjust and accommodate. With wearable technologies (check out Narrative, Butterflye, and Autographer, for example), personal drones (see “Drones are the future of selfies”), and other forms of microphotography all coming online now, we’ll have to adjust still more and develop new norms and coping mechanisms. There’s never going to be an end to this adjustment process.


Toward Pragmatic Optimism

Should we really remain bullish about humanity’s prospects in the midst of all this turbulent change? I think so.


Again, long before the information revolution took hold, the industrial revolution produced its share of cultural and economic backlashes, and it is still doing so today. Most notably, many Malthusian skeptics and environmental critics lamented the supposed strain of population growth and industrialization on social and economic life. Catastrophic predictions followed.


In his 2007 book, Prophecies of Doom and Scenarios of Progress, Paul Dragos Aligica, a colleague of mine at the Mercatus Center, documented many of these industrial-era “prophecies of doom” and described how this “doomsday ideology” was powerfully critiqued by a handful of scholars — most notably Herman Kahn and Julian Simon. Aligica explains that Kahn and Simon argued for “the alternative paradigm, the pro-growth intellectual tradition that rejected the prophecies of doom and called for realism and pragmatism in dealing with the challenge of the future.”


Kahn and Simon were pragmatic optimists, or what author Matt Ridley calls “rational optimists.” They were bullish about the future and the prospects for humanity, but they were not naive regarding the many economic and social challenges associated with technological change. Like Kahn and Simon, we should embrace the amazing technological changes at work in today’s information age, but with a healthy dose of humility and appreciation for the disruptive impact and pace of that change.


But the rational optimists never get as much attention as the critics and catastrophists. “For 200 years pessimists have had all the headlines even though optimists have far more often been right,” observes Ridley. “Arch-pessimists are feted, showered with honors and rarely challenged, let alone confronted with their past mistakes.” At least part of the reason for that, as already noted, goes back to the amazing rhetorical power of good intentions. Techno-pessimists often exhibit a deep passion about their particular cause and are typically given more than just the benefit of the doubt in debates about progress and the future; they are treated as superior to opponents who challenge their perspectives or proposals. When a privacy advocate says they are just looking out for consumers, or an online safety advocate claims to have the best interests of children in mind, or a consumer advocate argues that regulation is needed to protect certain people from some amorphous harm, they are assuming the moral high ground through the assertion of noble-minded intentions. Even if their proposals fail to bring about the better state of affairs they promise, or derail life-enriching innovations, they are more easily forgiven for those mistakes precisely because of their fervent claims of noble-minded intent.


If intentions are allowed to trump empiricism and a general openness to change, however, the results for a free society and for human progress will be profoundly deleterious. That is why, when confronted with pessimistic, fear-based arguments, the pragmatic optimist must begin by granting that the critics clearly have the best of intentions, but then point out that intentions can only get us so far in the real world, which is full of complex trade-offs.


The pragmatic optimist must next meticulously and dispassionately outline the many reasons why restricting progress or allowing planning to enter the picture will have many unintended consequences and hidden costs. The trade-offs must be explained in clear terms. Examples of previous interventions that went wrong must be proffered.


The Evidence Speaks for Itself

Luckily, we pragmatic optimists have plenty of evidence working in our favor when making this case. As Pulitzer Prize-winning historian Richard Rhodes noted in his 1999 book, Visions of Technology: A Century of Vital Debate about Machines, Systems, and the Human World:


it’s surprising that [many intellectuals] don’t value technology; by any fair assessment, it has reduced suffering and improved welfare across the past hundred years. Why doesn’t this net balance of benevolence inspire at least grudging enthusiasm for technology among intellectuals? (p. 23)


Great question, and one that we should never stop asking the techno-critics to answer. After all, as Joel Mokyr notes in his wonderful 1990 book, Lever of Riches: Technological Creativity and Economic Progress, “Without [technological creativity], we would all still live nasty and short lives of toil, drudgery, and discomfort.” (p. viii) “Technological progress, in that sense, is worthy of its name,” he says. “It has led to something that we may call an ‘achievement,’ namely the liberation of a substantial portion of humanity from the shackles of subsistence living.” (p. 288) Specifically,


The riches of the post-industrial society have meant longer and healthier lives, liberation from the pains of hunger, from the fears of infant mortality, from the unrelenting deprivation that was the lot of all but a very few in preindustrial society. The luxuries and extravagances of the very rich in medieval society pale compared to the diet, comforts, and entertainment available to the average person in Western economies today. (p. 303)


In his new book, Smaller Faster Lighter Denser Cheaper: How Innovation Keeps Proving the Catastrophists Wrong, Robert Bryce hammers this point home when he observes that:


The pessimistic worldview ignores an undeniable truth: more people are living longer, healthier, freer, more peaceful lives than at any time in human history… the plain reality is that things are getting better, a lot better, for tens of millions of people around the world. Dozens of factors can be cited for the improving conditions of humankind. But the simplest explanation is that innovation is allowing us to do more with less.


This is the framework Herman Kahn, Julian Simon, and the other champions of progress used to deconstruct and refute the pessimists of previous eras. In line with that approach, we modern pragmatic optimists must continuously point to the unappreciated but unambiguous benefits of technological innovation and dynamic change. But we should also continue to remind the skeptics of the amazing adaptability of the human species in the face of adversity. As Kahn taught us long ago, when it comes to technological progress and humanity’s ingenious responses to it, “we should expect to go on being surprised” — and in mostly positive ways. Humans have consistently responded to technological change in creative, and sometimes completely unexpected, ways. There’s no reason to think we can’t get through modern technological disruptions using similar coping and adaptation strategies. As Mokyr noted in his recent City Journal essay, “The Next Age of Invention”:


Much like medication, technological progress almost always has side effects, but bad side effects are rarely a good reason not to take medication and a very good reason to invest in the search for second-generation drugs. To a large extent, technical innovation is a form of adaptation—not only to externally changing circumstances but also to previous adaptations.


In sum, we need to have a little faith in the ability of humanity to adjust to an uncertain future, no matter what it throws at us. We’ll muddle through and come out better because of what we have learned in the process, just as we have so many times before.


I’ll give venture capitalist Marc Andreessen the last word on this since he’s been on an absolute tear on Twitter lately when discussing many of the issues I’ve raised in this essay. While addressing the particular fear that automation is running amuck and that robots will eat all our jobs, Andreessen eloquently noted:


We have no idea what the fields, industries, businesses, and jobs of the future will be. We just know we will create an enormous number of them. Because if robots and AI replace people for many of the things we do today, the new fields we create will be built on the huge number of people those robots and AI systems made available. To argue that huge numbers of people will be available but we will find nothing for them (us) to do is to dramatically short human creativity. And I am way long human creativity.


Me too, buddy. Me too.


______________________________________


Additional Reading:


Journal articles & book chapters:



Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom (2014)
“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology, 14 (2013): 309–86.
“The Case for Internet Optimism, Part 1: Saving the Net from Its Detractors,” in The Next Digital Decade: Essays on the Future of the Internet, ed. Berin Szoka and Adam Marcus (Washington, DC: TechFreedom, 2010), 57–87.
“The Pursuit of Privacy in a World Where Information Control Is Failing,” Harvard Journal of Law & Public Policy, 36 (2013): 409–55.
“Privacy Law’s Precautionary Principle Problem,” Maine Law Review, 66, no. 2 (2014): 467–86.

Blog posts:



“Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society,” Technology Liberation Front, January 31, 2010.
“Who Really Believes in ‘Permissionless Innovation’?” Technology Liberation Front, March 4, 2013.
“What Does It Mean to ‘Have a Conversation’ about a New Technology?” Technology Liberation Front, May 23, 2013.
“Planning for Hypothetical Horribles in Tech Policy Debates,” Technology Liberation Front, August 6, 2013.
“On the Line between Technology Ethics vs. Technology Policy,” Technology Liberation Front, August 1, 2013.
“Can We Adapt to the Internet of Things?” IAPP Privacy Perspectives, June 19, 2013.
“Why Do We Always Sell the Next Generation Short?” Forbes, January 8, 2012.
“The Six Things That Drive ‘Technopanics,’” Forbes, March 4, 2012.
“10 Things Our Kids Will Never Worry about Thanks to the Information Revolution,” Forbes, December 18, 2011.


June 16, 2014

New Law Review Article: “Privacy Law’s Precautionary Principle Problem”

My latest law review article is entitled “Privacy Law’s Precautionary Principle Problem,” and it appears in Vol. 66, No. 2 of the Maine Law Review. You can download the article on my Mercatus Center page, on the Maine Law Review website, or via SSRN. Here’s the abstract:


Privacy law today faces two interrelated problems. The first is an information control problem. Like so many other fields of modern cyberlaw—intellectual property, online safety, cybersecurity, etc.—privacy law is being challenged by intractable Information Age realities. Specifically, it is easier than ever before for information to circulate freely and harder than ever to bottle it up once it is released.


This has not slowed efforts to fashion new rules aimed at bottling up those information flows. If anything, the pace of privacy-related regulatory proposals has been steadily increasing in recent years even as these information control challenges multiply.


This has led to privacy law’s second major problem: the precautionary principle problem. The precautionary principle generally holds that new innovations should be curbed or even forbidden until they are proven safe. Fashioning privacy rules based on precautionary principle reasoning necessitates prophylactic regulation that makes new forms of digital innovation guilty until proven innocent.


This puts privacy law on a collision course with the general freedom to innovate that has thus far powered the Internet revolution, and privacy law threatens to limit innovations consumers have come to expect or even raise prices for services consumers currently receive free of charge. As a result, even if new regulations are pursued or imposed, there will likely be formidable push-back not just from affected industries but also from their consumers.


In light of both these information control and precautionary principle problems, new approaches to privacy protection are necessary. We need to invert the process of how we go about protecting privacy by focusing more on practical “bottom-up” solutions—education, empowerment, public and media pressure, social norms and etiquette, industry self-regulation and best practices, and an enhanced role for privacy professionals within organizations—instead of “top-down” legalistic solutions and regulatory techno-fixes. Resources expended on top-down regulatory pursuits should instead be put into bottom-up efforts to help citizens better prepare for an uncertain future.


In this regard, policymakers can draw important lessons from the debate over how best to protect children from objectionable online content. In a sense, there is nothing new under the sun; the current debate over privacy protection has many parallels with earlier debates about how best to protect online child safety. Most notably, just as top-down regulatory constraints came to be viewed as constitutionally suspect, economically inefficient, and unlikely to be workable in the long run for protecting online child safety, the same will likely be true for most privacy-related regulatory enactments.


This article sketches out some general lessons from those online safety debates and discusses their implications for privacy policy going forward.


Read the full article here [PDF].


Related Material:



Book: Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom
Law review article: “The Pursuit of Privacy in a World Where Information Control Is Failing,” Harvard Journal of Law & Public Policy, 36 (2013): 409–55.
Law review article: “A Framework for Benefit-Cost Analysis in Digital Privacy Debates,” George Mason University Law Review, 20, no. 4 (Summer 2013): 1055–105.
Law review article: “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology, 14 (2013): 309–86.

 



June 12, 2014

video: Cap Hill Briefing on Emerging Tech Policy Issues

I recently did a presentation for Capitol Hill staffers about emerging technology policy issues (driverless cars, the “Internet of Things,” wearable tech, private drones, “biohacking,” etc.) and the various policy issues they would give rise to (privacy, safety, security, economic disruptions, etc.). The talk is derived from my new little book on “Permissionless Innovation,” but in coming months I will be releasing big papers on each of the topics discussed here.



Additional Reading:




Technology Policy: A Look Ahead by Jerry Brito, Eli Dourado, and Adam Thierer
Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom by Adam Thierer
Why Permissionless Innovation Matters: Why does economic growth occur in some societies & not in others? by Adam Thierer
Where is the innovation in health care? by Veronique De Rugy



June 9, 2014

Has Copyright Gone Too Far? Watch This “Hangout” to Find Out

Last week, the Mercatus Center and the R Street Institute co-hosted a video discussion about copyright law. I participated in the Google Hangout, along with co-liberator Tom Bell of Chapman Law School (and author of the new book Intellectual Privilege), Mitch Stoltz of the Electronic Frontier Foundation, Derek Khanna, and Zach Graves of the R Street Institute. We discussed the Aereo litigation, compulsory licensing, statutory damages, the constitutional origins of copyright, and many more hot copyright topics.


You can watch the discussion here:

Outdated Policy Decisions Don’t Dictate Future Rights in Perpetuity

Congressional debates about STELA reauthorization have resurrected the notion that TV stations “must provide a free service” because they “are using public spectrum.” This notion, which is rooted in 1930s government policy, has long been used to justify the imposition of unique “public interest” regulations on TV stations. But outdated policy decisions don’t dictate future rights in perpetuity, and policymakers abandoned the “public spectrum” rationale long ago.


All wireless services use the public spectrum, yet none of them are required to provide a free commercial service except broadcasters. Satellite television operators, mobile service providers, wireless Internet service providers, and countless other commercial spectrum users are free to charge subscription fees for their services.


There is nothing intrinsic in the particular frequencies used by broadcasters that justifies their discriminatory treatment. Mobile services use spectrum once allocated to broadcast television, but aren’t treated like broadcasters.


The fact that broadcast licenses were once issued without holding an auction is similarly irrelevant. All spectrum licenses were granted for free before the mid-1990s. For example, cable and satellite television operators received spectrum licenses for free, but are not required to offer their video services for free.


If the idea is to prevent companies that were granted free licenses from receiving a “windfall,” it’s too late. As Jeffrey A. Eisenach has demonstrated, “the vast majority of current television broadcast licensees [92%] have paid for their licenses through station transactions.”


The irrelevance of the free spectrum argument is particularly obvious when considering the differential treatment of broadcast and satellite spectrum. Spectrum licenses for broadcast TV stations are now subject to competitive bidding at auction while satellite television licenses are not. If either service should be required to provide a free service on the basis of spectrum policy, it should be satellite television.


Although TV stations were loaned an extra channel during the DTV transition, that transition is over. Those channels have been returned and were auctioned for approximately $19 billion in 2008. There is no reason to hold TV stations accountable in perpetuity for a temporary loan.


Even if there were, the loan was not free. Though TV stations did not pay lease fees for the use of those channels, they nevertheless paid a heavy price. TV stations were required to invest substantial sums in HDTV technology and to broadcast signals in that format long before it was profitable. The FCC required “rapid construction of digital facilities by network-affiliated stations in the top markets, in order to expose a significant number of households, as early as possible, to the benefits of DTV.” TV stations were thus forced to “bear the risks of introducing digital television” for the benefit of consumers, television manufacturers, multichannel video programming distributors (MVPDs), and other digital media.


The FCC did not impose comparable “loss leader” requirements on MVPDs. They are free to wait until consumer demand for digital and HDTV content justifies upgrading their systems — and they are still lagging TV stations by a significant margin. According to the FCC, only about half of the collective footprints of the top eight cable MVPDs had been transitioned to all-digital channels at the end of 2012. By comparison, the DTV transition was completed in 2009.


There simply is no satisfactory rationale for requiring broadcasters to provide a free service based on their use of spectrum or the details of past spectrum licensing decisions. If the applicability of a free service requirement turned on such issues, cable and satellite television subscribers wouldn’t be paying subscription fees.

