Adam Thierer's Blog, page 46

April 7, 2014

New Books in Technology podcast about my new book

It was my great pleasure to join Jasmine McNealy last week on the “New Books in Technology” podcast to discuss my new book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. (A description of my book can be found here.)


My conversation with Jasmine was wide-ranging and lasted 47 minutes. The entire show can be heard here if you’re interested.


By the way, if you don’t follow Jasmine, you should begin doing so immediately. She’s on Twitter and here’s her page at the University of Kentucky School of Library and Information Science.  She’s doing some terrifically interesting work. For example, check out her excellent essay on “Online Privacy & The Right To Be Forgotten,” which I commented on here.

Published on April 07, 2014 07:33

April 6, 2014

Can NSA Force Telecom Companies To Collect More Data?

Recent reports highlight that the telephone meta-data collection efforts of the National Security Agency are being undermined by the proliferation of flat-rate, unlimited voice calling plans.  The agency is collecting data for less than a third of domestic voice traffic, according to one estimate.


It’s been clear for the past couple of months that officials want to fix this, and President Obama’s plan for leaving meta-data in the hands of telecom companies—for NSA to access with a court order—might provide a back door opportunity to expand collection to include all calling data.  There was a potential new twist last week, when Reuters seemed to imply that carriers could be forced to collect data for all voice traffic pursuant to a reinterpretation of the current rule.


While the Federal Communications Commission requires phone companies to retain for 18 months records on “toll” or long-distance calls, the rule’s application is vague (emphasis added) for subscribers of unlimited phone plans because they do not get billed for individual calls.


The current FCC rule (47 C.F.R. § 42.6) requires carriers to retain billing information for “toll telephone service,” but the FCC doesn’t define this familiar term.  There is a statutory definition, but you have to go to the Internal Revenue Code to find it.  According to 26 U.S.C. § 4252(b),


the term “toll telephone service” means—


(1) a telephonic quality communication for which


(A) there is a toll charge which varies in amount with the distance and elapsed transmission time of each individual communication…


This Congressional definition captures the dynamics of long-distance pricing in 1965; it pre-dates the FCC rule (1986), yet it’s still on the books.


By the 1990s, improving technology had made distance virtually irrelevant as a cost factor, and long-distance prices came to be based on minutes of use alone (although clashing federal and state regulatory regimes frequently resulted in higher rates for many short-haul intrastate calls than for long-haul interstate calls).  Incidentally, it was estimated at the time that telephone companies spent between 30 and 40 percent of their revenues on their billing systems.


In any event, with the elimination of distance-sensitive pricing, the Internal Revenue Service’s efforts to collect the Telephone Excise Tax—first enacted during the Spanish American War—were stymied.  In 2006, the IRS announced it would no longer litigate whether a toll charge that varies with elapsed transmission time but not distance (time-only service) is taxable “toll telephone service.”


I don’t see why telecom companies should be required to collect and store any telephone data for 18 months, since it’s hard to imagine they are providing any services these days that actually qualify as “toll telephone service,” as that term is currently defined in the United States Code.

Published on April 06, 2014 18:44

April 3, 2014

A Short Response to Michael Sacasas on Advice for Tech Writers

What follows is a response to Michael Sacasas, who recently posted an interesting short essay on his blog The Frailest Thing, entitled, “10 Points of Unsolicited Advice for Tech Writers.” As with everything Michael writes, it is very much worth reading and offers a great deal of useful advice about how to be a more thoughtful tech writer. Even though I occasionally find myself disagreeing with Michael’s perspectives, I always learn a great deal from his writing and appreciate the tone and approach he uses in all his work. Anyway, you’ll need to bounce over to his site and read his essay first before my response will make sense.


______________________________


Michael:


Lots of good advice here. I think tech scholars and pundits of all dispositions would be wise to follow your recommendations. But let me offer some friendly pushback on points #2 & #10, because I spend much of my time thinking and writing about those very things.


In those two recommendations you say that those who write about technology “[should] not cite apparent historical parallels to contemporary concerns about technology as if they invalidated those concerns. That people before us experienced similar problems does not mean that they magically cease being problems today.” And you also warn “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.”


I think these two recommendations are born of a certain frustration with the tenor of much modern technology writing: the sort of Pollyanna-ish writing that too casually dismisses legitimate concerns about technological disruption and usually ends with the insulting phrase, “just get over it.” Such writing and punditry is rarely helpful, and you and others have rightly pointed out the deficiencies in that approach.


That being said, I believe it would be highly unfortunate to dismiss any inquiry into the nature of individual and societal acclimation to technological change. Because adaptation obviously does happen! Certainly there must be much we can learn from it. In particular, what I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” well-established personal, social, cultural, and legal norms.


To be clear, I entirely agree with your admonition: “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.” But, again, we can at least agree that such acclimation has happened regularly throughout human history, right?  What were the mechanics of that process? As social norms, personal habits, and human relationships were disrupted, what helped us muddle through and find a way of coping with new technologies? Likewise, as existing markets and business models were disrupted, how were new ones formulated in response to the given technological disruption? Finally, how did legal norms and institutions adjust to those same changes?


I know you agree that these questions are worthy of exploration, but I suppose where we might part ways is over the question of the metrics by which we judge whether “the changes were inconsequential or benign.” Because I believe that while technological change often brings sweeping and quite consequential change, there is value in the very act of living through it.


In my work, including my latest little book, I argue that humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, however, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.


Even if you don’t agree with all of that, again, I would think you would find great value in studying the process by which such adaptation happens. And then we could argue about whether it was all really worth it! Alas, at the end of the day, it may be that we won’t even be able to agree on a standard by which to make that judgment and will instead have to settle for a rough truce about what history has to teach us, one that might be summed up by the phrase: “something gained, something lost.”


With all this in mind, let me suggest this friendly reformulation of your second recommendation: Tech writers should not cite apparent historical parallels to contemporary concerns about technology as if they invalidated those concerns. That people before us experienced similar problems does not mean that they magically cease being problems today. But how people and institutions learned to cope with those concerns is worthy of serious investigation. And what we learned from living through that process may be valuable in its own right.


I have been trying to sketch out an essay on all this entitled, “Muddling Through: Toward a Theory of Societal Adaptation to Disruptive Technologies.” I am borrowing that phrase (“muddling through”) from Joel Garreau, who used it in his book “Radical Evolution” when describing a third way of viewing humanity’s response to technological change. After discussing the “Heaven” (optimistic) and “Hell” (skeptical or pessimistic) scenarios cast about by countless tech writers throughout history, Garreau outlines a third, more pragmatic “Prevail” option, which views history “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.” That pretty much sums up my own perspective on things, but much study remains to be done on how that very messy process of “muddling through” works and whether we are left better off as a result. I remain optimistic that we are!


As always, I look forward to our continuing dialog over these interesting issues and I wish you all the best.


Cheers,


Adam Thierer

Published on April 03, 2014 07:41

April 2, 2014

“Big Data” Inquiry Should Study Economics & Free Speech: TechFreedom urges reform of blanket surveillance and FTC processes

On Monday, TechFreedom submitted comments urging the White House to apply economic thinking to its inquiry into “Big Data,” also pointing out that the worst abuses of data come not from the private sector but from government. The comments were in response to a request by the Office of Science and Technology Policy.


“On the benefits of Big Data, we urge OSTP to keep in mind two cautions. First, Big Data is merely another trend in an ongoing process of disruptive innovation that has characterized the Digital Revolution. Second, cost-benefit analyses generally, and especially in advance of evolving technologies, tend to operate in aggregates which can be useful for providing directional indications of future trade-offs, but should not be mistaken for anything more than that,” writes TF President Berin Szoka.


The comments also highlight the often-overlooked reality that data, big or small, is speech. Therefore, OSTP’s inquiry must include a First Amendment analysis. Historically, policymakers have ignored the First Amendment in regulating new technologies, from film to blogs to video games, but in 2011 the Supreme Court made clear in Sorrell v. IMS Health that data is a form of speech. Any regulation of Big Data should carefully define the government’s interest, narrowly tailor regulations to real problems, and look for less restrictive alternatives to regulation, such as user empowerment, transparency, and education. Ultimately, academic debates over how to regulate Big Data are less important than how the Federal Trade Commission currently enforces existing consumer protection laws, a subject that is the focus of the ongoing FTC: Technology & Reform Project led by TechFreedom and the International Center for Law & Economics.


More important than the private sector’s use of Big Data is the government’s abuse of it, the group says, referring to the NSA’s mass surveillance programs and the Administration’s opposition to requiring warrants for searches of Americans’ emails and cloud data. Last December, TechFreedom and its allies garnered over 100,000 signatures on a WhiteHouse.gov petition for ECPA reform. While the Administration has found time to reply to frivolous petitions, such as asking for the construction of a Death Star, it has ignored this serious issue for over three months. Worse, the administration has done nothing to help promote ECPA reform and, instead, appears to be actively orchestrating opposition to it from theoretically independent regulatory agencies, which has stalled reform in the Senate.


“This stubborn opposition to sensible, bi-partisan privacy reform is outrageous and shameful, a hypocrisy outweighed only by the Administration’s defense of its blanket surveillance of ordinary Americans,” said Szoka. “It’s time for the Administration to stop dodging responsibility or trying to divert attention from the government-created problems by pointing its finger at the private sector, by demonizing private companies’ collection and use of data while the government continues to flout the Fourth Amendment.”


Szoka is available for comment at media@techfreedom.org. Read the full comments and see TechFreedom’s other work on ECPA reform.

Published on April 02, 2014 17:12

How to Privatize the Internet

Today on Capitol Hill, the House Energy and Commerce Committee is holding a hearing on the NTIA’s recent announcement that it will relinquish its small but important administrative role in the Internet’s domain name system. The announcement has alarmed some policymakers with a well-placed concern for the future of Internet freedom; hence the hearing. Tomorrow, I will be on a panel at ITIF discussing the IANA oversight transition, which promises to be a great discussion.


My general view is that if well executed, the transition of the DNS from government oversight to purely private control could actually help secure a measure of Internet freedom for another generation—but the transition is not without its potential pitfalls.


The NTIA’s technical administration of the DNS’ “root zone” is an artifact of the Internet’s origins as a U.S. military experiment. In 1989, the government began the process of privatizing the Internet by opening it up to general and commercial use. In 1998, the Commerce Department created ICANN to oversee the DNS on a day-to-day basis. The NTIA’s announcement is arguably the culmination of this single decades-long process of privatization.


The announcement also undercuts the primary justification used by authoritarian regimes to agitate for control of the Internet. Other governments have long cited the United States’ unilateral control of the root zone, arguing that they, too, should have roles in governing the Internet. By relinquishing its oversight of the DNS, the United States significantly undermines that argument and bolsters the case for private administration of the Internet.


The United States’ stewardship of the root zone is largely apolitical. This apolitical approach to DNS administration is precisely what is at stake during the transition, hence the three pitfalls the Obama administration must avoid to preserve it.


The first pitfall is the most serious but also the least likely to materialize. Despite the NTIA’s excellent track record, authoritarian regimes like Russia, China, and Iran have long lobbied for the ITU, a clumsy and heavily politicized U.N. technical agency, to take over the NTIA’s duties. In its announcement, the NTIA said it would not accept a proposal from an intergovernmental organization, a clear rebuke to the ITU.


Nevertheless, liberal governments would be wise to send the organization a clear message in the form of much-needed reform. The ITU should adopt the transparency we expect of communications standards bodies, and it should focus on its core competency—international coordination of radio spectrum—instead of on Internet governance. If the ITU resists these reforms at its Plenipotentiary Conference this fall, the United States and other countries should slash funding or quit the Union.


ICANN’s Governmental Advisory Committee (GAC) presents a second pitfall. Indeed, the GAC is already the source of much mischief. For example, France and Luxembourg objected to the creation of the .vin top-level domain on the grounds that “vin” (wine) is a regulated term in those countries. Brazil and Peru have held up Amazon.com’s application for .amazon despite the fact that they previously agreed to the list of reserved place names, which did not include rivers and states. Last July, the U.S. government, reeling from the Edward Snowden revelations, threw Amazon and the rule of law under the bus at the GAC as a conciliatory measure.


ICANN created the GAC to appease other governments in light of the United States’ outsized role. Since the United States is giving up its special role, the case for the GAC is much diminished. In practice, the limits on the GAC’s power are gradually eroding. ICANN’s board seems increasingly hesitant to overrule it out of fear that governments will go back to the ITU and complain that the GAC “isn’t working.” As part of the transition of the root zone to ICANN, therefore, new limits need to be placed on the GAC’s power. Ideally, the GAC would be dissolved altogether.


The third pitfall comes from ICANN itself. The organization is awash in cash from domain registration fees and new top-level domain name applications—which cost $185,000 each—and when the root zone transition is completed, it will face no external accountability. Long-time ICANN insiders speak of “mission creep,” noting that the supposedly purely technical organization increasingly deals with trademark policy and has aided police investigations in the past, a dangerous precedent.


How can we prevent an unaccountable, cash-rich technical organization from imposing its own internal politics on what is supposed to be an apolitical administrative role? In the long run, we may never be able to stop ICANN from becoming a government-like entity, which is why it is important to support research and experimentation in peer-to-peer, decentralized domain name systems. This matter is under discussion, among other places, at the Internet Engineering Task Force, which may ultimately serve as something of a counterweight to an independent ICANN.


Despite these potential pitfalls, it is time for an Internet that is fully in private hands. The Obama administration deserves credit for proposing to complete the privatization of the Internet, but we must also carefully monitor the process to intercept any blunders that might result in politicization of the root zone.

Published on April 02, 2014 08:52

America in the golden age of broadband

This post was written in cooperation with Michael James Horney, a George Mason University master’s student, and is based upon our upcoming paper on broadband innovation, investment, and competition.


Ezra Klein’s interview with Susan Crawford paints a glowing picture of publicly provided broadband, particularly fiber to the home (FTTH), but it misses a number of important points.


The international broadband comparisons provided were selective and unstandardized. The US is much bigger and more expensive to cover than many small, highly populated countries. South Korea is the size of Minnesota but has 9 times the population; essentially the same amount of network can be deployed and used by 9 times as many people, which makes fiber deployment far more cost-effective. Yet South Korea has limited economic growth to show for its fiber investment. A recent Korean government report complained of “jobless growth,” and the country still earns the bulk of its revenue from industries that predate the broadband era.


It is more realistic and accurate to compare the US to the European Union, which has a comparable population and geographic area. Data from America’s National Broadband Map and the EU Digital Agenda Scoreboard show that the US exceeds the EU on many important broadband measures, including deployment of fiber to the home (FTTH), which is available at twice the rate of the EU. Even where fiber networks are available in the EU, the overall adoption rate is just 2%. The EU itself, as part of its Digital Single Market initiative, has recognized that its approach to broadband has not worked and is now looking to the American model.


The assertion that Americans are “stuck” with cable as the only provider of broadband is false. It is more accurate to say that Europeans are “stuck” with DSL, as 74% of all EU broadband connections are delivered over copper networks. DSL and cable together account for 70% of America’s broadband connections, with the remaining and growing 30% comprising FTTH, wireless, and other broadband solutions. In fact, the US buys and lays more fiber than all of the EU combined.


The reality is that Europeans are “stuck” with a tortured regulatory approach to broadband, one that disincentivizes investment in next-generation networks. As data from Infonetics show, a decade ago the EU accounted for one-third of the world’s investment in broadband; that share has plummeted to less than one-fifth today. Meanwhile, American broadband providers invest at twice the rate of their European counterparts and account for a quarter of the world’s outlay on communication networks. Americans are just 4% of the world’s population but enjoy a quarter of its broadband investment.


The following chart illustrates the intermodal competition between different types of broadband networks (cable, fiber, DSL, mobile, satellite, wifi) in the US and EU.








Metric: US (%) / EU (%)

Availability of broadband with a download speed of 100 Mbps or higher: 57* / 30
Availability of cable broadband: 88 / 42
Availability of LTE: 94** / 26
Availability of FTTH: 25 / 12
Percent of population that subscribes to broadband by DSL: 34 / 74
Percent of households that subscribe to broadband by cable: 36*** / 17


The interview offered some cherry-picked examples, particularly Stockholm as the FTTH utopia. The story behind this city is more complex and costly than presented. Some $800 million has been invested in FTTH in Stockholm to date, with an additional $38 million each year. Subscribers pay for fiber broadband through a combination of monthly access fees and increases to the municipal fees assessed on homes and apartments. Acreo, a state-owned consulting company charged with assessing Sweden’s fiber project, concludes that the FTTH project shows at best a “weak but statistically significant correlation between fiber and employment” and that “it is difficult to estimate the value of FTTH for end users in dollars and some of the effects may show up later.”


Next door, Denmark took a different approach. In 2005, 14 utility companies in Denmark invested $2 billion in FTTH. With advanced cable and fiber networks, 70% of Denmark’s households and businesses have access to ultra-fast broadband, but less than 1 percent subscribe to the 100 Mbps service. The utility companies have just 250,000 broadband customers combined, and most customers subscribe to tiers below 100 Mbps because they satisfy their needs and budgets. Indeed, 80% of broadband subscriptions in Denmark are below 30 Mbps. About 20 percent of homes and businesses subscribe to 30 Mbps, but more than two-thirds subscribe to 10 Mbps.


Meanwhile, LTE mobile networks have been rolled out, and already 7 percent of Danes (350,000) use 3G/4G as their primary broadband connection, surpassing FTTH customers by 100,000. This is particularly important because in many sectors of the Danish economy, including banking, health, and government, users can access services only digitally. These services are fully functional on mobile devices and their associated speeds. The interview claims that wireless will never be a substitute for fiber, but millions of people around the world are proving that wrong every day.


The price comparisons provided between the US and selected European countries also leave out compulsory media license fees (which fund state broadcasting) and taxes that can add some $80 per month to the cost of every broadband subscription. When these fees are added up, the real price of broadband is not so cheap in Sweden and other European countries. Indeed, the US frequently comes out less expensive.


The US broadband approach has a number of advantages. Private providers bear the risks, not taxpayers. Consumers dictate the broadband they want, not the government. Prices are also scalable and transparent; the price reflects the real cost. Furthermore, as the OECD and the ITU have recognized, entry-level broadband prices in the US are some of the lowest in the world. The ITU recommends that people pay no more than 5% of their income for broadband; in most developed countries, including the US, the highest tier of broadband falls within 2-3% of income. It is only fair to pay more for better quality. If your needs are just email and web browsing, then basic broadband will do. But if you want high-definition Netflix, you should pay more. There is no reason why your neighbor should subsidize your entertainment choices.
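As a rough illustration of how the ITU affordability benchmark works, here is a minimal Python sketch of the calculation. The plan price and household income figures are assumptions chosen for illustration, not numbers from this post or from ITU data.

    # Minimal sketch: does a broadband plan fall within the ITU's
    # 5%-of-income affordability guideline?
    # All figures below are illustrative assumptions, not reported data.
    monthly_plan_price = 70.00        # hypothetical top-tier plan, USD per month
    annual_household_income = 52000   # assumed household income, USD per year

    monthly_income = annual_household_income / 12
    share_of_income = monthly_plan_price / monthly_income

    print(f"Broadband share of income: {share_of_income:.1%}")   # roughly 1.6%
    print("Within the ITU 5% guideline:", share_of_income <= 0.05)

Under these assumed figures, the plan comes in well under the 5% threshold, in the same ballpark as the 2-3% range described above for developed countries.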


The interview asserted that government investment in FTTH is needed to increase competitiveness, but no evidence was given. It’s not just a broadband network that creates economic growth; broadband is just one input in a complex economic equation. To put things into perspective, consider that the US has transformed its economy through broadband over the last two decades. The internet portion of America’s economy alone is larger than the entire GDP of Sweden.


The assertion that the US is #26 in broadband speed is simply wrong. It is an outdated statistic from 2009 used in Crawford’s book. The Akamai report it references is released quarterly, so there was no reason not to include a more recent figure before the book’s December 2012 publication. Today the US ranks #8 in the world on the same measure. Clearly the US is not falling behind if its ranking on average measured speed has steadily climbed from #26 to #8. In any case, according to Akamai, many US cities and states have some of the fastest download speeds anywhere and would rank in the global top ten.


There is no doubt that fiber is an important technology and the foundation of all modern broadband networks, but the economic question is to what extent should fiber be brought to every household, given the cost of deployment (many thousands of dollars per household), the low level of adoption (it is difficult to get a critical mass of a community to subscribe given diverse needs), and that other broadband technologies continue to improve speed and price.


The interview didn’t mention the many failed federal and municipal broadband projects. Chattanooga is just one example of a federally funded fiber project costing hundreds of millions of dollars with too few users. Municipal projects that have failed to meet expectations include Chicago; Burlington, VT; Monticello, MN; Oregon’s MINET; and Utah’s UTOPIA.


Before deploying costly FTTH networks, the feasibility of improving existing DSL and cable networks, as well as of deploying wireless broadband, should be considered. A case in point is Canada. The OECD reports that Canada and South Korea have essentially the same advertised speeds, 68.33 and 66.83 Mbps respectively. Canada’s fixed broadband subscriptions are split almost equally between DSL and cable, with very little FTTH. This shows that fast speeds are possible on different kinds of networks.


The future demands a multitude of broadband technologies. There is no one technology that is right for everyone. Consumers should have the ability to choose based upon their needs and budget, not be saddled with yet more taxes from misguided politicians and policymakers.


Consider that mobile broadband is growing at four times the rate of fixed broadband according to the OECD, and there are some 300 million mobile broadband subscriptions in the US, three times as many as fixed broadband subscriptions. In Africa, mobile broadband is growing at 50 times the rate of fixed broadband. Many Americans have chosen mobile as their only broadband connection and love its speed and flexibility. Vectoring on copper wires enables speeds of 100 Mbps. Cable DOCSIS 3.0 enables speeds of 300 Mbps, and cable companies are deploying neighborhood wifi solutions. With all this innovation and competition, it is mindless to create a new government monopoly. We should let the golden age of broadband flourish.





Source for US and EU Broadband Comparisons: US data from National Broadband Map, “Access to Broadband Technology by Speed,” Broadband Statistics Report, July 2013, http://www.broadbandmap.gov/download/Technology%20by%20Speed.pdf and http://www.broadbandmap.gov/summarize/nationwide . EU data from European Commission, “Chapter 2: Broadband Markets,” Digital Agenda Scoreboard 2013 (working document, December 6, 2013), http://ec.europa.eu/digital-agenda/sites/digital-agenda/files/DAE%20SCOREBOARD%202013%20-%202-BROADBAND%20MARKETS%20_0.pdf .


*The National Cable Telecommunications Association suggests speeds of 100 Mbps are available to 85% of Americans.  See “America’s Internet Leadership,” 2013, www.ncta.com/positions/americas-internet-leadership .




**Verizon’s most recent report notes that it reaches 97 percent of America’s population with 4G/LTE networks. See Verizon, News Center: LTE Information Center, “Overview,” www.verizonwireless.com/news/LTE/Overview.html .


***This figure is based on 49,310,131 cable subscribers at the end of 2013, noted by Leichtman Research http://www.leichtmanresearch.com/press/031714release.html compared to 138,505,691 households noted by the National Broadband Map.

Published on April 02, 2014 08:20

Bitcoin hearing in the House today, fun event tonight

Later today I’ll be testifying at a hearing before the House Small Business Committee titled “Bitcoin: Examining the Benefits and Risks for Small Business.” It will be live streamed starting at 1 p.m. My testimony will be available on the Mercatus website at that time, but below is some of my work on Bitcoin in case you’re new to the issue.


Also, tonight I’ll be speaking at a great event hosted by the DC FinTech meetup on “Bitcoin & the Internet of Money.” I’ll be joined by Bitcoin core developer Jeff Garzik and we’ll be interviewed on stage by Joe Weisenthal of Business Insider. It’s open to the public, but you have to RSVP.


Finally, stay tuned because in the next couple of days my colleagues Houman Shadab, Andrea Castillo, and I will be posting a draft of our new law review article looking at Bitcoin derivatives, prediction markets, and gambling. Bitcoin is the most fascinating issue I’ve ever worked on.


Here’s Some Bitcoin Reading…

Bitcoin: A Primer for Policymakers (with Andrea Castillo)
Regulators Need to Take It Easy on Bitcoin Startups, WIRED, March 6, 2014.
Bitcoin: More than Money, Reason, December 2013.
US regulations are hampering Bitcoin’s growth, The Guardian, November 18, 2013.
Why Regulators Should Embrace Bitcoin, American Banker, August 21, 2013.
And here are all the posts about Bitcoin on Tech Liberation from me, Jim Harper, and Eli Dourado.

And here’s my interview with Reihan Salam discussing Bitcoin…


Published on April 02, 2014 07:15

Video – DisCo Policy Forum Panel on Privacy & Innovation in the 21st Century

Last December, it was my pleasure to take part in a great event, “The Disruptive Competition Policy Forum,” sponsored by Project DisCo (or The Disruptive Competition Project). It featured several excellent panels and keynotes, and they’ve just posted the video of the panel I was on here; I have embedded it below. In my remarks, I discussed:



benefit-cost analysis in digital privacy debates (building on this law review article);
the contrast between Europe and America’s approach to data & privacy issues (referencing this testimony of mine);
the problem of “technopanics” in information policy debates (building on this law review article);
the difficulty of information control efforts in various tech policy debates (which I wrote about in this law review article and these two blog posts: 1, 2);
the possibility of less-restrictive approaches to privacy & security concerns (which I have written about here as well in those other law review articles);
the rise of the Internet of Things and the unique challenges it creates (see this and this as well as my new book); and,
the possibility of a splintering of the Internet or the rise of “federated Internets.”

The panel was expertly moderated by Ross Schulman, Public Policy & Regulatory Counsel for CCIA, and also included remarks from John Boswell, SVP & Chief Legal Officer at SAS, and Josh Galper, Chief Policy Officer and General Counsel of Personal, Inc. (By the way, you should check out some of the cool things Personal is doing in this space to help consumers. Very innovative stuff.) The video lasts one hour. Here it is:


Published on April 02, 2014 06:32

April 1, 2014

Congress Should Lead FCC by Example, Adopt Clean STELA Reauthorization

After yesterday’s FCC meeting, it appears that Chairman Wheeler has a finely tuned microscope trained on broadcasters and a proportionately large blind spot for the cable television industry.


Yesterday’s FCC meeting was unabashedly pro-cable and anti-broadcaster. The agency decided to prohibit television broadcasters from engaging in the same industry behavior as cable, satellite, and telco television distributors and programmers. The resulting disparity in regulatory treatment highlights the inherent dangers in addressing regulatory reform piecemeal rather than comprehensively as contemplated by the #CommActUpdate. Congress should lead the FCC by example and adopt a “clean” approach to STELA reauthorization that avoids the agency’s regulatory mistakes.


The FCC meeting offered a study in the way policymakers pick winners and losers in the marketplace without acknowledging unfair regulatory treatment. It’s a three-step process.



First, the policymaker obfuscates similarities among issues by referring to substantively similar economic activity across multiple industry segments using different terminology.
Second, it artificially narrows the issues by limiting any regulatory inquiry to the disfavored industry segment only.
Third, it adopts disparate regulations applicable to the disfavored industry segment only while claiming the unfair regulatory treatment benefits consumers.

The broadcast items adopted by the FCC yesterday hit all three points.


“Broadcast JSAs”


The FCC adopted an order prohibiting two broadcast television stations from agreeing to jointly sell more than 15% of their advertising time using the three-step process described above.



First, the FCC referred to these agreements as “JSAs” or “joint sales agreements.”
Second, the FCC prohibited these agreements only among broadcast television stations even though the largest cable, satellite, and telco video distributors sell their advertising time through a single entity.
Third, FCC Chairman Tom Wheeler said that all the agency was doing yesterday was “leveling the negotiating table” in negotiations involving the largely unrelated issue of “retransmission consent,” even though the largest cable, satellite, and telco video distributors all sell their advertising through a single entity.

If the FCC had acknowledged that cable, satellite, and telcos jointly sell their advertising, and had the FCC included them in its inquiry as well, Chairman Wheeler could not have kept a straight face while asserting that all the agency was doing was leveling the playing field. Hence the power of obfuscatory terminology and artificially narrowed issues.


“Broadcast Exclusivity Agreements”


The FCC also issued a further notice yesterday seeking comment on broadcast “non-duplication exclusivity agreements” and “syndicated exclusivity agreements.” These agreements, which are collectively referred to as “broadcast exclusivity agreements”, are a form of territorial exclusivity: They provide a local television station with the exclusive right to transmit broadcast network or syndicated programming in the station’s local market only.


Unlike cable, satellite, and telco television distributors, broadcast television stations are prohibited by law from entering into exclusive programming agreements with other television distributors in the same market: The Satellite Television Extension and Localism Act (STELA) prohibits television stations from entering into exclusive retransmission consent agreements — i.e., a television station must make its programming available to all other television distributors in the same market. Cable, satellite, and telco distributors are legally permitted to enter into exclusive programming agreements on a nationwide basis — e.g., DIRECTV’s NFL Sunday Ticket.


If the FCC is concerned by the limited form of territorial exclusivity permitted for broadcasters, it should be even more concerned about the broader exclusivity agreements that have always been permitted for cable, satellite, and telco television distributors. But the FCC nevertheless used the three-step process for picking winners and losers to limit its consideration of exclusive programming agreements to broadcasters only.



First, the FCC uses unique terminology to refer to “broadcast” exclusivity agreements (i.e., “non-duplication” and “syndicated exclusivity”), which obfuscates the fact that these agreements are a limited form of exclusive programming agreements.
Second, the FCC is seeking comment on exclusive programming agreements between broadcast television stations and programmers only even though satellite and other video programming distributors have entered into exclusive programming agreements.
Third, it appears the pretext for limiting the scope of the FCC’s inquiry to broadcasters will again be “leveling the playing field” between broadcasters and other television distributors — to benefit consumers, of course.

“Joint Retransmission Consent Negotiations”


Finally, the FCC prohibited a television broadcast station ranked among the top four stations (as measured by audience share) from negotiating “retransmission consent” jointly with another top four station in the same market if the stations are not commonly owned. The FCC reasoned that “the threat of losing programming of two more top four stations at the same time gives the stations undue bargaining leverage in negotiations with [cable, satellite, and telco television distributors].”


As an economic matter, “retransmission consent” is essentially a substitute for the free market copyright negotiations that could occur absent the “compulsory copyright license” in the 1976 Copyright Act and an earlier Supreme Court decision interpreting the term “public performance”. In the absence of retransmission consent, compensation for the use of programming provided by broadcast television stations and programming networks would be limited to the artificially low amounts provided by the compulsory copyright license.


To the extent retransmission consent is merely another form of program licensing, it is indistinguishable from negotiations between cable, satellite, and telco distributors and cable programming networks — which typically involve the sale of bundled channels. If bundling two television channels together “gives the stations undue bargaining leverage” in retransmission consent negotiations, why doesn’t a cable network’s bundling of multiple channels together for sale to a cable, satellite, or telco provider give the cable network “undue bargaining leverage” in its licensing negotiations? The FCC avoided this difficulty using the old one-two-three approach.



First, the FCC used the unique term “retransmission consent” to refer to the sale of programming rights by broadcasters.
Second, the FCC instituted a proceeding seeking comment only on “retransmission consent” rather than all programming negotiations.
Third, the FCC found that lowering retransmission consent costs could lower the prices consumers pay to cable, satellite, and telco television distributors — to remind us that it’s all about consumers, not competitors.

If it were really about lowering prices for consumers, the FCC would also have considered whether prohibiting channel bundling by cable programming networks would lower consumer prices too. For reasons left unexplained, cable programmers are permitted to bundle as many channels as possible in their licensing negotiations.


“Clean STELA”


After yesterday’s FCC meeting, it appears that Chairman Wheeler has a finely tuned microscope trained on broadcasters and a proportionately large blind spot for the cable television industry. To be sure, the disparate results of yesterday’s FCC meeting could be unintentional. But, even so, they highlight the inherent dangers in any piecemeal approach to industry regulation. That’s why Congress should adopt a “clean” approach to STELA reauthorization and reject the demands of special interests for additional piecemeal legislative changes. Consumers would be better served by a more comprehensive effort to update video regulations.

Published on April 01, 2014 08:31

March 28, 2014

The Beneficial Uses of Private Drones [Video]

Give us our drone-delivered beer!


That’s how the conversation got started between John Stossel and me on his show this week. I appeared on Stossel’s Fox Business TV show to discuss the many beneficial uses of private drones. The problem is that drones — which are more appropriately called unmanned aircraft systems — have an image problem. When we think about drones today, they often conjure up images of nefarious military machines dealing death and destruction from above in a far-off land. And certainly plenty of that happens today (far, far too much in my personal opinion, but that’s a rant best left for another day!).


But any technology can be put to both good and bad uses, and drones are merely the latest in a long list of “dual-use technologies,” which have both military uses and peaceful private uses. Other examples of dual-use technologies include: automobiles, airplanes, ships, rockets and propulsion systems, chemicals, computers and electronic systems, lasers, sensors, and so on. Put simply, almost any technology that can be used to wage war can also be used to wage peace and commerce. And that’s equally true for drones, which come in many sizes and have many peaceful, non-military uses. Thus, it would be wrong to judge them based upon their early military history or how they are currently perceived. (After all, let’s not forget that the Internet’s early origins were militaristic in character, too!)


Some of the other beneficial uses and applications of unmanned aircraft systems include: agricultural (crop inspection & management, surveying); environmental (geological, forest management, tornado & hurricane research); industrial (site & service inspection, surveying); infrastructure management (traffic and accident monitoring); public safety (search & rescue, post-natural disaster services, other law enforcement); and delivery services (goods & parcels, food & beverages, flowers, medicines, etc.), just to name a few.




Watch the latest video at video.foxbusiness.com

This is why it is troubling that the Federal Aviation Administration (FAA) continues to threaten private drone operators with cease-and-desist letters and to discourage the many beneficial uses of these technologies, even as other countries rush ahead and green-light private drone services. As I noted on the Stossel show, while the FAA is well-intentioned in its efforts to keep the nation’s skies safe, the agency is allowing hypothetical worst-case scenarios to get in the way of beneficial innovation. A lot of this fear is driven by privacy concerns, too. But as Brookings Institution senior fellow John Villasenor has explained, we need to be careful about rushing to preemptively control new technologies based on hypothetical privacy fears:


If, in 1995, comprehensive legislation to protect Internet privacy had been enacted, it would have utterly failed to anticipate the complexities that arose after the turn of the century with the growth of social networking and location-based wireless services. The Internet has proven useful and valuable in ways that were difficult to imagine over a decade and a half ago, and it has created privacy challenges that were equally difficult to imagine. Legislative initiatives in the mid-1990s to heavily regulate the Internet in the name of privacy would likely have impeded its growth while also failing to address the more complex privacy issues that arose years later.


This is a key theme discussed throughout my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” The central lesson of the booklet is that living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about. We shouldn’t let our initial (and often irrational) fears of new technologies dictate the future course of innovation. We can and will find constructive solutions to the hard problems posed by new technologies because we are creative and resilient creatures. And, yes, some regulation will be necessary. But how and when we regulate matters profoundly. Preemptive, precautionary-based proposals are almost never the best way to start.


Finally, as I also noted during the interview with Stossel, it’s always important to consider trade-offs and opportunity costs when discussing the disruptive impact of new technologies. For example, while some fear the safety implications of private drones, we should not forget that over 30,000 people die in automobile-related accidents every year in the United States. While the number of vehicle-related deaths has been declining in recent years, that remains an astonishing number of deaths. What if a new technology existed that could help prevent a significant number of these fatalities? Certainly, “smart car” technology and fully autonomous “driverless cars” should help bring down that number significantly. But how might drones help?


Consider some of the mundane tasks that automobiles are used for today. Cars are used to go grab dinner or have someone else deliver it, to pick up medicine at a local pharmacy, to have newspapers or flowers delivered, and so on. Every time a human gets behind the wheel of an automobile to do these things, the chance for injury or even death exists, even close to home. In fact, a large percentage of all accidents happen within just a few miles of the car owner’s home. A significant number of those accidents could be avoided if we were able to rely on drone delivery of the things we use cars and trucks for today.


These are just some of the things to consider as the debate over unmanned aircraft systems continues. Drones have gotten a very bad name thus far, but we should remain open-minded about their many beneficial, peaceful, and pro-consumer uses.


(For more on this issue, read this April 2013 filing to the FAA I wrote along with my Mercatus colleagues Eli Dourado and Jerry Brito.)


Published on March 28, 2014 09:10
