Adam Thierer's Blog, page 53

November 18, 2013

FCC Chairman Wheeler Signals Pro-Investment Approach to Communications Regulation

From the time Tom Wheeler was nominated to become the next FCC Chairman, many have wondered, “What would Wheeler do?” Though it is still early in his chairmanship, the only ruling issued in Chairman Wheeler’s first meeting signals a pro-investment approach to communications regulation.


The declaratory ruling clarified that the FCC would evaluate foreign investment in broadcast licensees that exceeds the 25 percent statutory benchmark using its existing analytical framework. It had previously been unclear whether broadcasters were subject to the same standard as other segments of the communications industry. The ruling recognized that providing broadcasters with regulatory certainty in this respect would promote investment and that greater investment yields greater innovation.


The FCC’s decision to apply the same standards for reviewing foreign ownership of broadcasters as it applies to other segments of the communications industry is very encouraging. It affirms the watershed policy decisions in the USF/ICC Transformation Order, in which the FCC concluded that “leveling the playing field” promotes competition whereas implied subsidies deter investment and are “unfair for consumers.”


Chairman Wheeler’s separate statement is also very encouraging. Its first sentence declares that, “Promoting a regulatory framework that does not inhibit the flow of capital to the US communications sector is an important goal of Commission policy.” This Chairman understands that, in a global economy, U.S. companies must compete with innovators around the world to obtain the necessary investment to develop new information technologies and deploy new communications infrastructure. His separate statement indicates the Chairman’s intent to renew the FCC’s commitment to encouraging private investment.


Regrettably, the Chairman’s separate statement is potentially troubling as well. After noting that the broadcast incentive auction is intended to allow the market to assure that the spectrum is put to its highest and best use, Chairman Wheeler says he will “assess foreign ownership petitions and applications by looking at, among other factors, whether they will help to fulfill these goals, including efficient spectrum usage.”


It is not entirely clear what the Chairman meant by this non sequitur (would the FCC impose channel sharing conditions on stations seeking approval for foreign investment exceeding the benchmark?). But it indicates a willingness to use the FCC’s authority over mergers and acquisitions to promote unrelated policy goals through the imposition of unrelated conditions. As I’ve noted previously, using the FCC’s transaction authority in this way silences public debate over critical policy issues and shields the resulting decision from judicial review – due process protections that are essential to ensure that the FCC acts in the public interest. Ironically, the prospect of unpredictable, case-by-case conditions on foreign investment would appear to be at odds with the Chairman’s goal of promoting a regulatory framework that doesn’t inhibit the flow of private capital to the U.S. communications industry.


It is also possible that the Chairman was merely attempting to deter speculative investments in broadcast spectrum that could sabotage the incentive auction. The success of the incentive auction is critical to the future of our mobile broadband ecosystem, and it is appropriate that the FCC be mindful of sudden, significant foreign investments in broadcast spectrum in these circumstances.


It is still early in Wheeler’s chairmanship, and the future is bright in the spring. If the Chairman maintains his focus on pro-investment policies during his term, the future could be brighter in every season.


Published on November 18, 2013 05:42

H Block Spectrum Highlights Risk of No Shows at FCC Incentive Auction

I recently prepared a paper for the Expanding Opportunities for Broadcasters Coalition and Consumer Electronics Association that provides empirical data regarding the costs of restricting the eligibility of large firms to participate in FCC spectrum auctions (available in PDF here). The paper demonstrates that there is no significant likelihood that an open incentive auction would substantially harm the competitive positions of Sprint and T-Mobile. It also demonstrates that Sprint and T-Mobile have incentives to constrain the ability of Verizon and AT&T to expand their network capacity, and that Sprint and T-Mobile could consider FCC restraints on their primary rivals a “win” even if Sprint and T-Mobile don’t place a single bid in the incentive auction. (Winning regulatory battles is a lot cheaper than winning spectrum in a competitive auction.)


Some might think it is implausible that Sprint or T-Mobile would decide to forgo participation in the incentive auction. However, the recent announcement by Sprint that it won’t compete in the H block auction highlights the difficulty in predicting accurately whether any particular company will participate in a particular auction. Sprint’s announcement stunned market analysts, who had considered Sprint a key contender for the H block spectrum. Until recently, Sprint had given every indication it was keen to acquire this spectrum, which is located directly adjacent to the nationwide G block that Sprint already owns. It participated heavily in the FCC’s service rules proceeding for the H block (WT Docket No. 12-357) and even conducted its own testing to assist the FCC in assessing the technical issues. But, by the time the H Block auction was actually announced, Sprint decided its business would be better served by focusing its efforts on the deployment of its trove of spectrum in the 2.5 GHz band.


Such reversals are not unusual during the FCC auction process. Frontline Wireless, a company that no longer exists, successfully persuaded the FCC that it would build a nationwide, interoperable public safety network in the 700 MHz band if the FCC imposed a public/private partnership condition on the D Block. But, shortly before the auction was scheduled to start, Frontline announced that it had been unable to obtain sufficient financing, and as a result, the D Block was never sold.


To be clear, I’m not suggesting that Sprint or Frontline acted deceitfully in seeking spectrum rules they considered favorable to their interests without actually participating in the resulting auction. My point is that there is a critical distinction between regulatory efforts and business decisions. Companies often participate in regulatory proceedings to optimize their potential business options, but the results they seek are just that – options – until a business decision must be made.


This distinction leads to another important point: It is impossible for the FCC to predict accurately the ultimate business decisions of multiple independent companies whose particular business plans and the circumstances determining them are unknown to the FCC or anybody else. A particular company often cannot accurately predict its own decisions in rapidly changing circumstances (e.g., when Frontline was lobbying the FCC, it could not know with certainty that it would obtain the financing it required to buy the D Block). This inherent uncertainty is why the discredited licensing methodology of comparative hearings failed. It required the FCC to make reliable predictive judgments about the needs and efficiency of potential spectrum users, which proved to be an impossible task.


Ironically, the bidding restrictions proposed for the incentive auction are a form of “comparative hearing lite”. The DOJ’s recommendation – that the FCC “ensure” that Sprint and T-Mobile win spectrum in the incentive auction – is based on its own predictive judgments regarding the relative spectrum needs of all four nationwide mobile providers and their willingness to use future spectrum resources efficiently. Of course, there is no reason to believe that the DOJ is capable of judging such matters more reliably than the FCC did during the era of comparative hearings. As the H and D Block auctions demonstrate, it is impossible for the DOJ to know whether Sprint and T-Mobile will even show up to participate in the incentive auction.


Published on November 18, 2013 05:13

November 15, 2013

Yes, Net Neutrality is a Dead Man Walking. We Already Have a Fast Lane.

“Net neutrality is a dead man walking,” Marvin Ammori stated in Wired last week, citing the probable demise of the FCC’s Open Internet rules in court. I’d agree for a different reason. Net neutrality has been dead ever since the FCC released its net neutrality order in December 2010. (This is not to say the damaging rules should be upheld by the DC Circuit. For many reasons, the Order should be struck down.) I agree with Ammori because we already have the Internet “fast lane” many net neutrality proponents wanted to prevent. Since that goal is precluded, all the rules do is hang Damocles’ Sword over ISPs regarding traffic management.


The 2010 rules managed to make both sides unhappy. The ISPs face severe penalties if three FCC commissioners believe ISP network management practices “unreasonably discriminate” against certain traffic. Public interest groups, on the other hand, were dissatisfied because they wanted ISPs reclassified as common carriers to prevent deep-pocketed content creators from allying with ISPs to create an Internet “fast lane” for some companies, relegating most other websites to the so-called “winding dirt road” of the public Internet.


Proponents emphasize different goals of net neutrality (to the point–many argue–it’s hard to discern what the term means). But if preventing the creation of a fast lane is the main goal of net neutrality, it’s dead already. Consider two popularly cited net neutrality “violations” that do not violate the Open Internet Order: Netflix’s Open Connect program and Comcast not subjecting its Xfinity video-on-demand (VOD) service to data limits.


Both cases involve the creation of a fast lane for certain content, and activists rail against them. Both cases also involve network practices expressly exempted from net neutrality regulations. The FCC exempted these sorts of services because they are important, benefit the public, and should be encouraged. With Open Connect, Netflix places its many servers across the country, closer to households, which allows its content to stream at a higher quality than most other video sites. Comcast gives its Xfinity VOD fast-lane treatment as well, which is completely legal since VOD from a cable company is a “specialized service” exempt from the rules.


“Specialized service” needs some explanation since it’s a novel concept from the FCC order. The net neutrality rules distinguish between “broadband Internet access service” (BIAS)–to which the regulations apply–and specialized (or managed) services–to which they don’t apply. The exemption of specialized services opens up a dangerous loophole in the view of proponents.


BIAS is what most consider “the Internet.” It’s the everyday websites we access on our computers and smartphones. What are specialized services? In the sleepy month of August the FCC’s Open Internet Advisory Committee released its report on what criteria specialized service needs to meet to be exempt from net neutrality scrutiny (these are influential and advisory, but not binding):


1. The service doesn’t reach large parts of the Internet, and

2. The service is an “application level” service.


The Advisory Committee also thought that “capacity isolation” is a good indicator that a service should be exempt. With capacity isolation, the ISP has one broadband connection going to the home but is separating the service’s data stream from the conventional Internet stream consumers use to visit Facebook, YouTube, and the like. This is how Comcast’s streaming of Xfinity to Xboxes is exempt–it is a proprietary network going into the home. As long as carriers don’t divert BIAS capacity for the application, the FCC will likely turn a blind eye.


What are some examples? Specialized service is marked by higher-quality streams that typically don’t suffer from jitter and latency. If you have “digital voice” from Comcast, for example, you are receiving a specialized service–proprietary VoIP. Specialized service can also include data streams like VOD, e-reader downloads, heart monitor data, and gaming services. The FCC exempted these because some are important enough that they shouldn’t compete with BIAS Internet. It would be obviously damaging to have digital phone service or health monitors disrupted because others are checking up on their fantasy football teams. The FCC also wanted to spur investment in specialized services, and video companies like Netflix are considering pairing up with ISPs to deliver a better experience to customers.


That is to say, the net neutrality effort has failed even worse than most realize. The FCC essentially prohibited innovative traffic management in BIAS, freezing that service into common-carrier-like status. Further, we have an Internet fast lane (which I consider a significant public benefit, though net neutrality proponents do not). As business models evolve and the costs of server networks fall, our two-tier system will become more apparent.


Published on November 15, 2013 12:34

November 14, 2013

New paper on the FTC’s dangerously broad Section 5 authority

The following is a guest post by James C. Cooper of George Mason University School of Law.


What are the limits to the FTC’s Section 5 antitrust authority? The short answer is, who knows. The FTC has been on a 100-year quest to find the maleficence that it alone was meant to combat. Early in its history, the Supreme Court appeared to give the FTC license to challenge a wide range of conduct that had little to do with competition. A series of appellate setbacks in the 1980s – relating largely to claims that Section 5 could reach tacit collusion and oligopolistic interdependence – led the Commission to retrench. Since then, the FTC has avoided litigating a Section 5 case, focusing primarily on invitations to collude (ITCs) and breaches of agreements to disclose or to license standard essential patents. Of course, since all of these cases have settled, no court has had the opportunity to weigh in on whether Congress meant Section 5 to cover this type of conduct.


In my new Mercatus Center working paper, The Perils of Excessive Discretion: The Elusive Meaning of Unfairness in Section 5 of the FTC Act, I argue that the undefined nature of Section 5 leaves the FTC with broad discretion to investigate and extract settlements from companies. Although the appellate rebukes of the 1980s provide some clear boundaries, given firms’ understandable aversion to litigation – especially when only injunctive relief is on the table, and when the risk of follow-on private suits is much lower than it would be under a Sherman Act settlement – there is still a relatively large zone in which the FTC can develop this quasi Section 5 common law with little fear of triggering litigation, which would lead to appellate review. (A similar problem exists with respect to the FTC’s use of its Section 5 authority to become the de facto national privacy and data security regulator, but that’s another post).



Some commissioners saw the Google case as a perfect vehicle for the elusive “stand alone Section 5 case.” But rather than clarifying things, the Commission left a muddle. Although the Commission eventually decided to close its investigation, the multiple statements accompanying this decision suggest several directions in which some commissioners were willing to take Section 5, without offering any coherent framework or limits, revealing the truly confused nature of Section 5 and the concomitant wide discretion that the Commission enjoys to determine what Section 5 covers.


So what are the costs of so much discretion in the hands of the FTC? Uncertainty and rent seeking. Businesses uncertain about where the line between illegality and legality rests may be tempted to pull their competitive punches to limit the risk of an FTC investigation. Further, because defining what constitutes an “unfair method of competition” is so subjective an exercise, firms rationally devote resources to currying favor with those who reside at 600 Pennsylvania Avenue. One need only look at the well-documented lobbying fest – both by Google and its opponents – that accompanied the Google investigation. This diversion of resources from productive to redistributive use may be a boon for private lawyers and economists, but it’s bad for consumers.


What are the answers? Probably the best course would be for the FCC – or Congress – permanently to tether Section 5 to the Sherman Act. Section 5 may have had a role to play very early in its history to the extent that the Sherman Act was too narrowly construed, but we don’t have that problem any more. Even after cases like Trinko, Twombly, and Credit Suisse, the Sherman Act is capacious, fully capable of accommodating conduct that threatens competition. True, this path would leave breaches of FRAND commitments and ITCs involving small firms beyond the FTC’s bailiwick. But the costs of ignoring such conduct are likely to be low. Breaches of FRAND commitments are at base contract disputes between sophisticated parties, and it’s not as if there is a lack of institutions to deal with this problem: courts have shown themselves able to wade into these complex issues. ITCs can be harmful, but only when the invitation blossoms into an agreement, in which case it can be challenged under the Sherman Act.


Another course would be for the Commission to issue guidelines. This path – the unfairness, deception, and ad substantiation statements – did wonders for the legitimacy of the FTC’s consumer protection program in the 1980s. What should Section 5 guidelines look like? They should proscribe a narrow domain, focusing only on conduct that (1) clearly is harmful (or poses a significant threat of substantial harm) to consumers through its effect on competition, (2) is unlikely to generate any cognizable efficiencies, and (3) but for the application of Section 5, would remain unremedied. In practice, this would mean retaining only ITCs and certain information sharing by non-dominant firms that is likely to facilitate collusion.


The economic sophistication of antitrust jurisprudence has progressed light years since the last Supreme Court case involving a Section 5 claim, during the era of Schwinn and Utah Pie. Maybe the fact that the Commission and antitrust commentators have searched so hard and so long for the elusive conduct that Section 5 alone was designed to tackle is a signal that such conduct does not exist. Perhaps Section 5 should go the way of the Robinson-Patman Act, another antitrust statute of a similar vintage that has been overtaken by economics to the point that neither the FTC nor the Antitrust Division enforces it.


Published on November 14, 2013 12:45

Problematic “Do Not Track Kids” Bill Reintroduced

Sen. Edward J. Markey (D-Mass.) and Rep. Joe Barton (R-Texas) have reintroduced their “Do Not Track Kids Act,” which, according to this press release, “amends the historic Children’s Online Privacy Protection Act of 1998 (COPPA), will extend, enhance and update the provisions relating to the collection, use and disclosure of children’s personal information and establishes new protections for personal information of children and teens.” I quickly scanned the new bill and it looks very similar to their previous bill of the same name that they introduced in 2011 and which I wrote about here and then critiqued at much greater length in a subsequent Mercatus Center working paper (“Kids, Privacy, Free Speech & the Internet: Finding The Right Balance”).


Since not much appears to have changed, I would just encourage you to check out my old working paper for a discussion of why this legislation raises a variety of technical and constitutional issues. But I remain perplexed by how supporters of this bill think they can devise age-stratified online privacy protections without requiring full-blown age verification for all Internet users. And once you go down that path, as I note in my paper, you open up a huge Pandora’s Box of problems that we have already grappled with for many years now. The real irony here is that the “problem with these efforts is that expanding COPPA would require the collection of more personal information about kids and parents. For age verification to be effective at the scale of the Internet, the collection of massive amounts of additional data is necessary.”


But that’s hardly the only problem. How about the free speech rights of teens? They do have some, after all, but this bill could create new limitations on their ability to freely surf the Internet, gather information, and communicate with others.


In the end, I don’t expect this bill to pass; it’s mostly just political grandstanding “for the children.” But it’s a real shame that smart people waste their time with counter-productive and constitutionally suspect measures such as these instead of focusing their energy on more constructive educational efforts and awareness-building approaches to online safety and privacy concerns. Again, read my paper for more details on that alternative approach to these issues.


Published on November 14, 2013 12:27

November 13, 2013

A Nice Illustration of The Law of Disruption in Action

My friend and frequent co-blogger Larry Downes has shown how lawmaking in the information age is inexorably governed by “The Law of Disruption” or the fact that “technology changes exponentially, but social, economic, and legal systems change incrementally.” This law is “a simple but unavoidable principle of modern life,” he said, and it will have profound implications for the way businesses, government, and culture evolve going forward. “As the gap between the old world and the new gets wider,” he argues, “conflicts between social, economic, political, and legal systems” will intensify and “nothing can stop the chaos that will follow.” This has profound ramifications for high-tech policymaking, or at least it should.


A powerful illustration of the Law of Disruption in action comes from this cautionary tale told by telecom attorney Jonathan Askin in his new essay, “A Remedy to Clueless Tech Lawyers.” In the early 2000s, Askin served as legal counsel to Free World Dialup (FWD), “a startup that had the potential to dramatically disrupt the telecom sector” with its peer-to-peer IP network that could provide free global voice communications. Askin notes that “FWD paved the way for another startup—Skype. But FWD was Skype before Skype was Skype. The difference was that FWD had U.S. attorneys who put the reigns on FWD to seek FCC approvals to launch free of regulatory constraints.” Here’s what happened to FWD according to Askin:


In lightning regulatory speed (18 months), the FCC acknowledged that FWD was not a telecom provider subject to onerous telecom regulations. Sounds like a victory, right? Think again. During the time it took the FCC to greenlight FWD, the foreign founders of Skype proceeded apace with no regard for U.S. regulatory approvals. The result is that Skype had a two-year head start and a growing embedded user base, making it difficult for FWD, constrained by its U.S.-trained attorneys, to compete.


FWD would eventually shut down while Skype still thrives.


This shows how, no matter how well-intentioned any particular laws or regulation may be, they will be largely ineffective and possibly quite counter-productive when stacked against the realities of the fundamental “law of disruption” because they simply will not be able to keep up with the pace of technological change. “Emerging technologies change at the speed of Moore’s Law,” Downes notes, “leaving statutes that try to define them by their technical features quickly out of date.”


With information markets evolving at the speed of Moore’s Law, I have argued here before that we should demand that public policy do so as well. We can accomplish that by applying Moore’s Law to all current and future technology policy laws and regulations through two simple principles:



Principle #1 – Every new technology proposal should include a provision sunsetting the law or regulation 18 months to two years after enactment. Policymakers can always reenact the rule if they believe it is still sensible.
Principle #2 – Reopen all existing technology laws and regulations and reassess their worth. If no compelling reason for their continued existence can be identified and substantiated, those laws or rules should be repealed within 18 months to two years. If a rationale for continuing existing laws and regs can be identified, the rule can be re-implemented and Principle #1 applied to it.

If critics protest that some laws and regulations are “essential” and can make the case for new or continued action, nothing is stopping Congress from legislating to continue those efforts. But when they do, they should always include a two-year sunset provision to ensure that those rules and regulations are given a frequent fresh look.


Better yet, we should just be doing a lot less legislating and regulating in this arena. The only way to ensure that more technologies and entrepreneurs don’t end up like FWD is to make sure they don’t have to deal with mountains of regulatory red tape to begin with.


Published on November 13, 2013 08:45

November 12, 2013

Tom Brokaw on Old vs. New Media

I think I owe Tom Brokaw an apology. When I first started reading his most recent Wall Street Journal column, “Imagine the Tweets During the Cuban Missile Crisis,” I assumed that I was in for one of those hyper-nostalgic essays about how the ‘good ol’ days’ of mass media had passed us by and why the new media era is an unmitigated disaster. Instead, I was pleased to read his very balanced and sensible view of the old versus new media environments. Reflecting on the evolution of the media marketplace over the past 50 years since JFK’s assassination, Brokaw notes that:


The media climate has changed dramatically. The New Frontier, as Kennedy liked to call his administration, received a great deal of attention, but 50 years ago the major national information sources consisted of a handful of big-city daily newspapers, a few weekly news periodicals and two dominant TV network evening newscasts. Now the political news comes at us 24/7 on cable, through the air, the digital universe, on radio and print. And it comes to us more and more as opinion rather than a recitation of the facts as best they can be determined. News is a hit-and-run game, for the most part, with too little accountability for error.


This leads Brokaw to wonder if the amazing media metamorphosis has been, on net, positive or negative. “The virtual town square has been wired and expanded,” he notes, “but the question remains whether more voices make for a healthier political climate. With a keystroke we can easily move from an online credible source of information to a website larded with opinion or deliberately malicious erroneous claims. Have we simply enlarged the megaphone, cranked up the decibel level, and rallied the like-minded without regard to facts or consequences?”


While he’s obviously concerned about what we might label “quality control issues” associated with some new media outlets, Brokaw’s answer to the previous question he posed generally gets it right:


Still, as a child of an earlier media era, I much prefer the contemporary news and information culture—even when I am occasionally singled out by one side or the other for something I’ve said. I like the range of choices, the new voices, the ease of cross-checking and getting the most obscure information with a minimum of effort. This empowers us as no technological advancement has before. And while it may be easier to stay within one’s ideological comfort zone, left or right, it is a good deal more stimulating to wander beyond the boundaries to find what else is out there.


Good for Tom Brokaw. That generally reflects my own thinking on the issue, which can be found in the essays down below. Generally speaking, we’re better off with today’s world of information abundance than the old world of information scarcity, limited outlets, constrained choices, and homogenous fare. That’s not to say everything is perfect in the new media ecosystem. In particular, Brokaw is right to point to the quality control issues that accompany a world where every voice can be heard. But we’re still figuring out ways to grapple with that problem, largely by encouraging still more voices to join the endless conversation and check the assertions made by others. As Brokaw correctly notes, “This empowers us as no technological advancement has before.” And it leads to more truth and wisdom in the long run.


________________________


Additional Reading:



Thoughts on Andrew Keen, Part 1: Why an Age of Abundance Really is Better than an Age of Scarcity
We Are Living in the Golden Age of Children’s Programming
Book Review: Eli Pariser’s “Filter Bubble”
Television: From Vast Wasteland to Vast Wonders
Testimony at FCC’s Hearing on “Serving the Public Interest in the Digital Era”
Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society


Published on November 12, 2013 07:37

Anupam Chander on free speech and cyberlaw


Anupam Chander, Director of the California International Law Center and Martin Luther King, Jr. Hall Research Scholar at the UC Davis School of Law, discusses his recent paper with co-author Uyen P. Lee titled The Free Speech Foundations of Cyberlaw. Chander addresses how the First Amendment promotes innovation on the Internet; how limitations to free speech vary between the US and Europe; the role of online intermediaries in promoting and protecting the First Amendment; the Communications Decency Act; technology, piracy, and copyright protection; and the tension between privacy and free speech.


Download


Related Links

The Free Speech Foundations of CyberLaw, Chander, Lee
The Electronic Silk Road: How the Web Binds the World Together in Commerce, Amazon
Rules of the Digital Age: PW Talks with Anupam Chander, Publishers Weekly
Securing Privacy in the Internet Age, Amazon
Published on November 12, 2013 03:00

October 31, 2013

Bitcoin is going mainstream. Here is why cypherpunks shouldn’t worry.

Today is a bit of a banner day for Bitcoin. It was five years ago today that Bitcoin was first described in a paper by Satoshi Nakamoto. And today the New York Times has finally run a profile of the cryptocurrency in its “paper of record” pages. In addition, TIME’s cover story this week is about the “deep web” and how Tor and Bitcoin facilitate it.


The fact is that Bitcoin is inching its way into the mainstream. Indeed, the NYT’s headline is “Bitcoin Pursues the Mainstream,” and this month’s issue of WIRED includes an article titled, “Bitcoin’s Radical Days Are Over. Here’s How to Take It Mainstream.”


The radicals, however, are not taking this sitting down. Also today, Cody Wilson and Unsystem have launched a crowdfunding campaign to build an anonymizing wallet. In their explanatory video, they criticize the Bitcoin Foundation as “helping the United States” regulate Bitcoin, presumably to hasten its mainstream adoption. “Their mission is a performance to both agree with, and maintain an independence from, regulatory power,” Wilson says. “But you can’t have it both ways.”



This is an internecine battle that I’ve observed in the Bitcoin community for years: the cypherpunks, who see Bitcoin as an escape hatch from state control, versus the entrepreneurs, who are more interested in the network’s disruptive (and thus profitable) potential. While it might be a fool’s errand, I’d like to make the case that not only is the work of the two groups not in conflict, but that they actually benefit from each other.


I’ve been following Bitcoin since early 2011, and in April of that year I penned the first (yes) mainstream article about Bitcoin. It was in TIME.com, and it’s been credited with kicking off the first bubble. Since then my work has focused on the regulatory policy around Bitcoin and other cryptocurrencies, especially looking to educate policymakers about the workings and potential benefits of decentralized payment systems. Why am I so interested in this? My reasons are twofold and they track both the entrepreneurial and cypherpunk ideals, and yet I don’t think I’m bipolar.



First, I’m interested in Bitcoin because it is clearly a deeply disruptive technology that could result in profound economic and social benefits for the world, especially for the least fortunate. Yet like all new technologies that challenge existing interests and institutions, it is immediately targeted for precautionary and prophylactic regulation with little thought given to the costs of such regulation. Given that my entire career has been spent trying to keep the Internet free and unregulated, Bitcoin is a perfect fit for my attention. I am interested in helping policymakers get the cost-benefit analysis right, which I think is that the costs of regulating Bitcoin far outweigh the benefits.


This gets to the question of whether those of us engaged in educating policymakers are “helping the U.S. government” regulate Bitcoin, as Wilson claims. I guess that’s one way to see it, but let me offer another.


There are no doubt those like the Winklevoss twins who are seemingly inviting as much regulation as possible. (In Cameron Winklevoss’s words, “we love regulation.”) I certainly don’t share that view, and I doubt folks at the Foundation do, either. After all, the Foundation is headed by Jon Matonis, a man whose bylines include articles such as “Don’t Let Bitcoin Morph into Govcoin” and “Money Laundering Is Financial Thoughtcrime”. To say Matonis is a handmaiden of the state is laughable.


Just because one communicates with regulators does not mean one is encouraging regulation. There is a distinction that needs to be made between those who are engaged with regulators in order to invite regulation, and those of us who are engaged in order to, as the tagline of this website reads, “keep politicians’ hands off the ’net.”


As I’ve said before, the choice before us is not whether we should want regulation, but what to do about it. Regulatory power is something that currently exists as a fact of the world whether one likes it or not. Given that regulatory bodies exist, and given that these bodies will decide what the state’s reaction to Bitcoin will be—from an attempt to ban it on one end of the spectrum, to “light-touch” or no regulation as we see in some countries on the other end—what is wrong with advocating for the latter end?


Now, you may not think engagement will ever work, and you may want to focus your efforts on “exit” rather than “voice.” I totally respect that approach, but the beauty of Bitcoin is that if some of us focus on “voice,” it does nothing to hamper those who want to work on “exit.” Indeed, I think it will buy those folks some time. The genius of a decentralized design is that even if I fail to talk sense into regulators, and they issue draconian licensing, identification, and reporting rules and the rest, there is nothing they can do to stop Wilson and Unsystem from developing Dark Wallet.


And that brings me to the second, more important reason that I care about Bitcoin: its censorship resistance. Today, the small handful of regulated payment processors that you can use to transact online can prevent you from spending your money as you see fit. Bitcoin explodes this state of affairs, making it impossible for government to exercise prior restraint of financial transactions. They may be able to punish you after the fact, but they can no longer prevent transactions from taking place.


The obvious illustration of why this is important is what happened to WikiLeaks after it released the State Department cables. PayPal, Bank of America, MasterCard, and the rest prevented American citizens from making perfectly legal contributions to the group. Though the payment processors deny it, their actions were clearly the result of political pressure. Had WikiLeaks and all of its would-be contributors been using Bitcoin at the time, none of those contributions could have been prevented.


That, in turn, brings me to why the work of even the most amoral and ideologically disinterested entrepreneurs is so important, and why it matters that they end up with as friendly a regulatory environment as possible. What the entrepreneurs are doing is building out Bitcoin’s public infrastructure, and they are making it more widely accepted and thus more widely used. In other words, they are making it mainstream, and that should be seen as a good thing even by the most radical. Here’s why.


Bitcoin is a network, and networks thrive on strong network effects. The more people use Bitcoin, even under a regulated system, the more stable the price becomes, the more merchants will accept bitcoins, the more processing power will be dedicated to the network (thus better securing it), and perhaps most importantly, the more mindshare Bitcoin will capture and the more politically difficult it will be to restrain it. Whatever their motivations, what these entrepreneurs are poised to do is grow the Bitcoin network, and that makes the network more valuable to everyone, including the radicals, whether the regulators like it or not.
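The network-effects argument above can be made concrete with a rough model. Metcalfe's law (my illustration, not something the post invokes) values a communications network in proportion to the square of its user count, which captures why every new mainstream user makes the network more valuable to the radicals too:

```python
def metcalfe_value(users: int, k: float = 1.0) -> float:
    """Approximate a network's value as k * n^2 (Metcalfe's law).

    The constant k is an arbitrary scaling factor; only relative
    values are meaningful in this sketch.
    """
    return k * users ** 2

# Doubling the user base quadruples the modeled network value,
# regardless of why (or under what regulations) those users joined.
ratio = metcalfe_value(2_000_000) / metcalfe_value(1_000_000)
print(ratio)  # → 4.0
```

This is only one stylized model of network value (others grow as n log n), but any superlinear curve makes the same point: mainstream adoption compounds the network's worth for all of its users.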


Consider WikiLeaks again. In early 2010, when its PayPal account was frozen, WikiLeaks was not accepting bitcoins. Why not? And even if it had been accepting bitcoins, it’s unlikely very many of the lay persons who wanted to contribute could have figured out how to acquire and send them. Why? The answer to both questions is network effects. The network effects were not there yet. Indeed they may not be there now. But imagine a world where Bitcoin is commonplace, where using (even regulated) exchanges and wallets is second nature, and so is paying for pizza or ordering a book online. In that world, WikiLeaks would not have even considered using PayPal, and a large network of people familiar and comfortable with cryptocurrency could not be prevented from making contributions.


And again, none of this would prevent those who want to rely on “exit” from doing so. In fact, a bigger and stronger network benefits those who choose to use something like Dark Wallet, even if that bigger network is built out by entrepreneurs complying with regulation, and even if it is populated mostly by mainstream users. Indeed, tools like Dark Wallet will remain essential to maintain Bitcoin’s censorship resistance. I can imagine that in a future WikiLeaks-style scenario, the government might put pressure on regulated exchanges and wallets not to process payments to certain addresses. In that case, if Bitcoin is sufficiently mainstream, people might then have the wherewithal to transfer some coins to a Dark Wallet before making a contribution. And if it’s sufficiently mainstream, it won’t just be a handful of cypherpunks who can do that.


As long as Bitcoin remains open and decentralized, cypherpunks and entrepreneurs are not working at cross purposes, no matter how suspicious they may be of each other. The real regulatory threats to Bitcoin are the bonkers bananas proposals to centralize Bitcoin like we saw in the recent WIRED article. Luckily, that article misses the point of not just Bitcoin, but of this moment in history. Decentralization is what makes Bitcoin genius, it’s what attracts both the radicals and the entrepreneurs, and it’s not going away.


Published on October 31, 2013 14:14

The Coming Fight Over the IP Transition

Last week, the House held a hearing about the so-called IP Transition. The IP Transition refers to the telephone industry practice of carrying all wire-based consumer services–voice, Internet, and television–over faster, more capable fiber networks rather than over the traditional copper wires that had fewer capabilities. Most consumers have not and will not notice the change. The completed IP Transition, however, has enormous implications for how the FCC regulates. As one telecom watcher said, “What’s at stake? Everything in telecom policy.”


For 100 years or so, phone service has had a special place in regulatory law given its importance in connecting the public. Phone service was almost exclusively over copper wires, a service affectionately called “plain old telephone service” (POTS). AT&T became the government-approved POTS national monopolist in 1913 (which ended with the AT&T antitrust breakup in the 1980s). The deal was: AT&T got to be a protected monopolist while the government got to require AT&T to provide various public benefits. The most significant of these is universal service–AT&T had to serve virtually every US household and charge reasonable rates even to remote (that is, expensive) customers.


To create more phone competitors to the Baby Bells–the phone companies spun off from the AT&T break-up in the 1980s–Congress passed the 1996 Telecom Act and the FCC put burdens on the Baby Bells to allow new phone companies to lease the Baby Bells’ AT&T-created copper wires at regulated rates. The market, however, changed in ways never envisioned in the 1990s. Today, phone companies face competition–not from the new phone companies leasing the old monopoly infrastructure but from entirely different technologies. You can receive voice service from your cable company (“digital voice”), your “phone” company (POTS), your wireless company, and even Internet-based providers like Vonage and Skype. Increasingly, households are leaving POTS behind in favor of voice service from cable or wireless providers. Yet POTS providers–like Verizon and AT&T (which also offer wireless service)–must abide by monopoly-era regulations that their cable and wireless competitors–Comcast, Sprint, and others–don’t have to abide by.


Understanding the significance of the IP Transition requires (unfortunately) knowing a little bit about Title I and Title II of the 1996 Telecom Act. Providers of “telecommunications services”–the category covering the copper-network phone companies–are heavily regulated by the FCC under Title II. On the other hand, providers of “information services,” which include Internet service, are lightly regulated under Title I. This division made some sense in the 1990s. It is increasingly under stress now because burdened “telecommunications” companies like AT&T and Verizon are offering “information services” like Internet via DSL, FiOS, and U-Verse. Conversely, lightly-regulated “information services” companies like Comcast, Charter, and Time-Warner Cable are entering the regulated telephone market but face few of the regulatory burdens.


Which brings us to the IP Transition. As Title II phone companies replace their copper wires with fiber and deploy broadband networks to compete with cable companies, their customers’ phone service is being carried via IP packets. Functionally, these new networks act like a heavily-regulated Title II service since they carry voice, but they also act like the Title I broadband networks that cable providers built. So should these new fiber networks be burdened like Title II services or deregulated like Title I services? Or is it possible to achieve some middle ground using existing law? Those are the questions before the FCC and policymakers. Billions of dollars of investment will be accelerated or slowed and many firms will live or die depending on how the FCC and Congress act. Stay tuned.


Published on October 31, 2013 13:18
