Adam Thierer’s Blog
January 16, 2015
Striking a Sensible Balance on the Internet of Things and Privacy
This week, the Future of Privacy Forum (FPF) released a new white paper entitled, “A Practical Privacy Paradigm for Wearables,” which I believe can help us find policy consensus regarding the privacy and security concerns associated with the Internet of Things (IoT) and wearable technologies. I’ve been monitoring IoT policy developments closely and I recently published a big working paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will appear shortly in the Richmond Journal of Law & Technology. I have also penned several other essays on IoT issues. So, I will be relating the FPF report to some of my own work.
The new FPF report, which was penned by Christopher Wolf, Jules Polonetsky, and Kelsey Finch, aims to accomplish the same goal I had in my own recent paper: sketching out constructive and practical solutions to the privacy and security issues associated with the IoT and wearable tech so as not to discourage the amazing, life-enriching innovations that could flow from this space. Flexibility is the key, they argue. “Premature regulation at an early stage in wearable technological development may freeze or warp the technology before it achieves its potential, and may not be able to account for technologies still to come,” the authors note. “Given that some uses are inherently more sensitive than others, and that there may be many new uses still to come, flexibility will be critical going forward.” (p. 3)
That flexible approach is at the heart of how the FPF authors want to see Fair Information Practice Principles (FIPPs) applied in this space. The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. The FPF authors correctly note that,
The FIPPs do not establish specific rules prescribing how organizations should provide privacy protections in all contexts, but rather provide high-level guidelines. Over time, as technologies and the global privacy context have changed, the FIPPs have been presented in different ways with different emphases. Accordingly, we urge policymakers to enable the adaptation of these fundamental principles in ways that reflect technological and market developments. (p. 4)
They go on to explain how each of the FIPPs can provide a certain degree of general guidance for the IoT and wearable tech, but also caution that: “A rigid application of the FIPPs could inhibit these technologies from even functioning, and while privacy protections remain essential, a degree of flexibility will be key to ensuring the Internet of Things can develop in ways that best help consumer needs and desires.” (p. 4) Throughout the report, the FPF authors stress the need for the FIPPs to be “practically applied,” and they nicely explain how the appropriate application of any particular one of the FIPPs “will depend on the circumstances.” For those reasons, they conclude, “we urge policymakers to adopt a forward-thinking, flexible application of the FIPPs.” (p. 11)
The approach that Wolf, Polonetsky, and Finch set forth in this new FPF report is very much consistent with the policy framework I sketched out in my forthcoming law review article. “The need for flexibility and adaptability will be paramount if innovation is to continue in this space,” I argued. In essence, best practices need to remain just that: best practices — not fixed, static, top-down regulatory edicts. As I noted:
Regardless of whether they will be enforced internally by firms or by ex post FTC enforcement actions, best practices must not become a heavy-handed, quasi-regulatory straitjacket. A focus on security and privacy by design does not mean those are the only values and design principles that developers should focus on when innovating. Cost, convenience, choice, and usability are all important values too. In fact, many consumers will prioritize those values over privacy and security — even as activists, academics, and policymakers simultaneously suggest that more should be done to address privacy and security concerns.
Finally, best practices for privacy and security issues will need to evolve as social acceptance of various technologies and business practices evolve. For example, had “privacy by design” been interpreted strictly when wireless geolocation capabilities were first being developed, these technologies might have been shunned because of the privacy concerns they raised. With time, however, geolocation technologies have become a better understood and more widely accepted capability that consumers have come to expect will be embedded in many of their digital devices. Those geolocation capabilities enable services that consumers now take for granted, such as instantaneous mapping services and real-time traffic updates.
This is why flexibility is crucial when interpreting privacy and security best practices.
The only thing I think was missing from the FPF report was a broader discussion of other constructive approaches that involve education, etiquette, and empowerment-based solutions. I would also have liked to see some discussion of how other existing legal mechanisms — privacy torts, contractual enforcement mechanisms, property rights, state “peeping Tom” laws, and existing privacy statutes — might cover some of the hard cases that could develop on this front. I discuss those and other “bottom-up” solutions in Section IV of my law review article and note that they can contribute to the sort of “layered” approach we need to address privacy and security concerns for the IoT and wearable tech.
In any event, I encourage everyone to check out the new Future of Privacy Forum report as well as the many excellent best practice guidelines they have put together to help innovators adopt sensible privacy and security best practices. FPF has done some great work on this front.
Additional Reading
essay: “A Nonpartisan Policy Vision for the Internet of Things,” December 11, 2014.
slide presentation: “Policy Issues Surrounding the Internet of Things & Wearable Technology,” September 12, 2014.
law review article: “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” November 2014.
essay: “CES 2014 Report: The Internet of Things Arrives, but Will Washington Welcome It?” January 8, 2014.
essay: “The Growing Conflict of Visions over the Internet of Things & Privacy,” January 14, 2014.
oped: “Can We Adapt to the Internet of Things?” IAPP Privacy Perspectives, June 19, 2013.
agency filing: My Filing to the FTC in its ‘Internet of Things’ Proceeding, May 31, 2013.
book: Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, 2014.
video: Cap Hill Briefing on Emerging Tech Policy Issues, June 2014.
essay: “What’s at Stake with the FTC’s Internet of Things Workshop,” November 18, 2013.
law review article: “Removing Roadblocks to Intelligent Vehicles and Driverless Cars,” September 16, 2014.
Again, We Humans Are Pretty Good at Adapting to Technological Change
Claire Cain Miller of The New York Times posted an interesting story yesterday noting how, “Technology Has Made Life Different, but Not Necessarily More Stressful.” Her essay builds on a new study by researchers at the Pew Research Center and Rutgers University on “Social Media and the Cost of Caring.” Miller’s essay and this new Pew/Rutgers study indirectly make a point that I am always discussing in my own work, but that is often ignored or downplayed by many technological critics, namely: We humans have repeatedly proven quite good at adapting to technological change, even when it entails some heartburn along the way.
The major takeaway of the Pew/Rutgers study was that, “social media users are not any more likely to feel stress than others, but there is a subgroup of social media users who are more aware of stressful events in their friends’ lives and this subgroup of social media users does feel more stress.” Commenting on the study, Miller of the Times notes:
Fear of technology is nothing new. Telephones, watches and televisions were similarly believed to interrupt people’s lives and pressure them to be more productive. In some ways they did, but the benefits offset the stressors. New technology is making our lives different, but not necessarily more stressful than they would have been otherwise. “It’s yet another example of how we overestimate the effect these technologies are having in our lives,” said Keith Hampton, a sociologist at Rutgers and an author of the study. . . . Just as the telephone made it easier to maintain in-person relationships but neither replaced nor ruined them, this recent research suggests that digital technology can become a tool to augment the relationships humans already have.
I found this of great interest because I have written about how humans assimilate new technologies into their lives and become more resilient in the process as they learn various coping techniques. I elaborated on these issues in a lengthy essay last summer entitled, “Muddling Through: How We Learn to Cope with Technological Change.” I borrowed the term “muddling through” from Joel Garreau’s terrific 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human. Garreau argued that history can be viewed “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.”
Garreau associated this with what he called the “Prevail” scenario, contrasting it with the “Heaven” scenario, which holds that technology drives history relentlessly, and in almost every way for the better, and the “Hell” scenario, which worries that “technology is used for extreme evil, threatening humanity with extinction.” Under the “Prevail” scenario, Garreau argued, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he concluded. (p. 154) Or, as John Seely Brown and Paul Duguid noted in their excellent 2001 essay, “Response to Bill Joy and the Doom-and-Gloom Technofuturists”:
technological and social systems shape each other. The same is true on a larger scale. […] Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge for futurology (and for all of us) is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations. Social and technological systems do not develop independently; the two evolve together in complex feedback loops, wherein each drives, restrains and accelerates change in the other.
In my essay last summer, I sketched out the reasons why I think this “prevail” or “muddling through” scenario offers the best explanation for how we learn to cope with technological disruption and prosper in the process. Again, it comes down to the fact that people and institutions learn to cope with technological change and become more resilient over time. It’s a learning process, and we humans are good at rolling with the punches and finding new baselines along the way. While “muddling through” can sometimes be quite difficult and messy, we adjust to most of the new technological realities we face and, over time, find constructive solutions to the really hard problems.
So, while it’s always good to reflect on the challenges of life in an age of never-ending, rapid-fire technological change, there’s almost never cause for panic. Read my old essay for more discussion on why I remain so optimistic about the human condition.
Regulatory Capture: FAA and Commercial Drones Edition
Regular readers know that I can get a little feisty when it comes to the topic of “regulatory capture,” which occurs when special interests co-opt policymakers or political bodies (regulatory agencies, in particular) to further their own ends. As I noted in my big compendium, “Regulatory Capture: What the Experts Have Found”:
While capture theory cannot explain all regulatory policies or developments, it does provide an explanation for the actions of political actors with dismaying regularity. Because regulatory capture theory conflicts mightily with romanticized notions of “independent” regulatory agencies or “scientific” bureaucracy, it often evokes a visceral reaction and a fair bit of denialism.
Indeed, the more I highlight the problem of regulatory capture and offer concrete examples of it in practice, the more push-back I get from true believers in the idea of “independent” agencies. Even if I can get them to admit that history offers countless examples of capture in action, and that a huge number of scholars of all persuasions have documented this problem, they will continue to insist that WE CAN DO BETTER! and that it is just a matter of having THE RIGHT PEOPLE! who will TRY HARDER!
Well, maybe. But I am a realist and a believer in historical evidence. And the evidence shows, again and again, that when Congress (a) delegates broad, ambiguous authority to regulatory agencies, (b) exercises very limited oversight over that agency, and then, worse yet, (c) allows that agency’s budget to grow without any meaningful constraint, then the situation is ripe for abuse. Specifically, where unchecked power exists, interests will look to exploit it for their own ends.
In any event, all I can do is continue to document the problem of regulatory capture in action and try to bring it to the attention of pundits and policymakers in the hope that we can start the push for real agency oversight and reform. Today’s case in point comes from a field I have been covering here a lot over the past year: commercial drone innovation.
Yesterday, via his Twitter account, Wall Street Journal reporter Christopher Mims brought this doozy of an example of regulatory capture to my attention, which involves Federal Aviation Administration officials going to bat for the pilots who frequently lobby the agency and want commercial drone innovations constrained. Here’s how Jack Nicas begins the WSJ piece that Mims brought to my attention:
In an unfolding battle over U.S. skies, it’s man versus drone. Aerial surveyors, photographers and moviemaking pilots are increasingly losing business to robots that often can do their jobs faster, cheaper and better. That competition, paired with concerns about midair collisions with drones, has made commercial pilots some of the fiercest opponents to unmanned aircraft. And now these aviators are fighting back, lobbying regulators for strict rules for the devices and reporting unauthorized drone users to authorities. Jim Williams, head of the Federal Aviation Administration’s unmanned-aircraft office, said many FAA investigations into commercial-drone flights begin with tips from manned-aircraft pilots who compete with those drones. “They’ll let us know that, ’Hey, I’m losing all my business to these guys. They’re not approved. Go investigate,’” Mr. Williams said at a drone conference last year. “We will investigate those.”
Well, that pretty much says it all. If you’re losing business because an innovative new technology or pesky new entrant has the audacity to come onto your turf and compete, well then, just come on down to your friendly neighborhood regulator and get yourself a double serving of tasty industry protectionism!
And so the myth of “agency independence” continues, and perhaps it will never die. It reminds me of a line from those rock-and-roll sages in Guns N’ Roses: “I’ve worked too hard for my illusions just to throw them all away!”
January 15, 2015
Dispatches from CES 2015 on Privacy Implications of New Technologies
Over at the International Association of Privacy Professionals (IAPP) Privacy Perspectives blog, I have two “Dispatches from CES 2015” up. (#1 & #2) While I was out in Vegas for the big show, I had a chance to speak on a panel entitled, “Privacy and the IoT: Navigating Policy Issues.” (Video can be found here. It’s the second one on the video playlist.) Federal Trade Commission (FTC) Chairwoman Edith Ramirez kicked off that session and stressed some of the concerns she and others share about the privacy and security issues raised by the Internet of Things and wearable technologies.
Before and after our panel discussion, I had a chance to walk the show floor and take a look at the amazing array of new gadgets and services that will soon be hitting the market. A huge percentage of the show floor space was dedicated to IoT technologies, and wearable tech in particular. But the show also featured many other amazing technologies that promise to bring consumers a wealth of new benefits in coming years. Of course, many of those technologies will also raise privacy and security concerns, as I noted in my two essays for IAPP. The first of my dispatches focuses primarily on the Internet of Things and wearable technologies that I saw at CES. In my second dispatch, I discuss the privacy and security implications of the increasing miniaturization of cameras, drone technologies, and various robotic technologies (especially personal care robots).
I open the first column by noting that “as I was walking the floor at this year’s massive CES 2015 tech extravaganza, I couldn’t help but think of the heartburn that privacy professionals and advocates will face in coming years.” And I close the second dispatch by concluding that, “The world of technology is changing rapidly and so, too, must the role of the privacy professional. The technologies on display at this year’s CES 2015 make it clear that a whole new class of concerns are emerging that will require IAPP members to broaden their issue set and find constructive solutions to the many challenges ahead.” Jump over to the Privacy Perspectives blog to read more.
January 14, 2015
Trouble Ahead for Municipal Broadband
President Obama recently announced his wish for the FCC to preempt state laws that make building public broadband networks harder. Per the White House, nineteen states “have held back broadband access . . . and economic opportunity” by having onerous restrictions on municipal broadband projects.
Much of what the White House claims is PR nonsense. Most of these so-called state restrictions on public broadband are reasonable considering the substantial financial risk public networks pose to taxpayers. Minnesota and Colorado, for instance, require approval from local voters before spending money on a public network. Nevada’s “restriction” is essentially that public broadband is only permitted in the neediest, most rural parts of the state. Some states don’t allow utilities to provide broadband because utilities have a nasty habit of raising, say, everyone’s electricity bills when the money-losing utility broadband network fails to live up to revenue expectations. And so on.
It is an abuse of the English language for political activists to say municipal broadband is just a competitor to existing providers. If the federal government dropped over $100 million in a small city to build publicly-owned grocery stores with subsidized food, local grocery stores would, of course, strenuously object that this is patently unfair and harms private grocers. This is what the US government did in Chattanooga, using $100 million to build a public network. The US government has spent billions on broadband, and much of it goes to public broadband networks. The activists’ response to the carriers, who obviously complain about this “competition,” is essentially, “maybe now you’ll upgrade and compete harder.” It’s absurd on its face.
Public networks are unwise and costly. Every dollar diverted to some money-losing public network is one less to use on worthy societal needs. There are serious problems with publicly-funded retail broadband networks. A few come to mind:
1. The economic benefits of municipal broadband are dubious. A recent Mercatus economics paper by researcher Brian Deignan showed disappointing results for municipal broadband. The paper uses 23 years of BLS data from 80 cities that have deployed broadband and analyzes municipal broadband’s effect on 1) quantity of businesses; 2) employee wages; and 3) employment. Ultimately, the data suggest municipal broadband has almost zero effect on the private sector.
On the plus side, municipal broadband is associated with a 3 percent increase in the number of business establishments in a city. However, there is a small, negative effect on employee wages. There is no effect on private employment but the existence of a public broadband network is associated with a 6 percent increase in local government employment. The substantial taxpayer risk for such modest economic benefits leads many states to reasonably conclude these projects aren’t worth the trouble.
2. There are serious federalism problems with the FCC preempting state laws. Matthew Berry, FCC Commissioner Pai’s chief of staff, explains the legal risks. Cities are creatures of state law and states have substantial powers to regulate what cities do. In some circumstances, Congress can preempt state laws, but as the Supreme Court has held, for an agency to preempt state laws, Congress must provide a clear statement that the FCC is authorized to preempt. Absent a clear statement from Congress, it’s unlikely the FCC could constitutionally preempt state laws regulating municipal broadband.
3. Broadband networks are hard work. Tearing up streets, attaching to poles, and wiring homes, condos, and apartments is expensive and time-consuming. It costs thousands of dollars per home passed, and the take-up rates are uncertain. Truck-rolls for routine maintenance and customer service cost hundreds of dollars per pop. Additionally, broadband network design is growing increasingly complex as several services converge to IP networks. Interconnection requires complex commercial agreements. Further, carriers are starting to offer additional services using software-defined networks and network function virtualization. I’m skeptical that city managers can stay cutting-edge years into the future. The costs for failed networks will fall to taxpayers.
4. City governments are just not very good at supplying triple play services, as the Phoenix Center and others have pointed out. People want phone, Internet, and television in one bill (and don’t forget video-on-demand service). Cities will often find that there is a lack of interest in a broadband connection that doesn’t also provide traditional television. Google Fiber (not a public network, obviously) initially intended to offer only broadband service. However, it quickly found out that potential subscribers wanted their broadband and video bundled together into one contract. If the very competent planners at Google Fiber weren’t aware of this consumer habit, the city planners in Moose Lake and Peoria budgeting for municipal broadband might miss it, too. Further, city administrators are not particularly good at negotiating competitive video bundles (municipal cable revealed this) because of their small size and lack of expertise.
5. A municipal network can chase away commercial network expansion and investment. This, of course, is the main complaint of the cable and telco players. If there is a marginal town an ISP is considering serving or upgrading, the presence of a “public competitor” makes the decision easy. Competing against a network with ready access to taxpayer money is senseless.
6. When cities build networks where ISPs are already serving the public, ISPs do not take it lying down, either. ISPs use their considerable size and industry expertise to their advantage, like adding must-have channels to basic cable packages. The economics are particularly difficult for a city entering the market. Broadband networks have high up-front costs but fairly low marginal costs. This makes price reductions by incumbents very attractive as a way to limit customer defections to the entrant. Dropping prices or raising speeds in neighborhoods where the city builds, frustrating the city’s customer acquisition, is a common practice. Apparently some cities didn’t learn their lesson in the late 1990s, when municipal cable was a (short-lived) popular idea. Cities often hemorrhaged tax dollars when faced with hardball tactics, and their penetration rates never reached the optimistic projections.
There are other complications that turn public broadband into expensive boondoggles. People often say in surveys they would pay more for ultra-fast broadband, but when actually offered it, many refuse to pay higher prices for higher speeds, particularly when the TV channels offered in the bundle are paltry compared to those of the “slower” existing providers. When city networks lose money, as they often do, the utility running them will often cross-subsidize the failing broadband service. Electric utility customers’ dollars are then diverted to maintaining broadband. Further, private carriers can drag lawsuits out to prevent city networks. And your run-of-the-mill city contractor corruption and embezzlement are also possibilities.
I can imagine circumstances where municipal broadband makes sense. However, the President and the FCC are doing the public a disservice by promoting widespread publicly-funded broadband in violation of state laws. This political priority, combined with the probable Title II order next month, signals an inauspicious start to 2015.
January 13, 2015
Making Sure the “Trolley Problem” Doesn’t Derail Life-Saving Innovation
I want to highlight an important new blog post (“Slow Down That Runaway Ethical Trolley”) on the ethical trade-offs at work with autonomous vehicle systems by Bryant Walker Smith, a leading expert on these issues. Writing over at Stanford University’s Center for Internet and Society blog, Smith notes that, while serious ethical dilemmas will always be present with such technologies, “we should not allow the perfect to be the enemy of the good.” He notes that many ethical philosophers, legal theorists, and media pundits have recently been actively debating variations of the classic “Trolley Problem,” and its ramifications for the development of autonomous or semi-autonomous systems. (Here’s some quick background on the Trolley Problem, a thought experiment involving the choices made during various no-win accident scenarios.) Commenting on the increased prevalence of the Trolley Problem in these debates, Smith observes that:
Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.
Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?
That’s a great question and one that Ryan Hagemann and I put some thought into as part of our recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” That paper, which has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy, outlines the many benefits of autonomous or semi-autonomous systems and discusses the potential cost of delaying their widespread adoption. When it comes to “Trolley Problem”-like ethical questions, Hagemann and I argue that, “these ethical considerations need to be evaluated against the backdrop of the current state of affairs, in which tens of thousands of people die each year in auto-related accidents due to human error.” We continue later in the paper:
Autonomous vehicles are unlikely to create 100 percent safe, crash-free roadways, but if they significantly decrease the number of people killed or injured as a result of human error, then we can comfortably suggest that the implications of the technology, as a whole, are a boon to society. The ethical underpinnings of what makes for good software design and computer-generated responses are a difficult and philosophically robust space for discussion. Given the abstract nature of the intersection of ethics and robotics, a more detailed consideration and analysis of this space must be left for future research. Important work is currently being done on this subject. But those ethical considerations must not derail ongoing experimentation with intelligent-vehicle technology, which could save many lives and have many other benefits, as already noted. Only through ongoing experimentation and feedback mechanisms can we expect to see constant improvement in how autonomous vehicles respond in these situations to further minimize the potential for accidents and harms. (p. 42-3)
None of this should be read to suggest that the ethical issues being raised by some philosophers or other pundits are unimportant. To the contrary, they are raising legitimate concerns about how ethics are “baked-in” to the algorithms that control autonomous or semi-autonomous systems. It is vital we continue to debate the wisdom of the choices made by the companies and programmers behind those technologies and consider better ways to inform and improve their judgments about how to ‘optimize the sub-optimal,’ so to speak. After all, when you are making decisions about how to minimize the potential for harm — including the loss of life — there are many thorny issues that must be considered and all of them will have downsides. Smith considers a few when he notes:
Automation does not mean an end to uncertainty. How is an automated vehicle (or its designers or users) to immediately know what another driver will do? How is it to precisely ascertain the number or condition of passengers in adjacent vehicles? How is it to accurately predict the harm that will follow from a particular course of action? Even if specific ethical choices are made prospectively, this continuing uncertainty could frustrate their implementation.
Again, these are all valid questions deserving serious exploration, but we’re not having this discussion in a vacuum. Ivory tower debates cannot be divorced from real-world conditions. Although road safety has been improving for many years, people are still dying at a staggering rate due to vehicle-related accidents. Specifically, in 2012, there were 33,561 total traffic fatalities (92 per day) and 2,362,000 people injured (6,454 per day) in over 5,615,000 reported crashes. And, to reiterate, the bulk of those accidents were due to human error.
That is a staggering toll and anything we can do to reduce it significantly is something we need to be pursuing with great vigor, even while we continue to sort through some of those challenging ethical issues associated with automated systems and algorithms. Smith argues, correctly in my opinion, that “a more practical approach in emergency situations may be to weight general rules of behavior: decelerate, avoid humans, avoid obstacles as they arise, stay in the lane, and so forth. … [T]his simplified approach would accept some failures in order to expedite and entrench what could be automation’s larger successes. As Voltaire reminds us, we should not allow the perfect to be the enemy of the good.”
Quite right. Indeed, the next time someone poses an ethical thought experiment along the lines of the Trolley Problem, do what I do and reverse the equation. Ask them about the ethics of slowing down the introduction of a technology into our society that would result in a (potentially significant) lowering of the nearly 100 deaths and over 6,000 injuries that occur because of vehicle-related accidents each day in the United States. Because that’s no hypothetical thought experiment; that’s the world we live in right now.
______________
(P.S. The late, great political scientist Aaron Wildavsky crafted a framework for considering these complex issues in his brilliant 1988 book, Searching for Safety. No book has had a more significant influence on my thinking about these and other “risk trade-off” issues since I first read it 25 years ago. I cannot recommend it highly enough. I discussed Wildavsky’s framework and vision in my recent little book on “Permissionless Innovation.” Readers might also be interested in my August 2013 essay, “On the Line between Technology Ethics vs. Technology Policy,” which featured an exchange with ethical philosopher Patrick Lin, co-editor of an excellent collection of essays on Robot Ethics: The Ethical and Social Implications of Robotics. You should add that book to your shelf if you are interested in these issues.)
January 9, 2015
How the FCC Killed a Nationwide Wireless Broadband Network
Many readers will recall the telecom soap opera featuring the GPS industry and LightSquared, and LightSquared’s subsequent bankruptcy. Economist Thomas W. Hazlett (who is now at Clemson, after a long tenure at the GMU School of Law) and I wrote an article published in the Duke Law & Technology Review titled Tragedy of the Regulatory Commons: Lightsquared and the Missing Spectrum Rights. The piece documents LightSquared’s ambitions and dramatic collapse. Contrary to popular reporting on this story, this was not a failure of technology. We make the case that, instead, the FCC’s method of rights assignment led to the demise of LightSquared and deprived American consumers of a new nationwide wireless network. Our analysis has important implications as the FCC and Congress seek to make wide swaths of spectrum available for unlicensed devices. Namely, our paper suggests that the top-down administrative planning model is increasingly harming consumers and delaying new technologies.
Read commentary from the GPS community about LightSquared and you’ll get the impression LightSquared is run by rapacious financiers (namely CEO Phil Falcone) who were willing to flout FCC rules and endanger thousands of American lives with their proposed LTE network. LightSquared filings, on the other hand, paint the GPS community as defense-backed dinosaurs who abused the political process to protect their deficient devices from an innovative entrant. As is often the case, it’s more complicated than these morality plays. We don’t find villains in this tale: just destructive rent-seeking triggered by poor FCC spectrum policy.
We avoid assigning fault to either LightSquared or GPS, but we stipulate that there were serious interference problems between LightSquared’s network and GPS devices. Interference is not an intractable problem, however. Interference is resolved every day in other circumstances. The problem here was intractable because GPS users are dispersed and unlicensed (including government users), and could not coordinate and bargain with LightSquared when problems arose. There is no feasible way for GPS companies to track down and compel users to adopt more efficient devices, for instance, even if LightSquared compensated them for the hassle. Knowing that GPS mitigation was infeasible, LightSquared’s only recourse after GPS users objected to the new LTE network was through the political and regulatory process, a fight LightSquared lost badly. The biggest losers, however, were consumers, who were deprived of another wireless broadband network because FCC spectrum assignment prevented win-win bargaining between licensees.
Our paper provides critical background to this dispute. Around 2004, because satellite phone spectrum was underused, the FCC gave satellite phone licensees the flexibility to repurpose some of their spectrum for use in traditional cellular phone networks. (Many people are appalled to learn that spectrum policy still largely resembles Soviet-style command-and-control. The FCC tells the wireless industry, essentially: “You can operate satellite phones only in band X. You can operate satellite TV in band Y. You can operate broadcast TV in band Z.” and assigns spectrum to industry players accordingly.) Seeing this underused satellite phone spectrum, LightSquared acquired some of this flexible satellite spectrum so that it could deploy a nationwide cellular phone network in competition with Verizon Wireless and AT&T Mobility. LightSquared had spent $4 billion in developing its network and reportedly had plans to spend $10 billion more when things ground to a halt.
In early 2012, the Department of Commerce objected to LightSquared’s network on the grounds that the network would interfere with GPS units (including, reportedly, DOD and FAA instruments). Immediately, the FCC suspended LightSquared’s authorization to deploy a cellular network and backtracked on the 2004 rules permitting cellular phones in that band. Three months later, LightSquared declared bankruptcy. This was a non-market failure, not a market failure. This regulatory failure obtains because virtually any interference with existing wireless operations is prohibited, even if the social benefits of a new wireless network are vast.
This analysis is not simply scholarly theory about the nature of regulation and property rights. We provide real-world evidence supporting our view that, had the FCC assigned flexible, de facto property rights to GPS licensees, as it does in some other bands, rather than relying on fragmented unlicensed users, LightSquared might be in operation today serving millions with wireless broadband. Our evidence comes, in fact, from LightSquared’s deals with non-GPS parties. Namely, LightSquared had interference problems with another satellite licensee on adjacent spectrum: Inmarsat.
Inmarsat provides public safety, aviation, and national security applications and hundreds of thousands of devices to government and commercial users. The LightSquared-Inmarsat interference problems were unavoidable, but because Inmarsat had de facto property rights to its spectrum, it could internalize financial gains and coordinate with LightSquared. The result was classic Coasian bargaining. The two companies swapped spectrum and activated an agreement in 2010 in which LightSquared would pay Inmarsat over $300 million. Flush with cash and spectrum, Inmarsat could rationalize its spectrum and replace devices that wouldn’t play nicely with LightSquared LTE operations.
These trades avoided the non-market failure the FCC produced by giving GPS users fragmented, non-exclusive property rights. When de facto property rights are assigned to licensees, contentious spectrum border disputes typically give way to private ordering. The result is regular spectrum swaps and sales between competitors. Wireless licensees like Verizon, AT&T, Sprint, and T-Mobile deal with local interference and unauthorized operations daily because they have enforceable, exclusive rights to their spectrum. The FCC, unfortunately, never assigned these kinds of spectrum rights to the GPS industry.
The evaporation of billions of dollars of LightSquared funds was a non-market failure, not a market failure and not a technology failure. The economic loss to consumers was even greater than LightSquared’s. Different FCC rules could have permitted welfare-enhancing coordination between LightSquared and GPS. The FCC’s error was the nature of rights the agency assigned for GPS use. By authorizing the use of millions of unlicensed devices adjacent to LightSquared’s spectrum, the FCC virtually ensured that future attempts to reallocate spectrum in these bands would prove contentious. Going forward, the FCC should think far less about which technologies they want to promote and more about the nature of spectrum rights assigned. For tech entrepreneurs and policy entrepreneurs to create innovative new wireless products, they need well-functioning spectrum markets. The GPS experience shows vividly what to avoid.
January 5, 2015
My Writing on Internet of Things (Thus Far)
I’ve spent much of the past year studying the potential public policy ramifications associated with the rise of the Internet of Things (IoT). As I was preparing some notes for my Jan. 6th panel discussion on “Privacy and the IoT: Navigating Policy Issues” at the 2015 CES show, I went back and collected all my writing on IoT issues so that I would have everything in one place. Thus, down below I have listed most of what I’ve done over the past year or so. Most of this writing is focused on the privacy and security implications of the Internet of Things, and wearable technologies in particular.
I plan to stay on top of these issues in 2015 and beyond because, as I noted when I spoke on a previous CES panel on these issues, the Internet of Things finds itself at the center of what we might think of as a perfect storm of public policy concerns: privacy, safety, security, intellectual property, economic and labor disruptions, automation concerns, wireless spectrum issues, technical standards, and more. When a new technology raises one or two of these policy concerns, innovators in those sectors can expect some interest and inquiries from lawmakers or regulators. But when a new technology potentially touches all of these issues, innovators in that space can expect an avalanche of attention and a potential world of regulatory trouble. Moreover, it sets the stage for a grand “clash of visions” about the future of IoT technologies that will continue to intensify in coming months and years.
That’s why I’ll be monitoring developments closely in this field going forward. For now, here’s what I’ve done on this issue as I prepare to head out to Las Vegas for another CES extravaganza that promises to showcase so many exciting IoT technologies.
essay: “A Nonpartisan Policy Vision for the Internet of Things,” December 11, 2014.
slide presentation: “Policy Issues Surrounding the Internet of Things & Wearable Technology,” September 12, 2014.
law review article: “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” November 2014.
essay: “CES 2014 Report: The Internet of Things Arrives, but Will Washington Welcome It?” January 8, 2014.
essay: “The Growing Conflict of Visions over the Internet of Things & Privacy,” January 14, 2014.
oped: “Can We Adapt to the Internet of Things?” IAPP Privacy Perspectives, June 19, 2013.
agency filing: My Filing to the FTC in its ‘Internet of Things’ Proceeding, May 31, 2013.
book: Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, 2014.
video: Cap Hill Briefing on Emerging Tech Policy Issues, June 2014.
essay: “What’s at Stake with the FTC’s Internet of Things Workshop,” November 18, 2013.
law review article: “Removing Roadblocks to Intelligent Vehicles and Driverless Cars,” September 16, 2014.
slide presentation: “Internet of Things & Wearable Technology: Unlocking the Next Wave of Data-Driven Innovation.”
December 31, 2014
Hack Hell
2014 was quite the year for high-profile hackings and puffed-up politicians trying to out-ham each other on who is tougher on cybercrime. I thought I’d assemble some of the year’s worst hits to ring in 2015.
In no particular order:
Home Depot: The 2013 Target breach that leaked around 40 million customer financial records was unceremoniously topped by Home Depot’s breach of over 56 million payment cards and 53 million email addresses in July. Both companies fell prey to similar infiltration tactics: the hackers obtained passwords from a vendor of each retail giant and exploited a vulnerability in the Windows OS to install malware in the firms’ self-checkout lanes that collected customers’ credit card data. Millions of customers became vulnerable to phishing scams and credit card fraud—with the added headache of changing payment card accounts and updating linked services. (Your intrepid blogger was mysteriously locked out of Uber for a harrowing 2 months before realizing that my linked bank account had changed thanks to the Home Depot hack and I had no way to log back in without a tedious customer service call. Yes, I’m still miffed.)
The Fappening: 2014 was a pretty good year for creeps, too. Without warning, the prime celebrity booties of popular starlets like Scarlett Johansson, Kim Kardashian, Kate Upton, and Ariana Grande mysteriously flooded the Internet in the September event crudely immortalized as “The Fappening.” Apple quickly jumped to investigate its iCloud system that hosted the victims’ stolen photographs, announcing shortly thereafter that the “celebrity accounts were compromised by a very targeted attack on user names, passwords and security questions” rather than any flaw in its system. The sheer volume of material and the caliber of the icons violated suggest this was not the work of a lone wolf, but a chain reaction of leaks collected over time and triggered by one larger dump. For what it’s worth, some dude on 4chan claimed the Fappening was the product of an “underground celeb n00d-trading ring that’s existed for years.” While the event prompted a flurry of discussion about online misogyny, content host ethics, and legalistic tugs-of-war over DMCA takedown requests, it unfortunately did not generate a productive conversation about good privacy and security practices, as I had initially hoped.
The Snappening: The celebrity-targeted Fappening was followed by the layperson’s “Snappening” in October, when almost 100,000 photos and 10,000 personal videos sent through the popular Snapchat messaging service, some of them including depictions of underage nudity, were leaked online. The hackers did not target Snapchat itself, but instead exploited a third-party client called SnapSave that allowed users to save images and videos that would normally disappear after a certain amount of time on the Snapchat app. (Although Snapchat doesn’t exactly have the best security record anyway: In 2013, contact information for 4.6 million of its users was leaked online before the service landed in hot water with the FTC earlier this year for “deceiving” users about its privacy practices.) The hackers received access to a 13GB library of old Snapchat messages and dumped the images in a searchable online directory. As with the Fappening, discussion surrounding the Snappening tended to prioritize scolding service providers over promoting good personal privacy and security practices to consumers.
Las Vegas Sands Corp.: Not all of this year’s most infamous hacks sought sordid photos or privateering profit. 2014 also saw the rise of the revenge hack. In February, Iranian hackers infiltrated politically-active billionaire Sheldon Adelson’s Sands Casino not for profit or data, but for pure punishment. Adelson, a staunchly pro-Israel figure and partial owner of many Israeli media companies, drew intense Iranian ire after fantasizing about detonating an American nuclear warhead in the Iranian desert as a threat during his speech at Yeshiva University. Hackers released crippling malware into the Sands IT infrastructure early in the year, which proceeded to shut down email services, wipe hard drives clean, and destroy thousands of company computers, laptops, and expensive servers. The Sands website was also hacked to display “a photograph of Adelson chumming around with [Israeli Prime Minister] Netanyahu,” along with the message “Encouraging the use of Weapons of Mass Destruction, UNDER ANY CONDITION, is a Crime,” and a data dump of Sands employees’ names, titles, email addresses, and Social Security numbers. Interestingly, Sands was able to contain the damage internally so that guests and gamblers had no idea of the chaos that was ravaging casino IT infrastructure. Public knowledge of the hack did not surface until early December, around the time of the Sony hack. It is possible that other large corporations have suffered similar cyberattacks this year in silence.
JP Morgan: You might think that one of the world’s largest banks would have security systems that are near impossible to crack. This was not the case at JP Morgan. From June to August, hackers infiltrated JP Morgan’s sophisticated security system and siphoned off massive amounts of sensitive financial data. The New York Times reports that “the hackers appeared to have obtained a list of the applications and programs that run on JPMorgan’s computers — a road map of sorts — which they could crosscheck with known vulnerabilities in each program and web application, in search of an entry point back into the bank’s systems, according to several people with knowledge of the results of the bank’s forensics investigation, all of whom spoke on the condition of anonymity.” Some security experts suspect that a nation-state was ultimately behind the infiltration due to the sophistication of the attack and the fact that the hackers neglected to immediately sell or exploit the data or attempt to steal funds from consumer accounts. The JP Morgan hack set off alarm bells among influential financial and governmental circles since banking systems were largely considered to be safe and impervious to these kinds of attacks.
Sony: What a tangled web this was! On November 24, Sony employees were greeted by the mocking grin of a spooky screen skeleton and informed that they had been “Hacked by the #GOP” and that there was more to come. It was soon revealed that Sony’s email and computer systems had been infiltrated and shut down while some 100 terabytes of data had been stolen. The hackers proceeded to leak embarrassing company information, including emails in which executives made racial jokes, compensation data revealing a considerable gender wage disparity, and unreleased studio films like Annie and Mr. Turner. We also learned about “Project Goliath,” a conspiracy among the MPAA, Sony, and five other studios (Universal, Fox, Paramount, Warner Bros., and Disney) to revive the spirit of SOPA and attack piracy on the web “by working with state attorneys general and major ISPs like Comcast to expand court power over the way data is served.” (Goliath was their not-exactly-subtle codeword for Google.) Somewhere along the way, a few folks got wild notions that North Korea was behind this attack because of the nation’s outrage at the latest Rogen romp, The Interview. Most cybersecurity experts doubt that the hermit nation was behind the attack, although the official KCNA statement enthusiastically “supports the righteous deed.” The absurdity of the official narrative did not prevent most of our world-class journalistic and political establishment from running with the story and beating the drums of cyberwar. Even the White House and FBI goofed. The FBI and State Department still maintain North Korean culpability, even as research compiled by independent security analysts points more and more to a collection of disgruntled former Sony employees and independent lulz-seekers. Troublingly, the Obama administration publicly entertained cyberwar countermeasures against the troubled communist nation on such slim evidence. A few days later, the Internet in North Korea was mysteriously shut down. I wonder what might have caused that? Truly a mess all around.
LizardSquad: Speaking of Sony hacks, the spirit of LulzSec is alive in LizardSquad. On Christmas day, the black hat collective knocked out Sony’s PlayStation network and Microsoft’s Xbox servers with a massive distributed denial of service (DDoS) attack, to the great vengeance and furious anger of gamers avoiding family gatherings across the country. These guys are not your average script-kiddies. NexusGuard chief scientist Terrence Gareu warns the unholy lizards boast an arsenal that far exceeds that of normal DDoS attacks. This seems right, given the apparent difficulty that giants Sony and Microsoft had in responding to the attacks. For their part, LizardSquad claims the strength of their attack exceeded the previous record against Cloudflare this February. Megaupload Internet lord Kim Dotcom swooped in to save gamers’ Christmas festivities with a little bit of information age, uh, “justice.” The attacks were allegedly called off after Dotcom offered the hacking collective 3,000 Mega vouchers (normally worth $99 each) for his content hosting empire if they agreed to cease. The FBI is investigating the lizards for the attacks. LizardSquad then turned their attention to the Tor network, creating thousands of new relays and comprising a worrying portion of the network’s roughly 8,000 relays in an effort to unmask users. Perhaps they mean to publicize the network’s vulnerabilities? The group’s official Twitter bio reads, “I cry when Tor deserves to die.” Could this be related to the recent Pando-Tor drama that reinvigorated skepticism of Tor? As with any online brouhaha involving clashing groups of privacy-obsessed computer whizzes with strong opinions, this incident has many hard-to-read layers (sorry!). While the Tor campaign is still developing, LizardSquad has been keeping busy with its newly launched Lizard Stresser, a DDoS-for-hire tool that anyone can use for a small fee. These lizards appear very intent on making life as difficult as possible for the powerful parties they’ve identified as enemies, and their antics will provide some nice justifications for why governments need more power to crack down on cybercrime.
What a year! I wonder what the next one will bring.
One sure bet for 2015 is increasing calls for enhanced regulatory powers. Earlier this year, Eli and I wrote a Mercatus Research paper explaining why top-down solutions to cybersecurity problems can backfire and make us less secure. We specifically analyzed President Obama’s developing Cybersecurity Framework, but the issues we discuss apply to other rigid regulatory solutions as well. On December 11, in the midst of North Korea’s red herring debut in the Sony debacle, the Senate passed the Cybersecurity Act of 2014, which contains many of the same principles outlined in the Framework. The Act, which still needs House approval, strengthens the Department of Homeland Security’s role in controlling cybersecurity policy by directing DHS to create industry cybersecurity standards and begin routine information-sharing with private entities.
Tom Coburn, Ranking Member of the Senate Homeland Security Committee, had this to say: “Every day, adversaries are working to penetrate our networks and steal the American people’s information at a great cost to our nation. One of the best ways that we can defend against cyber attacks is to encourage the government and private sector to work together and share information about the threats we face.”
While the problems of poor cybersecurity and increasing digital attacks are undeniable, the solutions proposed by politicians like Coburn are dubious. The federal government should probably try to get its own house in order before it undertakes to save the cyberproperties of the nation. The Government Accountability Office reports that the federal government suffered from almost 61,000 cyber attacks and data breaches last year. The DHS itself was hacked in 2012, while a 2013 GAO report criticized DHS for poor security practices, finding that “systems are being operated without authority to operate; plans of action and milestones are not being created for all known information security weaknesses or mitigated in a timely manner; and baseline security configuration settings are not being implemented for all systems.” GAO also reports that when federal agencies develop cybersecurity practices like those encouraged in the Cybersecurity Framework or the Cybersecurity Act of 2014, those practices are inconsistently and insufficiently implemented.
Given the federal government’s poor track record managing its own system security, we shouldn’t expect miracles when they take a leadership role for the nation.
Another trend to watch will be the development of a more robust cybersecurity insurance market. The Wall Street Journal reports that 2014’s rash of hacking attacks stimulated sales of formerly obscure cyberinsurance policies.
The industry had struggled in the past because of its novelty and the lack of historical data with which to accurately price policies. This year, demand has grown and actuaries have become familiar enough with the relevant risks that the practice has finally gone mainstream. Policies can cover “the costs of [data breach] investigations, customer notifications and credit-monitoring services, as well as legal expenses and damages from consumer lawsuits” and “reimbursement for loss of income and extra expenses resulting from suspension of computer systems, and provide payments to cover recreation of databases, software and other assets that were corrupted or destroyed by a computer attack.” As the market matures, cybersecurity insurers may start more actively assessing firms’ digital vulnerabilities and recommending improvements to their systems in exchange for lower premiums, as is common in other insurance markets.
Still, nothing ever beats good old-fashioned personal responsibility. One of the easiest ways to ensure privacy and security for yourself online is to take the time to learn how to best protect yourself or your business by developing good habits, using the right services, and remaining conscientious about your digital activities. That’s my New Year’s resolution. I think it should be yours, too!
Happy New Year, all!
October 29, 2014
How Much Tax?
As I and others have recently noted, if the Federal Communications Commission reclassifies broadband Internet access as a “telecommunications” service, broadband would automatically become subject to the federal Universal Service tax—currently 16.1%, or more than twice the highest state sales tax (California–7.5%), according to the Tax Foundation.
Erik Telford, writing in The Detroit News, has reached a similar conclusion.
U.S. wireline broadband revenue rose to $43 billion in 2012 from $41 billion in 2011, according to one estimate. “Total U.S. mobile data revenue hit $90 billion in 2013 and is expected to rise above $100 billion this year,” according to another estimate. Assuming that the wireline and wireless broadband industries as a whole earn approximately $150 billion this year, the current 16.1% Universal Service Contribution Factor would generate over $24 billion in new revenue for government programs administered by the FCC if broadband were defined as a telecommunications service.
The Census Bureau reports that there were approximately 90 million households with Internet use at home in 2012. Wireline broadband providers would have to collect approximately $89 from each one of those households in order to satisfy a 16.1% tax liability on earnings of $50 billion. There were over 117 million smartphone users over the age of 15 in 2011, according to the Census Bureau. Smartphones would account for the bulk of mobile data revenue. Mobile broadband providers would have to collect approximately $137 from each of those smartphone users to shoulder a tax liability of 16.1% on earnings of $100 billion.
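For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The $50 billion wireline and $100 billion mobile figures are the rounded assumptions used above, not precise industry data.

```python
# Back-of-the-envelope check of the Universal Service tax figures above.
# All revenue figures are the rounded assumptions from the text, not precise data.

USF_FACTOR = 0.161        # current Universal Service Contribution Factor (16.1%)

wireline_revenue = 50e9   # assumed annual wireline broadband revenue
mobile_revenue = 100e9    # assumed annual mobile data revenue
households = 90e6         # households with Internet use at home (Census, 2012)
smartphone_users = 117e6  # smartphone users over age 15 (Census, 2011)

total_tax = USF_FACTOR * (wireline_revenue + mobile_revenue)
per_household = USF_FACTOR * wireline_revenue / households
per_smartphone = USF_FACTOR * mobile_revenue / smartphone_users

print(f"Total new USF revenue:  ${total_tax / 1e9:.1f} billion")  # ~$24.2 billion
print(f"Per wireline household: ${per_household:.0f}")            # ~$89
print(f"Per smartphone user:    ${per_smartphone:.0f}")           # ~$138 (rounded to $137 above)
```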
The FCC adjusts the Universal Service Contribution Factor quarterly with the goal of generating approximately $8 billion annually to subsidize broadband for some users. One could argue that if the tax base increases by $150 billion, the FCC could afford to drastically reduce the Universal Service Contribution Factor. However, nothing would prevent the FCC from raising the contribution factor back up into the double digits again in the future. The federal income tax started out at 2%.
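To get a sense of how much room the FCC would have to lower the rate, here is a rough sketch. The roughly $50 billion existing contribution base is my own inference from the 16.1% factor and the approximately $8 billion target; it is not a figure cited in this post.

```python
# Rough estimate of how far the contribution factor could fall if broadband
# revenues were added to the base. The current base is inferred, not an
# official FCC figure.

target_revenue = 8e9      # approximate annual universal service funding target
current_factor = 0.161    # current contribution factor (16.1%)

implied_current_base = target_revenue / current_factor  # roughly $50 billion
new_base = implied_current_base + 150e9                 # add assumed broadband revenues
new_factor = target_revenue / new_base

print(f"Implied current base: ${implied_current_base / 1e9:.0f} billion")  # ~$50 billion
print(f"Factor on the larger base: {new_factor:.1%}")                      # ~4.0%
```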
The FCC is faced with the problem of declining international and interstate telecommunications revenues upon which to impose the tax—since people are communicating in more ways besides making long-distance phone calls—and skeptics might question whether the FCC could resist the temptation to make vast new investments in the “public interest.” For example, at this very moment the FCC is proposing to update the broadband speed required for universal service support to 10 Mbps.
What Role Will the States Play?
Another interesting question is how the states will react to this. There is a long history of state public utility commissions and taxing authorities acting to maximize the scope of state regulation and taxes. Remember that telecommunications carriers file tax returns in every state and local jurisdiction—numbering in the thousands.
In Smith v. Illinois (1930), the United States Supreme Court ruled that there has to be an apportionment of telecommunication service expenses and revenue between interstate and intrastate jurisdictions. The Communications Act of 1934 is scrupulously faithful to Smith v. Illinois.
In 2003, Minnesota tried to regulate voice over Internet Protocol (VoIP) services the same way it regulates “telephone services.” The FCC declined to rule on whether VoIP was a telecommunications service or an information service; it nevertheless preempted state regulation, concluding that it is “impossible or impractical to separate the intrastate components of VoIP service from its interstate components.” The FCC emphasized
the significant costs and operational complexities associated with modifying or procuring systems to track, record and process geographic location information as a necessary aspect of the service would substantially reduce the benefits of using the Internet to provide the service, and potentially inhibit its deployment and continued availability to consumers.
The U.S. Court of Appeals for the Eighth Circuit agreed with the FCC in 2007. Unfortunately, this precedent did not act as a brake on the FCC.
In 2006—while the Minnesota case was still working its way through the courts—the FCC was concerned that the federal Universal Service Fund was “under significant strain”; the commission therefore did not hesitate to establish universal service contribution obligations for providers of fixed interconnected VoIP services. The FCC had no difficulty resolving the problem of distinguishing between intrastate and interstate components: It simply took the telephone traffic percentages reported by long-distance companies (64.9% interstate versus 35.1% intrastate) and applied them to interconnected VoIP services. Vonage Holdings Corp., the litigant in the Minnesota case (as well as in the subsequent Nebraska case, discussed below), did not offer fixed interconnected VoIP service, so it was unaffected.
Before long, Nebraska tried to require “nomadic” interconnected VoIP service providers (including Vonage) to collect a state universal service tax on the intrastate portion (35.1%) of their revenues. Following the Minnesota precedent, the Eighth Circuit rejected the Nebraska universal service tax.
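To illustrate how that safe-harbor apportionment plays out, here is a small sketch using a hypothetical provider with $10 million in interconnected VoIP revenue. The 16.1% federal factor is the current one cited above, not the factor that applied in 2006, and the dollar amounts are purely illustrative.

```python
# Illustration of the FCC's interstate/intrastate safe-harbor split for
# interconnected VoIP revenue. The $10 million figure is hypothetical.

INTERSTATE_SHARE = 0.649   # safe-harbor share treated as interstate
INTRASTATE_SHARE = 0.351   # remainder treated as intrastate
USF_FACTOR = 0.161         # current federal contribution factor (see above)

voip_revenue = 10e6        # hypothetical annual interconnected VoIP revenue

interstate_revenue = INTERSTATE_SHARE * voip_revenue  # subject to federal USF contributions
intrastate_revenue = INTRASTATE_SHARE * voip_revenue  # the portion Nebraska tried to tax

federal_contribution = USF_FACTOR * interstate_revenue

print(f"Interstate portion: ${interstate_revenue:,.0f}")      # $6,490,000
print(f"Intrastate portion: ${intrastate_revenue:,.0f}")      # $3,510,000
print(f"Federal USF owed:   ${federal_contribution:,.0f}")    # ~$1,044,890
```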
Throughout these legal and regulatory proceedings, the distinction between “fixed” and “nomadic” VoIP services was observed. According to the Nebraska court,
Nomadic service allows a customer to use the service by connecting to the Internet wherever a broadband connection is available, making the geographic originating point difficult or impossible to determine. Fixed VoIP service, however, originates from a fixed geographic location. * * * As a result, the geographic originating point of the communications can be determined and the interstate and intrastate portions of the service are more easily distinguished.
Nebraska argued that it wasn’t impossible at all to determine the geographic origin of nomadic VoIP service—the state simply used the customer’s billing address as a proxy for where nomadic services occurred. If Nebraska had found itself in a more sympathetic tribunal, it might have won.
The bottom line is that the FCC has been successful so far in imposing limits on state taxing authority—at least within the Eighth Circuit (Arkansas, Iowa, Minnesota, Missouri, Nebraska, North Dakota and South Dakota)—but there are no limits on the FCC.
Conclusion
Reclassifying broadband Internet access as a telecommunications service will have significant tax implications. Broadband providers will have to collect from consumers and remit to government approximately $24 billion—equivalent to approximately $89 per household for wireline Internet access and approximately $137 per smartphone. The FCC could reduce these taxes, but it will be under enormous political pressure to collect and spend the money. States can be expected to seek a share of these revenues, resulting in litigation that will create uncertainty for consumers and investors.
