Adam Thierer's Blog, page 125

June 22, 2011

Russ Roberts on 'Why Technology Doesn't Destroy Jobs'

You wouldn't think that policymakers need to be reminded that technological progress raises living standards and creates new (and better) employment opportunities. Alas, some comments President Obama made in a speech last week seemed to link technology to job losses. "There are some structural issues with our economy where a lot of businesses have learned to become much more efficient with a lot fewer workers," he said. "You see it when you go to a bank and you use an ATM, you don't go to a bank teller, or you go to the airport and you're using a kiosk instead of checking in at the gate."



In an essay in today's Wall Street Journal, one of my Mercatus Center colleagues, Russ Roberts, a professor of economics at George Mason University, brilliantly deconstructs this logic and points out why technology doesn't destroy jobs:



Somehow, new jobs get created to replace the old ones. Despite losing millions of jobs to technology and to trade, even in a recession we have more total jobs than we did when the steel and auto and telephone and food industries had a lot more workers and a lot fewer machines.

Why do new jobs get created? When it gets cheaper to make food and clothing, there are more resources and people available to create new products that didn't exist before. Fifty years ago, the computer industry was tiny. It was able to expand because we no longer had to have so many workers connecting telephone calls. So many job descriptions exist today that didn't even exist 15 or 20 years ago. That's only possible when technology makes workers more productive.


Read the whole thing. Great stuff.





Hang Up on the Talking Tax

My latest Forbes column notes how "Taxes On Talking Are On the Rise Across the U.S." with levies on mobile phones and devices skyrocketing.  I build my argument around data and arguments found in Dan Rothschild's excellent recent Mercatus Center paper, which makes "The Case Against Taxing Cell Phone Subscribers," as well as an important recent study by Scott Mackey, an economist and partner at KSE Partners LLP, which documents the growing burden of these wireless taxes and fees.



"Wireless users now face a combined federal, state, and local tax and fee burden of 16.3%, a rate two times higher than the average retail sales tax rate and the highest wireless rate since 2005," Mackey finds. Mobile tax rates range from a high of 23.7% in Nebraska to a low of 6.9% in Oregon.  48 states have an average combined wireless tax rate above 11%.  These burdensome taxes on talking just don't make any sense, argues Rothschild. "There is no economic justification for these high tax rates: reducing cell phone ownership is not a public policy goal, cell phone use by one customer does not affect other customers or other people, and these taxes fall disproportionately on lower-income households."



You can read my entire essay here, but also make sure to re-read Dan Rothschild's guest post here at the TLF on the issue. It's much better than my own treatment.  For me, the key point is this: If the primary policy goal in this arena is to build out a first-class communications and data infrastructure and make sure all Americans have access to it, discriminatory taxes on wireless services and networks are highly counter-productive. Policymakers should hang up on the Talking Tax.





June 21, 2011

Neelie Kroes & Privacy By Design vs. Privacy by Default

The European Commission has a new report out today on "Implementation of the Safer Social Networking Principles for the EU." It's a status report on the implementation of "Safer Social Networking Principles for the EU", a "self-regulatory" agreement the EC brokered with 17 social networking sites and other online operators back in 2009. (Co-regulatory would be more accurate here, since the EC is steering, and industry is simply rowing.) The goal was to make the profiles of minors more private and provide other safeguards.



Generally speaking, the EC's evaluation suggests that great progress has been made, although there's always room for improvement. For example, the report found that "13 out of the 14 sites tested provide safety information, guidance and/or educational materials specifically targeted at minors"; that "Safety information for minors is quite clear and age-appropriate on all sites that provide it," good progress since the first assessment last year; that "Reporting mechanisms are more effective now than in 2010"; that most sites have improved Terms of Use that are easy for minors to understand and/or a child-friendly version of the Terms of Use or Code of Conduct; and that many "provide safety information for children and parents which is both easy to find and to understand." Again, there's always room for improvement, but the general direction is encouraging, especially considering how new many of these sites are.



Unfortunately, Neelie Kroes, Vice President of the European Commission for the Digital Agenda, spun the report in the opposite direction. She issued a statement saying:



I am disappointed that most social networking sites are failing to ensure that minors' profiles are accessible only to their approved contacts by default. I will be urging them to make a clear commitment to remedy this in a revised version of the self-regulatory framework we are currently discussing. This is not only to protect minors from unwanted contacts but also to protect their online reputation. Youngsters do not fully understand the consequences of disclosing too much of their personal lives online. Education and parental guidance are necessary, but we need to back these up with protection until youngsters can make decisions based on full awareness of the consequences.


This position is misguided, as explained below. But here's the crucial point: What this Kroes statement once again proves is that, ultimately, every major public policy debate about online privacy and child safety comes down to a question of where to set the defaults and who should set them. Rarely, however, do policymakers or regulatory advocates acknowledge the downsides associated with mandating highly restrictive defaults from the top down.



Back in 2008, I penned a paper on "The Perils of Mandatory Parental Controls and Restrictive Defaults" in which I argued that, "Government regulation mandating restrictive parental control defaults for media devices would likely have unintended consequences and would not achieve the goal of better protecting children from objectionable content, whereas increased consumer education efforts would be more effective in helping parents control their child's media consumption." The general point was that if government defaulted all sites and/or devices to be in a "locked-down" state right out of the gates, products and services would, in essence, be shipped to market in a crippled state. This would have a variety of unintended consequences, including consumer confusion, and such restrictions would discourage experimentation and limit the utility consumers could derive from those products and services.



The same is true of highly restrictive privacy defaults. How are you supposed to network with others and make new friends if everything is private by default? Worst of all, the EC seems to want websites to make it practically impossible for minors even to search for each other. Yet search is increasingly how users of all ages connect with their real-world acquaintances, for whom they may have no other contact information. Isn't the point of social networking to be social and share more? If a child or a parent doesn't like that openness, why isn't it sufficient that they be empowered to change that setting on their own? Why must the law mandate it by default and tell them what is supposedly best for them?
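To see how narrow the technical difference really is, consider this hypothetical sketch (all names here are my own illustration, not any real site's API). The capability to restrict a profile exists under either regime; the entire policy fight is over the initial value of a setting:

```python
from dataclasses import dataclass

@dataclass
class ProfileSettings:
    # An "open by default" regime: discoverable until the user says otherwise.
    visibility: str = "everyone"
    searchable: bool = True

    def restrict(self) -> None:
        """The user-empowerment option: anyone can opt into privacy."""
        self.visibility = "approved_contacts"
        self.searchable = False

def ec_default_for_minor() -> ProfileSettings:
    # The mandated-default regime: a minor's account ships pre-restricted,
    # before the user or a parent has made any choice at all.
    return ProfileSettings(visibility="approved_contacts", searchable=False)

teen_open = ProfileSettings()            # can still call .restrict() at any time
teen_mandated = ec_default_for_minor()   # locked down, and unsearchable, from the start
```

Either way the restrictive setting is one call away; the question is simply who flips the switch, the family or the regulator.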



Nicklas Lundblad & Betsy Masiello made a similar point in their important recent essay on "Opt-In Dystopias." They noted that more formal opt-in consent models may involve many trade-offs and downsides that need to be considered relative to opt-out models, which are currently more prevalent online. "The decisions a user makes under an opt-in model are less informed" they argue, because "the initial decision to opt-in to a service is made without any knowledge of what value that service provides," and, therefore, "under an opt-in regime a decision can probably never be wholly informed." They continue: "If instead of thinking about privacy decisions as requiring ex-ante consent, we thought about systems that structured an ongoing contractual negotiation between the user and service provider, we might mitigate some of these harmful effects."



The crucial point here is that choice should lie with the consumer and not be set from above. Companies should empower the consumer — including kids — with more and better tools and then let them decide what their privacy settings should be. Government need not "nudge" consumers or companies in paternalistic ways based upon the values of unelected bureaucrats. Most importantly, policymakers should not conflate "privacy by design" with privacy by default. Let experimentation continue and let consumers make these determinations for themselves.





Ronald Rychlak on online gambling laws


On the podcast this week, Ronald Rychlak, Mississippi Defense Lawyers Association Professor of Law and Associate Dean at the University of Mississippi School of Law, discusses his new article in the Mississippi Law Journal entitled "The Legal Answer to Cyber-Gambling." Rychlak briefly comments on the history of gambling in the United States and the reasons usually given to prohibit or regulate gambling activity. He then talks about why it's so difficult to regulate internet gambling and gives examples of how regulators have tried to enforce online gambling laws, which often involves deputizing middlemen — financial institutions. Rychlak also discusses his legal proposal: create an official framework to endorse, regulate, and tax online gambling entities.



Related Links


The Legal Answer to Cyber-Gambling, by Rychlak
"Outgoing Miss. gaming chief warns of challenges," Bloomberg
"Barton to Offer Online Poker Bill," National Journal
"Accused strikes plea deal in poker case," by Joseph Menn


To keep the conversation around this episode in one place, we'd like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?





June 20, 2011

EFF Gone Wobbly on Bitcoin

My expectations of the Electronic Frontier Foundation are high. It's an organization that does a tremendous amount of good, advocating for rights to freely use new technologies. Alas, a blog post about how good EFF is would be as interesting as a newspaper story about the lack of house fires in Springfield. So I'll share how I feel EFF has gone wobbly on Bitcoin.



Bitcoin, the very interesting distributed digital currency that is inflation-, surveillance-, and confiscation-resistant, has been getting a lot of attention. EFF announced yesterday, though, that it would reverse course and stop accepting donations denominated in Bitcoin.



Its justifications, laid out in a blessedly brief and well-organized blog post, were three:



1. We don't fully understand the complex legal issues involved with creating a new currency system. Bitcoin raises untested legal concerns related to securities law, the Stamp Payments Act, tax evasion, consumer protection and money laundering, among others. And that's just in the U.S. While EFF is often the defender of people ensnared in legal issues arising from new technologies, we try very hard to keep EFF from becoming the actual subject of those fights or issues. Since there is no caselaw on this topic, and the legal implications are still very unclear, we worry that our acceptance of Bitcoins may move us into the possible subject role.


My insta-reaction was to joke: "Related: ACLU to stop bringing 'right to petition' cases." That's a little ambiguous, so: Imagine that the government took a position in litigation that suing the government was not protected by the First Amendment, but was in fact actionable. Under EFF's logic—avoid becoming the subject of a rights fight—the ACLU would not fight the government on that issue. Luckily, the ACLU would fight the government on that issue—as fiercely or more fiercely than any other!



There are, admittedly, legal ambiguities. Bitcoin is legally novel. But every new technology is legally novel. EFF didn't shy away from publishing commentary online while publisher liability was legally ambiguous.



Accepting a Bitcoin donation is like accepting a donation in kind, in contract rights, or in cat food. If it's worth taking, you go figure out how to accept the donation and square it with existing law. If it's clearly illegal, you don't accept the contribution. (EFF would have said so if they felt it was.) If it's in the middle, a defender of rights to use technology should be inclined toward accepting Bitcoin and clarifying the law, not away from accepting Bitcoin in deference to legal ambiguity and free-ranging government power.



Bitcoin is a currency, and it trades on currency markets, so you would treat it like a donation tendered in non-U.S. currency. If EFF were to start getting contributions in soybean futures, or rights to free oil changes at JiffyLube, I think it would have figured out how to accept those contributions, the absence of caselaw notwithstanding.
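In accounting terms, the analogy is straightforward. Here is a minimal sketch (my own illustration; the amounts and exchange rates are hypothetical): book any non-dollar donation at its dollar value on the date received, whatever form it takes:

```python
def record_donation(amount: float, unit: str, usd_rate: float) -> dict:
    """Book a non-dollar donation at its market dollar value when received."""
    return {"amount": amount, "unit": unit, "usd_value": round(amount * usd_rate, 2)}

# Hypothetical entries; rates are illustrative, not historical quotes.
ledger = [
    record_donation(100.0, "EUR", usd_rate=1.44),   # ordinary foreign currency
    record_donation(2.5, "BTC", usd_rate=17.50),    # treated exactly the same way
    record_donation(4.0, "oil-change vouchers", usd_rate=35.00),  # in-kind gift
]
for entry in ledger:
    print(entry)
```

The Bitcoin line differs from the euro line or the voucher line in no way that matters to a bookkeeper.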



EFF, of course, is not "creating" a new currency system—it's just one user. Its potential liability drops off precipitously because of that, and because EFF would scrupulously ensure that its acceptance of Bitcoin—just like any contribution—does not violate money laundering laws (while such regulation exists).



But if the government argues that any use of Bitcoin is money laundering, well that's worth fighting, isn't it? Because that's a huge claim to power. Bitcoin is a value transfer protocol, and it can be used for anything, good or bad. If you pay your taxes on Bitcoin transactions that would have been lawful if conducted in U.S. dollars, why should the use of this less expensive and faster value-transfer protocol be grounds for punishment?
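For readers new to the protocol, a toy sketch of the mechanics may help (my own illustration, greatly simplified from Bitcoin's actual implementation): new blocks of transactions are accepted only when someone finds a hash below a network-set difficulty target, so no central party controls validation or issuance:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    """Search for a nonce whose SHA-256 digest falls below the difficulty target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof that roughly 2**difficulty_bits hashes were tried
        nonce += 1

print(mine(b"example block"))  # about a million attempts on average at 20 bits
```

That anyone running this kind of computation, anywhere, can participate is precisely what makes the system resistant to surveillance and confiscation—and what makes "any use is money laundering" such a sweeping claim.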



Were this issue to have arisen in the context of a similarly decentralized domain name system, EFF would probably have been there, full of effrontery to government power, both promoting and using such a system.



2. We don't want to mislead our donors. When people make a donation to a nonprofit like EFF, they expect us to use their donation to support our work. Because the legal territory around exchanging Bitcoins into cash is still uncertain, we are not comfortable spending the many Bitcoins we have accumulated. Because of this, we're giving the Bitcoins that have been accumulated, or that may accumulate in the future, in the account set up in our name to the Bitcoin faucet, so that they can continue to circulate in the community.


For the most part, this point just restates the first, retooling it to sound like a service to donors rather than timidity in the face of legal ambiguity. Donors can expect a good-faith effort on EFF's part to use their donations, however denominated, in support of its mission. It doesn't undermine the mission if a donation comes in something other than U.S. dollars.



In fact, refusing donations in Bitcoin seems to detract from EFF's mission because it denies the organization a source of funds. The donors who gave U.S. dollars expecting EFF to defend things like Bitcoin may feel misled by EFF's reluctance to do so.



3. People were misconstruing our acceptance of Bitcoins as an endorsement of Bitcoin. We were concerned that some people may have participated in the Bitcoin project specifically because EFF accepted Bitcoins, and perhaps they therefore believed the investment in Bitcoins was secure and risk-free. While we've been following the Bitcoin movement with a great degree of interest, EFF has never endorsed Bitcoin. In fact, we generally don't endorse any type of product or service – and Bitcoin is no exception.


So put a disclaimer up that says "We don't endorse any type of product or service – and Bitcoin is no exception." That solves the problem of potentially misconstrued inferences from accepting Bitcoin.



To be cheeky, I'll wonder aloud whether EFF's acceptance of U.S. dollars is an endorsement of that currency—with its relentless loss of value to inflation, heavy contribution to surveillance, and amenability to illegal government seizure. Well, of course it isn't. There's no real inference from accepting a currency that one endorses it. Similarly, if you send an email to EFF written in French, and they use the ideas in your email, EFF is not endorsing French.



The point here is not that EFF or any organization must use Bitcoin. There are plenty of reasons to be skeptical of its utility—it might not be convertible to other forms of value easily enough; it might not have enough reliable value; holding it might involve security risks that remain too great. But legal ambiguities around a novel technology are not a sound basis for a digital rights organization to decline using that technology. That's a reason to embrace and protect that novel technology.



I look forward to EFF reversing course once again, invigorated in its fight for digital liberty by fear of my mighty blog wrath.





June 14, 2011

What's really motivating the pursuit of Google?

I have an op-ed up at Main Justice on FTC Chairman Leibowitz's recent comment, in response to a question about the FTC's investigation of Google, that the FTC is looking for a "pure Section Five case."  With Main Justice's permission, the op-ed is re-printed here:



There's been a lot of chatter around Washington about federal antitrust regulators' interest in investigating Google, including stories about an apparent tug of war between agencies. But this interest may be motivated more by a desire to expand the agencies' authority than by any legitimate concern about Google's behavior.



Last month in an interview with Global Competition Review, FTC Chairman Jon Leibowitz was asked whether the agency was "investigating the online search market" and he made this startling revelation:



"What I can say is that one of the commission's priorities is to find a pure Section Five case under unfair methods of competition. Everyone acknowledges that Congress gave us much more jurisdiction than just antitrust. And I go back to this because at some point if and when, say, a large technology company acknowledges an investigation by the FTC, we can use both our unfair or deceptive acts or practice authority and our unfair methods of competition authority to investigate the same or similar unfair competitive behavior . . . . "



"Section Five" refers to Section Five of the Federal Trade Commission Act. Exercising its antitrust authority, the FTC can directly enforce the Clayton Act but can enforce the Sherman Act only via the FTC Act, challenging as "unfair methods of competition" conduct that would otherwise violate the Sherman Act. Following Sherman Act jurisprudence, traditionally the FTC has interpreted Section Five to require demonstrable consumer harm to apply.



But more recently the commission—and especially Commissioners Rosch and Leibowitz—has been pursuing an interpretation of Section Five that would give the agency unprecedented and largely unchecked authority. In particular, the definition of "unfair" competition wouldn't be confined to the traditional measures—reduction in output or increase in price—but could expand to, well, just about whatever the agency deems improper.



Commissioner Rosch has claimed that Section Five could address conduct that has the effect of "reducing consumer choice"—an effect that a very few commentators support without requiring any evidence that the conduct actually reduces consumer welfare. Troublingly, "reducing consumer choice" seems to be a euphemism for "harm to competitors, not competition," where the reduction in choice is the reduction of choice of competitors who may be put out of business by competitive behavior.



The U.S. has a long tradition of resisting enforcement based on harm to competitors without requiring a commensurate, strong showing of harm to consumers—an economically sensible tradition aimed squarely at minimizing the likelihood of erroneous enforcement. The FTC's invigorated interest in Section Five contemplates just such wrong-headed enforcement, however, to the inevitable detriment of the very consumers the agency is tasked with protecting.



In fact, the theoretical case against Google depends entirely on the ways it may have harmed certain competitors rather than on any evidence of actual harm to consumers (and in the face of ample evidence of significant consumer benefits).



Google has faced these claims at a number of levels. Many of the complaints against Google originate from Microsoft (Bing), Google's largest competitor. Other sites have argued that Google impairs the placement in its search results of certain competing websites, thereby reducing those sites' ability to easily reach Google's users and advertise their competing products. Still other sites, which offer content like maps and videos, complain that Google's integration of these products into its search results has impaired their attractiveness to users.



In each of these cases, the problem is that the claimed harm to competitors does not demonstrably translate into harm to consumers.



For example, Google's integration of maps into its search results unquestionably offers users an extremely helpful presentation of these results, particularly for users of mobile phones. That this integration might be harmful to MapQuest's bottom line is not surprising—but nor is it a cause for concern if the harm flows from a strong consumer preference for Google's improved, innovative product. The same is true of the other claims; harm to competitors is at least as consistent with pro-competitive as with anti-competitive conduct, and simply counting the number of firms offering competing choices to consumers is no way to infer actual consumer harm.



In the absence of evidence of Google's harm to consumers, then, Leibowitz appears more interested in using Google as a tool in his and Rosch's efforts to expand the FTC's footprint. Advancing the commission's "priority" to "find a pure Section Five case" seems to be more important than the question of whether Google is actually doing anything harmful.



When economic sense takes a back seat to political aggrandizement, we should worry about the effect on markets, innovation and the overall health of the economy.



 





Steven Levy on how Google works


On the podcast this week, Steven Levy, a columnist for Wired and author of the tech classic Hackers, among many other books, discusses his latest book, In The Plex: How Google Thinks, Works, and Shapes Our Lives. Levy talks about "Googliness," the attribute of silliness and dedication embodied by Google employees, and whether it's diminishing. He describes Google's privacy council, which deliberates on and manages the company's privacy issues, and the evolution of how the company has dealt with issues like scanning Gmail users' emails, scanning books for the Google Books project, and deciding whether to incorporate facial recognition technology in Google Goggles. Levy also talks about prospects for a Google antitrust suit and the future of Google's relationship with China.



Related Links


In The Plex: How Google Thinks, Works, and Shapes Our Lives
"The Problem With Success: With a market capitalization of $184 billion, can Google maintain its reputation as a brash iconoclast?" Wall Street Journal
"Life 'In The Plex': The Future Of Google," NPR


To keep the conversation around this episode in one place, we'd like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?





June 13, 2011

Visualizing Information Abundance: Every 60 Seconds on the Web

My colleague Cord Blomquist brought to my attention this amazing infographic, which depicts the stunning volume of activity unfolding every 60 seconds online. It appears the graphic was created by Go-Globe.com, a web design firm, although I've not been able to find the original. [Cord found it in this collection of "35 Cool Infographics for Web and Graphic Designers."] I find this graphic especially interesting because it helps bolster the work I've been doing lately with Jerry Brito about the challenges faced by information control regimes. [See our recent essays on the topic: 1, 2, 3, 4, 5.] Most recently, I put together a list of "Some Metrics Regarding the Volume of Online Activity," but I'd been searching for a really excellent visualization to help tell this story, and this is probably the best one I've ever seen. Absolutely amazing numbers.







"Adventure Windows" Revisited: Why We Struggle with New Trends & Technologies

I enjoyed this Wall Street Journal essay by Daniel H. Wilson on "The Terrifying Truth About New Technology."  It touches on many of the themes I've discussed here in my essays on techno-panics, fears about information overload, and the broader battle throughout history between technology optimists and pessimists about the impact of new technologies on culture, life, and learning. Wilson correctly notes that:



The fear of the never-ending onslaught of gizmos and gadgets is nothing new. The radio, the telephone, Facebook — each of these inventions changed the world. Each of them scared the heck out of an older generation. And each of them was invented by people who were in their 20s.


He continues:



Young people adapt quickly to the most absurd things. Consider the social network Foursquare, in which people not only willingly broadcast their location to the world but earn goofy virtual badges for doing so. My first impulse was to ignore Foursquare—for the rest of my life, if I have to.



And that's the problem. As we get older, the process of adaptation slows way down. Unfortunately, we depend on alternating waves of assimilation and accommodation to adapt to a constantly changing world. For [developmental psychologist Jean] Piaget, this balance between what's in the mind and what's in the environment is called equilibrium. It's pretty obvious when equilibrium breaks down. For example, my grandmother has phone numbers taped to her cellphone. Having grown up with the Rolodex (a collection of numbers stored next to the phone), she doesn't quite grasp the concept of putting the numbers in the phone.



Why are we so nostalgic about the technology we grew up with? Old people say things like: "This new technology is stupid. I liked (new, digital) technology X better when it was called (old, analog) technology Y. Why, back in my day…." Which leads inexorably to, "I just don't get it."



There's a simple explanation for this phenomenon: "adventure window." At a certain age, that which is familiar and feels safe becomes more important to you than that which is new, different, and exciting. Think of it as "set-in-your-ways syndrome."



I first heard the term "adventure window" on an NPR program back in 2006 during a wonderful Robert Krulwich spot entitled "Does Age Quash Our Spirit of Adventure?" Krulwich's piece featured a neuroscientist who had been studying why it is that humans (indeed, all mammals) have an innate tendency to lose their willingness to try new things after a certain point in their lives. He called this our "adventure window." The neuroscientist came to study this phenomenon after growing increasingly annoyed with his young male research assistant, who would come to work every day of the week listening to something new and quite different than the day before. Meanwhile, the much older neuroscience professor lamented the fact that he had been listening to the same Bob Marley tape seemingly forever.



Simply stated, our willingness to try new things and experiment with new forms of culture — our "adventure window" — fades rapidly after certain key points in life, as we gradually get set in our ways. For the professor and many of the rest of us, our adventure window comes slamming shut sometime in our mid-30s.



This is doubly interesting to me because it provides another explanation for why one generation protests an older generation's censorial ways only to themselves become advocates of repressing the next generation's culture and technology when they grow older.  Many cultural critics and average folk alike always seem to think the best days are behind us and the current good-for-nothing generation and their new-fangled gadgets and culture are garbage. This is the reason I opened my old report on "Parental Controls & Online Child Protection" in the following way:



What effect does media exposure have on our children? That question has generated heated debates from one generation to the next. From the waltz to rock and roll to rap music, from movies to comic books to video games, from radio and television to the Internet and social networking websites — every new media format or technology spawns a fresh debate about the potential negative effects it might have on kids. Parents, educators, academics, social scientists, media pundits, and many others all offer their opinions, but rarely is any consensus reached.


"These concerns stretch back to the birth of literacy itself," notes Vaughan Bell in his excellent Slate essay from February 2010 entitled, "Don't Touch That Dial! A History of Media Technology Scares, from the Printing Press to Facebook."  Bell observed:



Worries about information overload are as old as information itself, with each generation reimagining the dangerous impacts of technology on mind and brain. From a historical perspective, what strikes home is not the evolution of these social concerns, but their similarity from one century to the next, to the point where they arrive anew with little having changed except the label.


Indeed, as I point out in my old "Net optimists vs. pessimists" essay and subsequent book chapter, you can actually trace this debate all the way back to the well-known allegorical tale from Plato's Phaedrus about the dangers of the written word. The debate between King Thamus and the god Theuth has been the template for every debate about culture and technology that has followed. Read it for yourself and see. Basically, King Thamus' adventure window had slammed shut, and the spoken tradition of learning was where he wanted progress to stop. Theuth stressed the benefits of a new technology — writing — for memory and learning.



And so the debate continues. It will never end.



 



 





June 12, 2011

Spectrum Reform Now!

Last week the Senate Commerce Committee passed–with deep bipartisan support–the Public Safety Spectrum and Wireless Innovation Act.



The bill, co-sponsored by Committee Chairman Jay Rockefeller and Ranking Member Kay Bailey Hutchison, is a comprehensive effort to resolve several long-standing stalemates and impending crises having to do with one of the most critical 21st century resources: radio spectrum.



My analysis of the bill appears today on CNET. See "Spectrum reform, public safety network move forward in Senate."



The proposed legislation is impressive in scope; it offers new and in some cases novel solutions to more than half-a-dozen spectrum-related problems, including:




Voluntary incentive auctions – The bill authorizes the FCC to coordinate "voluntary incentive auctions" (VIA) of under-utilized spectrum from over-the-air TV broadcasters to better uses, including mobile broadband. Broadcasters giving up some or all of their licensed spectrum would share the proceeds with the government. The FCC has been asking for this authority for two years.


Public safety network – The bill would break the logjam over the long-desired nationwide interoperable public safety network. It would create a new non-profit public-private partnership to build the network, with an outright grant of the D-block of 700 MHz spectrum. (That block, freed up as part of the 2009 transition to digital TV, has sat idle since a failed auction in 2008.) Financing for the build-out would come from proceeds of the VIAs. The public safety network has been in limbo since it was first proposed soon after 9/11. (The proposed bill is S. 911.)


Spectrum inventory – The FCC would be required to complete a comprehensive inventory of existing licenses (which, amazingly, doesn't exist) within 180 days. President Obama ordered the agency to complete the inventory over a year ago, but so far only a "baseline" inventory has been created.


Secondary markets – The FCC would be required to begin a rulemaking to review current limits to secondary spectrum markets that interfere with liquidity, in the hopes of making them more robust. (VIAs could take years to organize and conduct.)


Public spectrum – The National Telecommunications and Information Administration would be required to identify significant blocks of underutilized federal spectrum allocations and make them available for auction by the FCC.


Spectrum innovation - The National Science Foundation and other grant-making agencies would be required to accelerate research grants for new technologies that would make spectrum use more efficient.


Repacking – While the FCC can't require broadcasters to participate in VIAs, it can force them to move to nearby channels if doing so would free up more valuable blocks of spectrum for auction. A fund would be created to compensate stations for the disruption of switching channels.




The range of issues that S. 911 deals with suggests the breadth of the current spectrum crisis. Here it is in a nutshell. Radio frequencies are a limited public resource. Up until recently, however, there's been more than enough to go around. Following the advice of Nobel Prize-winning economist Ronald H. Coase, the FCC has used auctions to find the best and highest use for this resource, generating significant revenue in the process.



But the digital age has changed the dynamics of spectrum. Mobile uses are exploding, as are mobile devices, mobile applications, mobile users and mobile everything else. Moore's Law is rapidly overtaking FCC law once again. Existing wireless networks are groaning under the strain of volume that has increased 8000% since the launch of the iPhone.
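A quick back-of-the-envelope check of that figure (my arithmetic, not from the bill or the column): an 8000% increase means traffic stands at 81 times its starting level, which over the four years since the iPhone's 2007 launch implies roughly a tripling every year:

```python
increase_pct = 8000
multiple = 1 + increase_pct / 100        # 81x the pre-iPhone baseline
years = 2011 - 2007
annual_growth = multiple ** (1 / years)  # ~3.0x per year, i.e. ~200% annual growth
print(f"{multiple:.0f}x total, about {annual_growth:.1f}x per year")
```

No administrative allocation process reruns itself every year, which is why growth like that swamps the FCC's machinery.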



Last year's National Broadband Plan, for example, predicted that 300 MHz of additional spectrum would need to be found in the next five years to keep mobile broadband on track.



But the government's current processes of finding and allocating more spectrum are simply too slow to keep pace with the current wave of technological innovation. It will get worse as 3G moves to 4G and from there–well, who knows? All we can safely predict is that the "G"s will keep coming, and arrive faster all the time. So radical re-thinking of spectrum management is urgent. We need serious spectrum policy reform, and we need it yesterday.



Part of the solution will come from technology itself, including innovation to make more efficient use of existing allocations, expanding the range of usable spectrum for more uses, capabilities to dynamically share spectrum and rebalance loads, and so on. There are impressive developments in these and other strategies for coping with the potential of spectrum exhaustion, but no one can say with confidence that the solutions will outpace the problems.



The bigger issue underlying spectrum exhaustion is the glacial pace with which current regulatory systems work to rebalance allocations.



Once a license is granted, the licensee can largely rely on keeping it indefinitely. If they operate in a stable or shrinking market (such as over-the-air broadcast, which the Consumer Electronics Association said recently has shrunk to only 8% of U.S. households), there's no incentive to optimize the property, which, for the licensee, is a sunk cost.



Given the limits of secondary markets, there's also little incentive to find more efficient uses of the allocation and free up spectrum that is no longer needed for its licensed purpose. Indeed, even for operators who want to exit the market in part or in whole, use limitations on existing allocations make transfer through secondary markets cumbersome if not impossible.



Even if the FCC unblocks these markets, game theory problems may constrain the effectiveness of either the VIAs or the secondary markets.



Federal users, of course, feel no competitive pressure to optimize their allocations, and fall back on the conversation-ending "national defense" excuse whenever the possibility emerges of giving up some of the frequencies they are warehousing.



And then there are state and local authorities, who also share jurisdiction over communications. Limits on cell tower construction, use, and other technical improvements aren't addressed in the proposed legislation. But they are equally to blame for the crisis mentality.



S. 911 is a good start toward removing some of the institutional barriers that limit our flexibility in rebalancing spectrum needs and spectrum allocations. But it's only a start. If the information revolution is to continue uninterrupted, we need a lot more improvements.



And soon.




