Adam Thierer's Blog

February 19, 2013

Ronald Cass on intellectual property

Ronald A. Cass

Ronald A. Cass, Dean Emeritus of Boston University School of Law, discusses his new book, Laws of Creation: Property Rights in the World of Ideas, which he co-authored with Boston University colleague Keith Hylton. Written as a primer for understanding intellectual property law and a defense of intellectual property, Laws of Creation explains the basis of IP and its justification. 



According to Cass, not all would-be reformers share the same guiding philosophy. He distinguishes between those who support property rights but nevertheless have specific critiques of the intellectual property system as it currently stands, and reformers who see no place for property at all.



Cass explains that the current intellectual property system is neither wholly good nor wholly bad, but is a matter of weighing tradeoffs. On the whole, he argues, intellectual property benefits society. Cass also argues that intellectual property law in the U.S. is still more functional than that in other countries, such as Italy, and that, while it would benefit from some reform, it is fundamentally a workable system.



Download



Related Links


Laws of Creation: Property Rights in the World of Ideas, Cass and Hylton

Property Rights Systems and the Rule of Law, Cass

Tom W. Bell on Laws of Creation, Antitrust & Competition Policy Blog

Book Talk: Laws of Creation: Property Rights in the World of Ideas, The Cato Institute



Published on February 19, 2013 13:54

February 17, 2013

What “Big Bang Disruption” Says About Technology Policy

In the upcoming issue of Harvard Business Review, my colleague Paul Nunes at Accenture’s Institute for High Performance and I are publishing the first of many articles from an on-going research project on what we are calling “Big Bang Disruption.”



The project is looking at the emerging ecosystem for innovation based on disruptive technologies.  It expands on work we have done separately and now together over the last fifteen years.



Our chief finding is that the nature of innovation has changed dramatically, calling into question much of the conventional wisdom on business strategy and competition, especially in information-intensive industries–which is to say, these days, every industry.



The drivers of this new ecosystem are ever-cheaper, faster, and smaller computing devices, cloud-based virtualization, crowdsourced financing, collaborative development and marketing, and the proliferation of mobile everything.  There will soon be more smartphones sold than there are people in the world.  And before long, each of over one trillion items in commerce will be added to the network.



The result is that new innovations now enter the market cheaper, better, and more customizable than the products and services they challenge.  (For example, smartphone-based navigation apps versus standalone GPS devices.)  In the strategy literature, such innovation would be characterized as thoroughly “undisciplined.”  It shouldn’t succeed.  But it does.



So when the disruptor arrives and takes off with a bang, often after a series of low-cost, failed experiments, incumbents have no time for a competitive response.  The old rules for dealing with disruptive technologies, most famously from the work of Harvard’s Clayton Christensen, have become counter-productive.   If incumbents haven’t learned to read the new tea leaves ahead of time, it’s game over.



The HBR article doesn’t go into much depth on the policy implications of this new innovation model, but the book we are now writing will.  The answer should be obvious.



This radical new model for product and service introduction underscores the robustness of market behaviors that quickly and efficiently correct many transient examples of dominance, especially in high-tech markets.



As a general rule (though obviously not one without exceptions), the big bang phenomenon further weakens the case for regulatory intervention.  Market dominance is sustainable for ever-shorter periods of time, with little opportunity for incumbents to exploit it.



A predictable next wave of technology will likely put a quick and definitive end to any “information empires” that have formed from the last generation of technologies.



Or, at the very least, do so more quickly and more cost-effectively than alternative solutions from regulation.  The law, to paraphrase Mark Twain, will still be putting its shoes on while the big bang disruptor has spread halfway around the world.



Unfortunately, much of the contemporary literature on competition policy from legal academics is woefully ignorant of even the conventional wisdom on strategy, not to mention the engineering realities of disruptive technologies already in the market.  Looking at markets solely through the lens of legal theory is, truly, an academic exercise, one with increasingly limited real-world applications.



Indeed, we can think of many examples where legacy regulation actually makes it harder for the incumbents to adapt as quickly as necessary in order to survive the explosive arrival of a big bang disruptor.  But that is a story for another day.



Much more to come.



Related links:



“Creating a ‘Politics of Abundance’ to Match Technology Innovation,” Forbes.com.



“Why Best Buy is Going out of Business…Gradually,” Forbes.com.



“What Makes an Idea a Meme?”, Forbes.com



“The Five Most Disruptive Technologies at CES 2013,” Forbes.com




Published on February 17, 2013 22:06

February 14, 2013

‘Technopanics’ Paper Published in Minn. Jour. of Law, Science & Tech

I’m excited to announce that the Minnesota Journal of Law, Science & Technology has just published the final version of my 78-page paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” My thanks to the excellent team at the Journal, who made the final product a much better paper than the one I turned in to them! I poured my heart and soul into this article and hope others find it useful. It’s the culmination of all my work on technopanics and threat inflation in information policy debates, much of which I originally developed here in various essays through the years. In coming weeks, I hope to elaborate on themes I develop in the paper in a couple of posts here.



The paper can be found on the Minn. J. L. Sci. & Tech. website or on SSRN. I’ve also embedded it below in a Scribd reader. Here’s the executive summary:



Fear is an extremely powerful motivational force. In public policy debates, appeals to fear are often used in an attempt to sway opinion or bolster the case for action. Such appeals are used to convince citizens that threats to individual or social well-being may be avoided only if specific steps are taken. Often these steps take the form of anticipatory regulation based on the precautionary principle. Such “fear appeal arguments” are frequently on display in the Internet policy arena and often take the form of a full-blown “moral panic” or “technopanic.” These panics are intense public, political, and academic responses to the emergence or use of media or technologies, especially by the young. In the extreme, they result in regulation or censorship. While cyberspace has its fair share of troubles and troublemakers, there is no evidence that the Internet is leading to greater problems for society than previous technologies did. That has not stopped some from suggesting there are reasons to be particularly fearful of the Internet and new digital technologies. There are various individual and institutional factors at work that perpetuate fear-based reasoning and tactics.



This paper will consider the structure of fear appeal arguments in technology policy debates, and then outline how those arguments can be deconstructed  and refuted in both cultural and economic contexts. Several examples of fear appeal arguments will be offered with a particular focus on online child safety, digital privacy, and cybersecurity. The various factors contributing to “fear cycles” in these policy areas will be documented. To the extent that these concerns are valid, they are best addressed by ongoing societal learning, experimentation, resiliency, and coping strategies rather than by regulation. If steps must be taken to address these concerns, education- and empowerment-based solutions represent superior approaches to dealing with them compared to a precautionary principle approach, which would limit beneficial learning opportunities and retard technological progress.



Technopanics and Threat Inflation [Adam Thierer - Mercatus Center]






Published on February 14, 2013 13:56

February 12, 2013

Christopher Yoo on the Internet’s changing architecture

Christopher Yoo

Christopher S. Yoo, the John H. Chestnut Professor of Law, Communication, and Computer & Information Science at the University of Pennsylvania and author of the new book, The Dynamic Internet: How Technology, Users, and Businesses are Transforming the Network, explains that the Internet we knew in its early days—one with a client-server approach, a small number of expert users, and a limited set of applications and business cases—has radically changed, and the architecture underlying the Internet may need to change as well.



According to Yoo, the internet we use today barely resembles the original Defense Department and academic network from which it emerged. The applications that dominated the early Internet—e-mail and web browsing—have been joined by new applications such as video and cloud computing, which place much greater demands on the network. Wireless broadband and fiber optics have emerged as important alternatives to transmission services provided via legacy telephone and cable television systems, and mobile devices are replacing personal computers as the dominant means for accessing the Internet. At the same time, the networks comprising the Internet are interconnecting through a wider variety of locations and economic terms than ever before.



These changes are placing pressure on the Internet’s architecture to evolve in response, Yoo says. The Internet is becoming less standardized, more subject to formal governance, and more reliant on intelligence located in the core of the network. At the same time, Internet pricing is becoming more complex, intermediaries are playing increasingly important roles, and the maturation of the industry is causing the nature of competition to change. Moreover, the total convergence of all forms of communications into a single network predicted by many observers may turn out to be something of a myth. Policymakers, Yoo says, should allow room for this natural evolution of the network to take place.



Download



Related Links


The Dynamic Internet: How Technology, Users, and Businesses are Transforming the Network, Yoo

The Changing Patterns of Internet Usage, Yoo

Network Neutrality and the Economics of Congestion, Yoo

Cloud Computing: Architectural and Policy Implications, Yoo



Published on February 12, 2013 03:00

February 11, 2013

FCC Incentive Auction: Will the FCC Pick a Winner Among Mobile Technologies?

Congress recently mandated that the Federal Communications Commission (FCC) make additional spectrum available through a novel incentive auction designed to transition television broadcast spectrum to mobile use. The FCC’s task is to adequately compensate television broadcasters for relinquishing their spectrum while ensuring such spectrum is rapidly transitioned to mobile uses that benefit consumers nationwide.



This will be the most challenging and complex auction design the FCC has ever attempted. The FCC cannot avoid the complexity inherent in this unique auction design, but it can emphasize simplicity and exercise restraint when considering the other service rules that will govern this new spectrum. To maximize its opportunity for success in this daunting circumstance, the FCC should leverage proven policies wherever possible.



Successful spectrum policies are critical to sustaining innovation, economic growth, and global competitiveness in the mobile era. Today, consumer demand for tablets and smartphones is straining the limits of mobile Internet capacity, which is threatening our nation’s lead in mobile innovation. The quickest and least costly way to increase mobile network capacity is to add spectrum, and the incentive auction is how the FCC intends to bolster our spectrum resources. The continued success of the mobile Internet thus depends on the success of the incentive auction, and the auction’s success depends on policy decisions that must be made by the FCC.



The world’s first incentive auction is not the time to reinvent spectrum policy from scratch. The FCC has already pioneered proven spectrum policies, including technological neutrality – a policy based on hard-won experience in the early days of cellular. Back in 1981, when the FCC established rules governing the first cellular service, it required that all cellular providers use a particular analog technology (the “AMPS” standard). The analog regulatory requirement remained in place for 27 years – long after its commercial viability had passed. By the time cellular providers were allowed to shut down their antiquated analog networks in 2008, many mobile broadband providers were already deploying third-generation mobile broadband technologies.



The FCC’s experience with the technological stagnation caused by its analog cellular requirement prompted it to adopt “flexible use” mobile policies beginning in 1993 with the Broadband PCS band and continuing through 2007 with the 700 MHz band. These flexible use policies allow spectrum licensees to make determinations regarding the services they will provide and the technologies they will use (including FDD- and TDD-based technologies) so long as they comply with the FCC’s technical rules. (For an explanation of FDD and TDD, see the “Technical Appendix” at the bottom of this post.)



Mobile service providers in the United States have generally supported the FCC’s technology neutral spectrum policies. For example, in 2004, when the FCC was considering a new band plan in the Broadband Radio Service, the “overwhelming majority” of commenters, including Sprint, argued that the band plan should be technology neutral. The FCC adopted Sprint’s view that the public interest would be best served by “not restricting the band to a particular technology,” which would allow “licensees and systems operators to deploy either FDD or TDD technology, and freely switch between the two as the technology develops and the marketplace demands evolve.”



Sprint recently reversed course. In its comments in the incentive auction proceeding, Sprint is asking the FCC to abandon its technology neutral approach to spectrum and mandate a TDD-only band plan. The result would be a regulatory prohibition on the deployment of FDD-based technologies in the new spectrum band. In addition to being inconsistent with proven policy, a TDD-only approach would prohibit proven practices. FDD-based deployments have predominated in the United States with excellent results. Though the rules governing the 700 MHz band are technology neutral, the LTE deployments that made the U.S. the world leader in mobile broadband use FDD, and that trend is expected to continue.



The return to restrictive spectrum policies Sprint seeks would be unwise. Sprint’s attempts to emphasize the potential advantages of TDD technologies (e.g., dynamic downlink/uplink ratios) while minimizing their potential disadvantages (e.g., synchronization and coverage limitations) are largely irrelevant. Even if Sprint were “right” today (i.e., that the potential advantages of TDD technologies currently outweigh their potential disadvantages for all potential use scenarios), it could very well be “wrong” tomorrow. Spectrum technologies and uses can change more rapidly than regulations, and spectrum users should have the flexibility to adapt to those changes without seeking permission from the regulator – which is precisely the point of technology neutrality.



It seems counterintuitive for mobile operators to advocate for regulations that limit their future options. Unsurprisingly, the overwhelming majority of mobile providers commenting in the incentive auction proceeding last week supported the FCC’s proposal to adopt a band plan capable of supporting FDD-based technologies. Sprint was the only mobile provider that asked the FCC to limit the flexibility of mobile providers to deploy FDD networks.



I expect Softbank, the Japanese company that is acquiring Sprint, may have influenced this advocacy. Softbank was one of the founding members of the Global TD-LTE Initiative launched in 2011 and the first mobile provider to deploy a commercial TDD-based LTE network in Asia. To Softbank, advocating for a TDD-only band wouldn’t seem unusual: many Asian countries mandate the deployment of TDD-based technologies. But it is unusual here, where technology neutrality has helped make the United States the global leader in mobile broadband deployment.



There may be many valuable things we can learn from other countries around the world, but restrictive technology policies aren’t among them. We have several decades of our own experience with the damage that such policies can do to innovation, and nothing indicates that importing them today would produce different results.



Technical Appendix

What’s the difference between FDD and TDD technologies? A mobile radio can’t transmit and receive on the same frequency simultaneously. Two-way transmissions must be divided in some way. In mobile systems, they are either divided by frequency (FDD) or time (TDD).



FDD



Frequency Division Duplex (FDD) allows simultaneous communications in both directions using two (or more) frequency bands that are divided by a “guard band” (also known as a duplex band or frequency offset). One or more frequency bands are used for communications from “base stations” to subscribers (the downlink), and another frequency band is used for communications from subscribers to base stations (the uplink).



An analogy for FDD is a two-lane road that allows cars traveling in opposite directions to pass one another safely.



TDD



Time Division Duplexing (TDD) uses the same frequency band for downlink and uplink transmissions but divides them into different timeslots. Instead of using a “guard band” to divide the downlink and uplink frequencies, TDD uses a “guard interval” to divide the transmissions by time. TDD systems emulate simultaneous communications by using brief timeslots that are not perceived by users.



A TDD system is analogous to a one-lane road with traffic controllers at each end.



FDD vs. TDD Comparison



Both FDD and TDD have their advantages and disadvantages.






Spectrum: FDD requires two divided channels; TDD requires only one channel.
Traffic: With FDD, uplink and downlink capacity is fixed by channel sizes (though channel sizes may be asymmetric); with TDD, the ratio of uplink and downlink capacity can be adjusted.
Distance: FDD is not affected by distance; with TDD, the guard interval increases with distance as signal propagation time increases, which decreases channel efficiency.
Latency: FDD adds no additional latency (Tx and Rx channels are always available); TDD latency may increase with multiplexing due to switching times between Tx and Rx.
Equipment: FDD requires additional filters and a duplexer; TDD imposes no significant additional costs (though a switch is required).
Synchronization: FDD's adjacent channel interference is much lower; TDD systems must be synchronized to avoid adjacent channel interference.




A detailed discussion of the tradeoffs between FDD and TDD systems is beyond the scope of this blog post. In general, however, FDD may be preferable for mobile applications designed to cover longer distances when traffic is relatively balanced and synchronization issues are likely; TDD may be preferable for mobile applications designed to cover shorter distances when traffic is relatively unbalanced and synchronization issues are limited.
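One trade-off in the comparison above lends itself to simple arithmetic: the TDD guard interval must at least cover the round-trip propagation time to the cell edge, so larger cells lose a bigger share of each frame to the guard period. The minimal sketch below illustrates this point; the 10 ms frame length and the cell radii are illustrative assumptions of mine, not values taken from any particular standard or from this post.

```python
# Back-of-the-envelope: minimum TDD guard period needed to absorb round-trip
# propagation delay to the cell edge, and the resulting frame overhead.
# The frame length and cell radii are illustrative assumptions.
C = 3.0e8            # speed of light, m/s
FRAME_S = 10e-3      # assumed TDD frame length: 10 ms

for radius_km in (1, 10, 30, 100):
    guard_s = 2 * radius_km * 1_000 / C   # round trip to the cell edge
    overhead = guard_s / FRAME_S          # share of the frame lost to the guard interval
    print(f"{radius_km:>4} km cell: guard >= {guard_s * 1e6:7.1f} microseconds "
          f"({overhead:.2%} of a {FRAME_S * 1e3:.0f} ms frame)")
```

At 1 km the guard interval is a negligible fraction of the frame, but at 100 km it consumes several percent of capacity before any data is sent, which is the efficiency penalty noted in the comparison above.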




Published on February 11, 2013 05:46

February 8, 2013

The Telecommunications Act of 1996 Turns 17 with No Future Plans

Today marks the seventeenth birthday of the Telecommunications Act of 1996. Since it became law nearly two decades ago, the 1996 Act has largely succeeded in meeting its principal goals. Ironically, its success is becoming its potential failure.



By the time most teenagers turn seventeen, they have already begun planning their future after high school. Their primary school achievements are only a beginning in a lifetime of future possibilities. For most legislation, however, there is no future after the initial goals of Congress are achieved. Fortunately, the seventeen-year-old 1996 Act isn’t like most legislation.



Congress recognized that when the goals of the 1996 Act were achieved, many of its regulations would no longer be necessary. In its wisdom, Congress provided the FCC with statutory authority to adapt our communications laws to future changes in the communications market. This authority includes the ability for the FCC to forbear from applying an unnecessary or outdated law.



Unfortunately, the FCC has been very reluctant to exercise this authority. It has instead preferred to remain within the familiar walls of stagnant regulations while the opportunity of Internet transformation knocks on the door. If the FCC refuses to use its forbearance authority, the only future for the 1996 Act is to live in the proverbial parents’ basement and eat 20th Century leftovers. If the FCC instead chooses to act, it could accelerate investment in new broadband infrastructure and the transition to an all-Internet future.



Looking Back: The 1996 Act Changed the Economic Model



Ironically, the 1996 Act was intended to prevent its older sibling, the Communications Act of 1934, from facing a similarly bleak future. The 1934 Act was premised on the theory that communications networks are natural monopolies. That theory began eroding in the 1960s when competitors to the old telephone monopoly began providing “long distance” and “competitive access services” (e.g., “special access”), which proved that facilities-based competition among multiple communications networks was sustainable. Although competition had developed in the long distance and special access markets, restrictions on competition in the 1934 Act, antitrust judgments, and state laws ensured that local markets remained closed to new entrants.



The 1996 Act was intended to transition our communications laws and regulations from the era of natural monopoly to an era of market competition by eliminating barriers to entry at the local level. At the signing ceremony, former President Clinton recognized that the “information revolution” was changing the old economic model and required a new regulatory approach:



 But this revolution has been held back by outdated laws, designed for a time when there was one phone company, three TV networks, no such thing as a personal computer. Today, with the stroke of a pen, our laws will catch up with our future. We will help to create an open marketplace where competition and innovation can move as quick as light.


Ironically, the changes wrought by the 1996 Act and Internet transformation are now threatening its legacy.



Looking Today: The 1996 Act Has Largely Met Its Principal Goals



The transition from natural monopoly to market competition envisioned by the 1996 Act is now largely complete. In its initial order implementing the Act, the FCC summarized the 1996 Act’s three principal goals for competition as follows:



 (1) opening the local exchange and exchange access markets to competitive entry; (2) promoting increased competition in telecommunications markets that are already open to competition, including the long distance services market; and (3) reforming our system of universal service so that universal service is preserved and advanced as the local exchange and exchange access markets move from monopoly to competition.


These goals have largely been met.




Opening Local Markets: In 1996, many Americans had access to only one (1) local telephone network. Today, the FCC reports that more than 90% of residential Americans have access to at least six (6) facilities-based networks that provide affordable local voice services (a “telephone” company, a “cable operator”, and at least 4 mobile providers). Multiplying consumer choice by six is more than enough to conclude that local markets have been opened to competition.
Increasing Competition in Long Distance: In 1996, there were four (4) facilities-based nationwide long distance networks: AT&T, MCI, Sprint, and WorldCom. Today, there are at least eleven (11) Tier 1 backbone providers (three more than in 2005). Though this market has been largely unregulated since the FCC completed its Section 271 market-opening proceedings in the early 2000s, the increase in facilities-based nationwide networks from four to eleven demonstrates increased competition in non-local markets since 1996.
Universal Service: The FCC initiated a comprehensive overhaul of its universal service and intercarrier compensation mechanisms in 2011. Though the USF/ICC Transformation Order is an important step in reforming these policies, the process of achieving this goal of the 1996 Act is still ongoing. (Ironically, progress in achieving this goal may have been hindered by other outdated assumptions in the Act.)


Looking Forward: The 1996 Act’s Forgotten Future



The world envisioned by the 1996 Act is one in which all providers will have new competitive opportunities as well as new competitive challenges. In its initial order implementing the Act, the FCC recognized that opening all communications markets to all providers would “blur traditional industry distinctions.” It also recognized that, “given the dynamic nature of telecommunications technology and markets, it will be necessary over time to review proactively and adjust these rules to ensure both that the statute’s mandate of competition is effectuated and enforced, and that regulatory burdens are lifted as soon as competition eliminates the need for them.”



At times it seems the FCC has forgotten these words and the future Congress had envisioned. Seventeen years later competition has clearly eliminated the need for scores of regulatory burdens, yet those burdens remain in place. Rather than embrace ongoing market changes, the FCC has relied on sophistry to deny reality. For example, in a 2010 order denying a forbearance petition filed by Qwest, the FCC concluded that mobile consumers that have “cut the cord” shouldn’t be counted when calculating residential voice market shares absent an economic analysis showing that mobile service “constrains the price of residential wireline service.” With approximately 40% of U.S. households relying entirely on mobile for their voice service, a better question may be whether wireless-only consumers would be willing to purchase Qwest’s plain old telephone service at any price.



It’s time to stop moving the goal posts. With the exception of universal service (a work in progress), the competitive goals of the 1996 Act have already been met. Before the 1996 Act turns eighteen and finds itself stuck in the proverbial basement, the FCC should establish new goals for the Internet era and use its statutory authority, including forbearance, to start achieving those goals. If the FCC’s Technology Transitions Policy Task Force moves quickly, we might have something to celebrate on the 1996 Act’s eighteenth birthday.




Published on February 08, 2013 11:22

On Mandating “Simplified” Privacy Policies

Via a Twitter post this morning, privacy lawyer Stephen Kline (@steph3n) brings to my attention this new California bill that “would require the privacy policy [of a commercial Web site or online service] to be no more than 100 words, be written in clear and concise language, be written at no greater than an 8th grade reading level, and to include a statement indicating whether the personally identifiable information may be sold or shared with others, and if so, how and with whom the information may be shared.”
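As a rough illustration of what checking compliance with such a mandate might involve, here is a short sketch that counts words and estimates a Flesch-Kincaid grade level for a draft policy. The bill specifies no particular readability formula, and the syllable counter below is a crude heuristic, so treat this as an assumption-laden toy rather than a legal test.

```python
import re

def syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def check_policy(text: str, max_words: int = 100, max_grade: float = 8.0):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words) or 1
    n_sents = len(sentences) or 1
    n_syll = sum(syllables(w) for w in words)
    # Flesch-Kincaid grade level formula.
    grade = 0.39 * (n_words / n_sents) + 11.8 * (n_syll / n_words) - 15.59
    return {
        "words": len(words),
        "grade_level": round(grade, 1),
        "within_word_limit": len(words) <= max_words,
        "within_grade_level": grade <= max_grade,
    }

print(check_policy("We collect your email address. We never sell it. "
                   "We share it only with payment processors."))
```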



I’ve always been interested in efforts — both on the online safety and digital privacy fronts — to push for “simplified” disclosure policies and empowerment tools. Generally speaking, increased notice and simplified transparency in these and other contexts are a good norm that companies should be following. However, as I point out in a forthcoming law review article in the Harvard Journal of Law & Public Policy, we need to ask ourselves whether the highly litigious nature of America’s legal culture will allow for truly “simplified” privacy policies. As I note in the article, by its very nature, “simplification” likely entails less specificity about the legal duties and obligations of either party. Consequently, some companies will rightly fear that a move toward more simplified privacy policies could open them up to greater legal liability. If policymakers persist in the effort to force the simplification of privacy policies, therefore, they may need to extend some sort of safe harbor provision to site operators for a clearly worded privacy policy that is later subject to litigation because of its lack of specificity. If not, site operators will find themselves in a “damned if you do, damned if you don’t” position: Satisfying regulators’ desire for simplicity will open them up to attacks by those eager to exploit the lack of specificity inherent in a simplified privacy policy.



Another issue to consider comes down to simple bureaucratic sloth: Mandatory “simplification” efforts mean a team of bureaucrats somewhere in this world — in this case in Sacramento, California, I guess — will have to become code cops. Websites and apps will suddenly become subject to a new regulatory regime and all that it entails. So, even if those enterprising trial lawyers don’t get online innovators first, the bureaucrats could make their lives miserable with reams of red tape over time (especially because it would be silly to think that this sort of meddling would end with “simplification” mandates). That could mean a lot less “permissionless innovation” and many more “Mother, May I?” permissioned proceedings instead.



Further, do we really want such Internet mandates to spring from the state level? As I noted in my recent essay on “The Perils of Parochial Privacy Policies,” such state-based Internet meddling — even when well-intentioned — could quickly become a confusing morass of overlapping, contradictory rules. Fifty different state Internet Bureaus aren’t likely to help the digital economy or serve the long-term interests of consumers. It could also open the door to potential Net-meddling on other fronts (online free speech, copyright, cybersecurity, online authentication, etc.). If “simplified” policies can be mandated at the state level for privacy, why not everything else? So, some degree of preemption may be in order here. If the movement of digitized bits across the Net isn’t “interstate commerce,” then I don’t know what is.



Just as an aside, it’s worth pointing out that simply because consumers do not necessarily read or understand every word of a company’s privacy policy does not mean that “market failure” exists. In my forthcoming Harvard Journal piece I discuss how disclosure policies or labeling systems work in other contexts and note that it is highly unlikely that consumers read or fully understand every proviso contained in the stacks of paper placed in front of them when they sign home mortgages, life insurance policies, or car loans and warranties. Such documents are full of incomprehensible provisions and stipulations, even though regulations govern many of these contracts. In these cases, I could argue that consumers face far more “risk” than they face by not fully comprehending online privacy policies. But life goes on. Consumers will never be perfectly informed in these or other contexts because they are busy with other things. In a similar way, a certain amount of “rational ignorance” about privacy policies should be expected.



Let me close by reiterating that increased notice and transparency in privacy and data collection/use policies is generally a good operational norm. But not every smart norm makes a smart law, and in this case there are some thorny unintended consequences that must be considered when policymakers propose “simplifying” privacy policies via state-based regulatory mandates.



[On a related note, my colleague Jerry Brito brought to my attention this interesting 2011 NPR piece on “Why Are Credit Card Agreements So Long?”]




Published on February 08, 2013 07:35

The Brookings Patent Report is Bogus

Brookings has a new report out by Jonathan Rothwell, José Lobo, Deborah Strumsky, and Mark Muro that “examines the importance of patents as a measure of invention to economic growth and explores why some areas are more inventive than others.” (p. 4) Since I doubt that non-molecule patents have a substantial effect on growth, I was curious to examine the paper’s methodology. So I skimmed through the study, which referred me to a technical appendix, which referred me to the authors’ working paper on SSRN.



The authors are basically regressing log output per worker on 10-year-lagged measures of patenting in a fixed effects model using metropolitan areas in the United States.
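To make that specification concrete, here is a minimal sketch of what such a metro-level fixed-effects regression might look like in Python. This is my illustration, not the authors' code: the file name and column names (metro_panel.csv, metro, year, log_output_per_worker, patents_per_worker) are hypothetical placeholders, and the fixed effects are approximated with metro and year dummies.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical metro-year panel; file and column names are placeholders.
df = pd.read_csv("metro_panel.csv").sort_values(["metro", "year"])

# 10-year lag of the patenting measure within each metro area.
df["patents_lag10"] = df.groupby("metro")["patents_per_worker"].shift(10)
df = df.dropna(subset=["patents_lag10"])

# Fixed-effects ("within") specification approximated with metro and year
# dummies; standard errors clustered by metro area.
fit = smf.ols(
    "log_output_per_worker ~ patents_lag10 + C(metro) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["metro"]})

print(fit.params["patents_lag10"], fit.bse["patents_lag10"])
```

The coefficient on patents_lag10 is the quantity being interpreted as the growth effect of patenting; the authors' exact estimator and controls may differ from this sketch.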





Continue reading on elidourado.com…




Published on February 08, 2013 06:58

February 5, 2013

Eli Dourado on WCITLeaks and internet governance

Eli Dourado

Jerry Brito and WCITLeaks co-creator Eli Dourado have a conversation about the recent World Conference on International Telecommunications (WCIT), a UN treaty conference that delved into questions of Internet governance.



In the lead-up to WCIT—which was convened to review the International Telecommunication Regulations (ITRs)—access to preparatory reports and proposed modifications to the ITRs was limited to International Telecommunication Union (ITU) member states and a few other privileged parties. Internet freedom advocates worried that the member states would use WCIT as an opportunity to exert control over the Internet. Frustrated by the lack of transparency, Brito and Dourado created WCITLeaks.org, which publishes leaked ITU documents from anonymous sources.



In December, Dourado traveled to Dubai as a member of the U.S. delegation and got an insider’s view of the politics behind international telecommunications policy. Dourado shares his experiences of the conference, what its failure means for the future of Internet freedom, and why the ITU is not as neutral as it claims.



Download



Related Links


WCITLeaks, Brito and Dourado

Final Acts of the World Conference on International Telecommunications, ITU

Behind closed doors at the UN’s attempted “takeover of the Internet”: Conflicting visions for the future of the Internet collide in Dubai, Ars Technica

WCIT is about People vs. Their Governments, The Technology Liberation Front



Published on February 05, 2013 03:00

February 4, 2013

All you need to know about “Super Wi-Fi” in one tweet

The D.C. tech world is abuzz today over a front page story in The Washington Post by Cecilia Kang announcing an exciting new plan from the FCC “to create super WiFi networks across the nation, so powerful and broad in reach that consumers could use them to make calls or surf the Internet without paying a cellphone bill every month.”



“Designed by FCC Chairman Julius Genachowski,” Kang explains, “the plan would be a global first.” And that’s not all: “If all goes as planned, free access to the Web would be available in just about every metropolitan area and in many rural areas.” Wow. Nationwide internet access for all and at no charge?!



Aggregators have run with this amazing news, re-reporting Kang’s amazing scoop. Here’s Mashable:




The proposal, first reported by The Washington Post, would require local television stations and broadcasters to sell wireless spectrum to the government. The government would then use that spectrum to build public Wi-Fi networks.




And here’s Business Insider:




The Federal Communications Commission wants to create so-called “super WiFi” networks all over the United States, sending the $178 billion telecom industry scrambling, The Washington Post‘s Cecilia Kang reports. … Under the proposal, the FCC would provide free, baseline WiFi access in “just about every metropolitan area and in many rural areas” using the same air wave frequencies that empower AM radio and the broadcast television spectrum.




Free Wi-Fi networks, folks! Wow, what an amazing new plan. But, wait a minute. Who is going to pay for these free nationwide networks? They’ve got to be built after all. Hmmm. It doesn’t seem like the article really explains that part. The cool thing about living in the future, though, is that you can just ask for clarification. So, DSLReports’ Karl Bode asked Kang:





Oh. You mean there’s no new plan? It’s the same incentive auction NPRM we’ve been talking about for months? And the only thing that’s new are (largely predictable) public comments filed last week? Well that’s a bummer. Not to worry, though, I’m sure the WaPo and Mashable and Business Insider and all the rest will be quick to clarify all of the confusion.



UPDATE: Parsing Kang’s story a little bit more since posting this, I’ve become even more confused. In her tweet she says she’s talking about the white spaces in the incentive auction NPRM, but those couldn’t possibly be used for a nationwide wireless network since they’d be low-power Part 15 type bands. Also, unlicensed use in the 600 MHz guard bands is not Chairman Genachowski’s design; it was allowed by Congress when it gave the FCC auction authority. So what is Kang referring to? Most likely it is the Chairman’s initiative, announced at CES earlier this month, to clear 195 MHz in the 5 GHz band to improve Wi-Fi. Now if that’s what Kang is talking about, then how does that square with this description of the spectrum in her piece:




The airwaves that FCC officials want to hand over to the public would be much more powerful than existing WiFi networks that have become common in households. They could penetrate thick concrete walls and travel over hills and around trees. If all goes as planned, free access to the Web would be available in just about every metropolitan area and in many rural areas.




That kind of description is usually reserved for low frequency bands like the 600 MHz bands in the incentive auction (which is what Kang said she was referring to). Bottom line, I think Kang conflated two separate proceedings into one big non-story that made it past the Washington Post‘s editors all the way to the top left corner of the front page. I hope there is a correction tomorrow.
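The physics behind that intuition can be shown with the standard free-space path loss formula. The sketch below is my own back-of-the-envelope comparison, not something from Kang's article: it compares 600 MHz and 5.8 GHz over the same 1 km link, and the roughly 20 dB gap, before even counting walls and foliage, is why "penetrates thick concrete walls and travels over hills" describes low-band spectrum rather than 5 GHz Wi-Fi.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula, d in km, f in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative comparison: the same 1 km link at 600 MHz vs. 5.8 GHz.
for f in (600, 5_800):
    print(f"{f:>5} MHz over 1 km: {fspl_db(1, f):5.1f} dB free-space loss")

# The gap is 20*log10(5800/600), about 19.7 dB, and real-world obstacles
# attenuate the higher frequency even further.
```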




Published on February 04, 2013 12:57
