Adam Thierer's Blog, page 81

November 8, 2012

Event Next Week: Previewing the World Conference on International Telecommunication

As some of you know, I’ve been closely following the World Conference on International Telecommunication, an international treaty conference in December that will revise rules governing, for example, how billing for international phone calls is handled. Some participants are interested in broadening the scope of the current treaty to include rules about the Internet and services provided over the Internet.



I haven’t written much publicly about the WCIT lately because I am now officially a participant—I have joined the US delegation to the conference. My role is to help prepare the US government for the conference, and to travel to Dubai to advise the government on the issues that arise during negotiations.



To help the general public better understand what we can expect to happen at WCIT, Mercatus has organized an event next week that should be informative. Ambassador Terry Kramer, the head of the US delegation, will give a keynote address and take questions from the audience. This will be followed by what should be a lively panel discussion among me, Paul Brigner from the Internet Society, Milton Mueller from Syracuse University, and Gary Fowlie from the ITU, the UN agency organizing the conference. The event will be on Wednesday, November 14, at 2 pm at the W Hotel in Washington.



If you’re in the DC area and are interested in getting a preview of the WCIT, I hope to see you at the event on Wednesday. Be sure to register now since we are expecting a large turnout.




Published on November 08, 2012 08:24

AT&T Accelerates Internet Transformation with Massive Investment in Broadband Infrastructure

Yesterday AT&T announced that it would invest an additional $14 billion in the next three years to expand its 4G LTE network to cover 300 million people and expand its wired all-IP broadband infrastructure to 75 percent of its customer locations throughout its 22-state wired service area. For many consumers, this investment will provide their first opportunity for access to high-speed broadband at home. For many others, it will provide their first opportunity to make a choice among competing providers of high-speed broadband services. This impressive commitment to transition outdated communications infrastructure to an all-IP future will benefit millions of consumers and accelerate our Internet transformation nationwide.



This plan offers an opportunity for the newly reelected Administration and Democratic victors in the Senate as well. By announcing plans to invest $14 billion in the U.S. economy the day after President Obama was reelected and the Democratic party increased its representation in the Senate, AT&T expressed its confidence in the leadership of the President and the Democratic party to move our economy forward. This should be especially welcome at a time when many American companies are investing their dollars overseas, which serves to weaken us while it strengthens our global competitors. It also allows the President to show progress on economic development and infrastructure modernization beginning on day one of his new term, which will continue until shortly before his legacy era begins.



The investment also provides the FCC with an opportunity to fulfill its vision of universal access to broadband services. The National Broadband Plan recognized that meeting our broadband goals would require a combination of significant private investment in more densely populated areas and public support in areas with low population density. AT&T expects its investment will provide high-speed broadband Internet access to 99 percent of customer locations in its wired service area. Its plan would significantly reduce the size of the broadband availability gap in 22 states and simplify the FCC’s efforts to administer the new Connect America Fund.



AT&T’s plans would also fulfill the promise of the WCS spectrum band at 2.3 GHz, which lay largely fallow for over a decade due to the potential for harmful interference to satellite radio. The FCC recently adopted an order resolving those concerns largely in accordance with a commercial agreement between AT&T and Sirius-XM that enables the use of 20 MHz of spectrum for mobile and 10 MHz of spectrum for fixed broadband nationwide. AT&T is seeking FCC authority to acquire a nationwide footprint in the band, which would presumably be used to support the 4G LTE deployment described in yesterday’s announcement.



Given the extraordinary benefits of this significant investment in our infrastructure and America’s future, one would expect the announcement would be met with universal praise – except universal praise doesn’t create trendy news.



David Goldman at CNNMoney accuses AT&T of lying to regulators last year about its spectrum needs during the review of its transaction with T-Mobile. Mr. Goldman believes it was obvious that AT&T would be able to negotiate a deal resolving a decade-long dispute over harmful interference issues in the 2.3 GHz band to create alternate spectrum supporting 4G deployment. Hindsight is 20/20. Similar to analysts who thought LightSquared could easily resolve complex interference issues in the L-band, Mr. Goldman grossly underestimates the difficulty in resolving these types of interference issues and the ongoing challenges involved in deploying and operating networks in such spectrum.



Stacey Higginbotham at GigaOM also managed to spin the announcement negatively. She described the $14 billion investment as a “fringe benefit” that would “leave rural America behind.” But she doesn’t say whom rural Americans would be left standing behind. Themselves? AT&T’s plan would provide 99 percent of homes in AT&T’s wired service area with access to all-IP infrastructure delivering high-speed broadband Internet. I doubt Ms. Higginbotham would suggest that we deny high-speed broadband Internet access to millions of consumers, including rural consumers, because the last one percent might require universal service funding. (Note that GigaOM’s founder, Om Malik, embraced the deployment of new technology, albeit with bittersweet emotions.)



The IP transition will undoubtedly involve some disruption to existing services and require updated regulations. But such disruptions are an inherent part of progress, and they were successfully overcome with minimal impact in the previous DTV and digital wireless transitions. With its recent experience in those earlier transitions, I am confident the FCC can enable a similarly smooth transition from switched telephone networks to all-IP networks, and now is the time to get started. The benefits of the transition to consumers and the economy are too great for the FCC to wait any longer.



This is an updated cross-post from the Communications Liberty and Innovation Project blog that was posted yesterday evening.




Published on November 08, 2012 07:19

November 6, 2012

Chris Anderson on 3D Printing and the Maker Movement


Chris Anderson, former Wired magazine editor-in-chief and author of Makers: The New Industrial Revolution, describes what he calls the maker movement.



According to Anderson, modern technologies, such as 3D printing and open source design, are democratizing manufacturing. The same disruption that digital technologies brought to information goods like music, movies and publishing will soon make its way to the world of physical goods, he says.



Anderson tells the story of his grandfather, who designed the first automatic sprinkler system in the 1960s, and how different such an invention story would be today. He also discusses his own firm, 3D Robotics, and the policy challenges facing emerging manufacturing technology.






Related Links


Makers: The New Industrial Revolution, by Anderson
Wired Editor-in-Chief Chris Anderson Steps Down to Run Robotics Startup, Wired
Wired’s Anderson on 3D Robotics Drone Sales, Bloomberg



Published on November 06, 2012 07:50

November 4, 2012

The Precautionary Principle Meets Driverless Cars in DC

The precautionary principle generally states that new technologies should be restricted or heavily regulated until they are proven absolutely safe. In other words, out of an abundance of caution, the precautionary principle holds that it is “better to be safe than sorry,” regardless of the costs or consequences. The problem with that, as Kevin Kelly reminded us in his 2010 book, What Technology Wants, is that because “every good produces harm somewhere… by the strict logic of an absolute Precautionary Principle no technologies would be permitted.” The precautionary principle is, in essence, the arch-enemy of progress and innovation. Progress becomes impossible when experimentation and trade-offs are considered unacceptable.



I was reminded of that fact while reading this recent piece by Marc Scribner in the Washington Post, “Driverless Cars Are on the Way. Here’s How Not to Regulate Them.” Scribner highlights the efforts of the D.C. Council to regulate autonomous vehicles. A new bill introduced by Council member Mary Cheh (D-Ward 3) proposes several preemptive regulations that driverless autos would have to satisfy before being allowed on the streets of Washington. Scribner summarizes the provisions of the bill and their impact:




“requires that a licensed driver be present in the driver’s seat of these vehicles. While seemingly inconsequential, this effectively outlaws one of the more promising functions of autonomous vehicle technology: allowing disabled people to enjoy the personal mobility that most people take for granted.”
“requires that autonomous vehicles operate only on alternative fuels…. [which] could delay the technology’s widespread adoption for no good reason.”
“would impose a special tax on drivers of autonomous vehicles” which would “greatly restrict[] the use of a potentially revolutionary new technology by singling it out for a new tax system.”


The first of these provisions is the one that most closely resembles the traditional precautionary principle, but the other provisions are based on a similar instinct that progress can be preemptively planned. Yet, as Scribner correctly notes,



no one knows precisely how autonomous vehicle technology will develop or be adopted by consumers. Cheh’s bill presumes to predict and understand these future complexities and then imposes a regulatory straitjacket based on those assumptions. . . . Cheh’s bill will unduly restrict many promising vehicle features, prevent the wider voluntary adoption of this promising technology through foolish green-government paternalism and create a new tax system without proper consideration.


That’s exactly right and it’s the perfect answer to those who advocate the precautionary principle mindset. Trying to dictate progress and safety from above sometimes means you’ll get less of both.




Published on November 04, 2012 11:04

November 3, 2012

Obama Lags House Republicans on Data Transparency

It’s time to roll out transparency grades!



This isn’t anything innovative, but part of my strategy for improving government transparency is to give public recognition to the political leaders who get ahead on transparency and public disapprobation to those who fall behind. So I have a Cato Institute report coming out Monday that assesses how well government data is being published. (Oversight data, that is: reflecting deliberations, management, and results.)



I went ahead and previewed it on the Cato blog last night. The upshot? I find that President Obama lags House Republicans in terms of data transparency.



Neither is producing stellar data, but Congress’s edge is made more acute by the strong transparency promises the president made as a campaigner in 2008, which remain largely unrealized. My pet peeve is the lack of a machine-readable government organization chart, not even at the agency and bureau level. The House is showing modest success and promising signs, with some well-structured data at docs.house.gov and good potential at beta.congress.gov.
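To make that pet peeve concrete, here is a minimal sketch of what a machine-readable organization chart nested down to the bureau level could look like. The structure and field names are hypothetical, invented for this illustration rather than drawn from any actual government data standard.

```python
import json

# A hypothetical machine-readable organization chart, nested down to
# the bureau level. Field names are invented for this sketch; no
# official schema is implied.
org_chart = {
    "name": "Department of Example",
    "type": "agency",
    "units": [
        {"name": "Bureau of Records", "type": "bureau", "units": []},
        {"name": "Bureau of Programs", "type": "bureau", "units": []},
    ],
}

# Publishing data like this would let anyone traverse the government
# programmatically instead of scraping org charts out of PDFs.
print(json.dumps(org_chart, indent=2))
```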



I hustled to get these grades out before the election, and maybe there are one or two marginal voters whom this study might sway. How it might sway them is an open question, and I’ve had some interesting reactions to the release of the study, such as: Is this electioneering? Shouldn’t there be an assessment of Romney on transparency?



It’s not electioneering, which is advocating for a specific candidate or party. The study says nothing about what to do with the information it provides. I do believe politicians should be held to account for their transparency practices. The primary way politicians are held accountable is at the ballot box. Thus, communicating to the public about the performance of public officials in a given area at election time is one of the best ways to affect their behavior.



The methodology used in this report gives us the ability to track progress going forward, and it creates better incentives for improvement because you can tie the quality of actual important data to the officials responsible for it. But it doesn’t allow us to go back in time and grade the condition of data in the past (barring a huge effort to recreate what resources were available). And it doesn’t allow us to grade candidates for office, who don’t have any responsibility for any data we care about. So I can say, because I believe it, that President Obama is almost certainly better than President Bush was, and I’ve heard that Mitt Romney was bad on transparency as a governor. But I don’t have data to confirm these things.
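For a sense of how a methodology like this can tie data quality to the responsible officials and track it over time, here is a hypothetical sketch of a grading rubric. The criteria, weights, and scores are invented for illustration; they are not the report’s actual methodology.

```python
# Hypothetical sketch of a data-publication grading rubric. The
# criteria and scores below are invented for illustration; they are
# not the actual methodology used in the Cato report.
CRITERIA = ("exists", "machine_readable", "authoritative", "complete")

def grade(scores):
    """Average a 0.0-1.0 score across each publication criterion."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

# Scoring the same dataset in 2012 and again in 2014 would show
# whether the officials responsible for it improved their practices.
legislative_data_2012 = {"exists": 1.0, "machine_readable": 0.6,
                         "authoritative": 0.9, "complete": 0.4}
print(f"2012 grade: {grade(legislative_data_2012):.0%}")  # -> 72%
```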



We’ll do this study again—and better!—in two years, and again in four. We will be measuring progress and calling it out for the public to consider. We’ve put together a pretty good methodology for assessing data publication, I think, and the division of responsibility for data among political leaders is pretty clear. So this instrument will be a way for the public to assess progress on something they want.



Thanks to the folks at GovTrack.us, the National Priorities Project, OMB Watch, and the Sunlight Foundation, who helped me review the government’s data publication practices. (Their help does not imply agreement with MY conclusions.)




Published on November 03, 2012 13:33

November 2, 2012

Roosevelt Tried To Abolish the FCC

No doubt you are aware that the Communications Act of 1934 established the Federal Communications Commission, which has profoundly affected the broadcast, cable, telecommunications, and satellite industries. You will recall that the legislation was signed into law by President Franklin D. Roosevelt. What you may not realize is that President Roosevelt made two subsequent attempts to abolish the Federal Communications Commission.



On Jan. 23, 1939, Roosevelt wrote identical letters to Senator Burton K. Wheeler and Congressman Clarence F. Lea urging dramatic FCC reform.



I am thoroughly dissatisfied with the present legal framework and administrative machinery of the [Federal Communications] Commission. I have come to the definite conclusion that new legislation is necessary to effectuate a satisfactory reorganization of the Commission.

New legislation is also needed to lay down clear Congressional policies on the substantive side – so clear that the new administrative body will have no difficulty in interpreting or administering them.

I very much hope that your committee will consider the advisability of such new legislation.


Although proposals for FCC reorganization were introduced at the time, Congress did not act. Then World War II intervened. It wasn’t until 1996 that Congress “comprehensively” updated the 1934 Act. But the 104th Congress left the “present legal framework and administrative machinery of the Commission” intact, and it failed to “lay down clear Congressional policies on the substantive side.”



Roosevelt wanted to transfer the functions of all independent agencies like the FCC to cabinet departments.  A 1937 initiative for this purpose failed.  Two years later, Roosevelt took aim at the FCC directly.



Roosevelt’s specific issues with the FCC of the 1930s are a subject for a subsequent essay (they were primarily on the radio side, although also relevant to the telephone side). In any event, his 1939 letter reinforces a libertarian critique of the 1934 Act. The law was overly broad and created too much room for the FCC to establish its own policy preferences instead of serving to enforce the policies of elected congressional representatives and the president.



Although well-intentioned, the FCC was (even to its most famous creator) a disappointment and a mistake. The 113th Congress should carefully consider the 32nd president’s advice.




Published on November 02, 2012 00:13

November 1, 2012

LPFM Will Likely Fail Again, Unfortunately

“All this Top 40 music sounds the same.” I think we’ve all heard this sentiment. The nature of regional radio broadcasting almost requires a regression to the mean in musical tastes. A radio station cannot be all things to all people. I suspect most people will be surprised to learn that some of the most innovative radio broadcasts are taking place at hundreds of stations across the country—and only a few people can listen to them. These stations, known as low power FM (LPFM), carry niche programming like independent folk rock music, fishing shows, political news, reggae, blues, and religious programming. (And one station in Sitka, Alaska, consists entirely of a live feed of whale sounds.)



The FCC began licensing LPFM stations in 2000.  These tiny stations typically cost under $10,000 to create but by law have the power to broadcast their signals only 3.5 miles out (the typical full power FM station has a 26-mile range).  Because of their limited listening area and alternative formats, LPFM stations have small but loyal audiences.
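Those range figures understate how lopsided the comparison is, because coverage area grows with the square of the radius. A back-of-the-envelope calculation, treating each station’s coverage as a simple circle (a rough assumption that ignores terrain and antenna differences):

```python
import math

# Back-of-the-envelope coverage comparison implied by the ranges
# above, treating each station's coverage as a simple circle.
lpfm_radius_mi = 3.5     # LPFM broadcast radius
full_radius_mi = 26.0    # typical full power FM radius

lpfm_area = math.pi * lpfm_radius_mi ** 2
full_area = math.pi * full_radius_mi ** 2
print(f"LPFM coverage:       {lpfm_area:7.0f} sq mi")   # ~38 sq mi
print(f"Full power coverage: {full_area:7.0f} sq mi")   # ~2,124 sq mi
print(f"Ratio: {full_area / lpfm_area:.0f}x")           # ~55x
```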



Under traditional FCC station-spacing rules, over 100,000 LPFM stations potentially could be broadcasting in the United States. Yet despite the FCC’s hopes of “thousands of new voices” on the airwaves, today there are fewer than 1,000 LPFM stations. To this day, only one LPFM station is located in a top 50 media market, where most radio audiences live. Why, more than a decade after these stations were first allowed, are so few in existence?



A look at the regulatory restrictions imposed on LPFM stations makes clear why so much of that potential remains untapped. Power limits aside, LPFM stations are subject to onerous ownership and advertising rules, pushed typically by the leftist media groups that lobbied for the service. LPFM stations can be licensed only to local entities, and those entities cannot own more than one station.



Further—and most limiting—stations must be noncommercial. Despite their hyperlocal appeal, LPFM stations are prohibited from running advertisements from local restaurants, churches, retail stores, and car dealers. Forced to rely mostly on donations and volunteer staff, few stations ever get on the air.



Several forces conspired to bring about these crippling restrictions. The FCC has long pushed for “localism” in broadcast radio, hence the local ownership restrictions. Further, many of the activists who pushed for LPFM stations are suspicious of large commercial enterprises and wanted noncommercial mandates. These groups envisioned a nationwide network of nonprofit cooperatives broadcasting music and news for those with alternative tastes. They unwittingly ensured that such a development would never become reality outside of a few rural college towns. (It’s also deliciously ironic, and likely torments the leftist groups that pushed for LPFM, that most stations seem to be church-affiliated. But what other national nonprofit organizations can run stations staffed mainly by volunteers and funded by donations?)



Additionally, the full power FM stations we all listen to in the car foresaw diminishing market share if upstart companies were able to string together several LPFM stations and siphon off some of their ad revenue and audience. When it became obvious a decade ago that LPFM was going forward, I imagine full power stations didn’t object to the FCC and the activist groups’ efforts to make LPFM noncommercial and local.



Suddenly, this year, people are excited about LPFM again. You see, after LPFM licensing began over a decade ago, the FCC and activists quickly saw that their vision of thousands of new stations wasn’t realistic. In light of the disappointing launch, the FCC lobbied Congress for years to expand LPFM licensing opportunities. In response, Congress passed the Local Community Radio Act in 2011, which only marginally expanded opportunities for LPFM licensees. I wish new LPFM applicants the best, but I don’t think there’s any reason to be excited.



First, the impact of the 2011 law is minimal and shows the futility of the FCC playing catch-up to the marketplace. The process to approve more LPFM stations took years. In the meantime, listeners have gained several platforms for instant music access, including Pandora and Spotify streaming, iTunes, Sirius-XM radio, and cloud-based music storage. And with the increasing popularity of smartphones, it has never been easier to have a personalized, portable music selection.



Still, the FCC and the Congress spent years only nibbling at the edges of the matured broadcast radio market.  The modest change in the 2011 law, recently implemented, might enable dozens or perhaps a few hundred more stations.  But when 100,000 LPFM stations is the approximate ceiling, it’s clear how little things have really changed.  The activists will blame Big Radio for limiting LPFM in Big Radio’s markets, but the blame belongs equally to the noncommercial mandates.



LPFM stations, with liberalized rules that allowed them to accept commercial sponsors and band together, could provide a dynamic alternative to the current music and radio broadcast landscape. Instead, after a decade of filings, notices, and rule changes, the FCC and the activists will have LPFM right back where it started—small, isolated, and rare.




Published on November 01, 2012 16:13


October 31, 2012

Event: Will the UN Take Over the Internet? Previewing the World Conference on International Telecommunication

As you know, Eli Dourado and I have been keeping close tabs on the World Conference on International Telecommunication (WCIT), which will take place this December in Dubai. We started WCITLeaks.org in June to help bring transparency to the treaty negotiations because U.S. officials and others had warned that some member states, including Russia and China, had put forth proposals to regulate the Internet, and those proposals were not available to the public. Since then, Eli has joined the State Department’s International Telecommunication Advisory Committee, and he will be part of the U.S. delegation to the conference.



Now we’d like to invite you to a final briefing about the conference, looking at what’s at stake and what to expect, on Wednesday, November 14, at 2 p.m. at the W Hotel in downtown D.C. We will have a keynote address by Ambassador Terry Kramer, head of the U.S. delegation to the conference, as well as a panel discussion including Gary Fowlie from the United Nations, Paul Brigner from the Internet Society, and Internet governance expert Milton Mueller.



We hope you will join us for what will no doubt be a lively discussion! Below is the full event information. Please RSVP today.







Please join the Mercatus Center for a panel discussion on the upcoming World Conference on International Telecommunication (WCIT). Once in a generation, governments from around the world gather to revise the International Telecommunication Regulations, a UN-sponsored treaty that governs international telecom practices. This year’s meeting is especially important because it is the first since the widespread adoption of technologies such as the Internet and mobile phones.



In the U.S. there has been widespread concern that the UN may want to exert greater control over the Internet, and the House recently voted unanimously for a resolution opposing any such move. Additionally, there are concerns that the new regulations could require U.S. web companies to pay to deliver content over the Internet.



The panel will preview these and other issues that are likely to arise at the WCIT meeting that begins in Dubai on December 3. The event will also feature a keynote address by Ambassador Terry D. Kramer, the head of the U.S. delegation to the WCIT.



Panel Discussion:



Eli Dourado, Research Fellow, Mercatus Center at George Mason University & co-creator, WCITLeaks.org



Paul Brigner, Regional Director of the North American Bureau, Internet Society



Milton Mueller, Professor, Syracuse University School of Information Studies & co-founder, Internet Governance Project



Gary Fowlie, Head of the Liaison office of the International Telecommunication Union to the United Nations




Published on October 31, 2012 12:49

America’s Internet Transformation Demands an All-IP Future

If the FCC stops moving forward on Internet transformation, the universal service and intercarrier compensation reform order will become a death warrant for telephone companies.



CLIP hosted an event earlier this month to discuss Internet transformation. What is Internet transformation? In a recent op-ed, FCC Commissioner Ajit Pai noted that it “is really two different things—a technology revolution and a regulatory transition.”



The technology revolution began with the commercialization of the Internet, which enables the delivery of any communications service over any network capable of handling Internet Protocol (IP). According to the National Broadband Plan, the “Internet is transforming the landscape of America more rapidly and more pervasively than earlier infrastructure networks.” In little more than a decade, the Internet destroyed the monopoly structure of the old communications industry from within and replaced it with intermodal competition.



As the economist Joseph Schumpeter famously observed, “Creative Destruction is the essential fact about capitalism.” Unfortunately, the same cannot be said of regulation. Years after the Internet debunked the 20th-century notion that telephone service is a natural monopoly, the Communications Act soldiers on as if the Internet did not exist. The theory of natural monopoly assumes the market will support only one facilities-based telephone network. The current regulatory scheme is premised on this theory even though the overwhelming majority of consumers today can obtain telephone service from at least six different facilities-based communications companies: the incumbent telephone company, the incumbent cable operator, and four nationwide mobile providers.



Though they provide similar services, telephone, cable, and mobile companies are subject to very different legal requirements. When Congress overhauled the Communications Act in 1996, the commercial Internet was still in its infancy. Different communications services were generally provided by different network architectures: Cable systems provided one-way video programming, cellular networks provided mobile telephony, and the public switched telephone network provided plain old telephone service. Congress assumed this traditional status quo would continue indefinitely and fashioned the law accordingly.



Unfortunately, this assumption was outdated almost as soon as it was made. Millions of American consumers have been cutting the telephone cord in favor of mobile telephony for a decade. Consumers have been able to access high-speed Internet services over multiple, IP-based network architectures for a decade as well, including largely unregulated Wi-Fi hotspots that are routinely available for “free” in most metro areas. Consumers have viewed the “triple play” packages provided by cable and telephone companies over fiber networks as competitive substitutes for years. Their respective status as “cable” and “telephone” companies remains relevant only in outdated statutory definitions that now serve primarily as a source of rent seeking for opportunists who are generally unwilling to invest in their own networks.



Companies that benefit from these rent-seeking opportunities have long claimed that policies favoring certain companies at the expense of others promote “competition.” The DC Circuit rejected this reasoning in 2004, when it remanded FCC broadband rules applicable only to telephone companies because the FCC had failed to consider the importance of intermodal broadband competition from cable providers.



In its 2011 order reforming outdated intercarrier compensation policies and establishing the Connect America Fund (the “CAFIC Order”), the FCC finally conceded that “leveling the playing field” promotes competition by allowing consumers to more accurately compare service offerings from telephone companies, cable companies, and wireless providers. The FCC recognized that its legacy policies were “designed for an era of separate long-distance companies” and “established long before competition emerged among telephone companies, cable companies, and wireless providers.” It also recognized that the implicit subsidies provided by intercarrier compensation are “a deterrent to deployment of all IP networks” and “unfair for consumers.” The FCC decided to “promote innovation by eliminating barriers to the transformation of today’s telephone networks into the all-IP broadband networks of the future.” It began this transition by phasing out intercarrier compensation.



The CAFIC Order took a significant step toward fulfilling the vision of Internet transformation, but it is only the first step. The FCC left the details of its implementation to future proceedings and has yet to address many critical regulatory transition issues at all. Revising a regulatory framework developed over nearly a century is no easy task, and the FCC should be commended for committing to move forward. Now that the journey has begun, however, the FCC must keep moving.



Internet transformation will leave no company untouched. It is affecting laws and regulations governing every aspect of communications policy and broader issues as well, including privacy, copyright, and free speech, all of which should be considered during the regulatory transition. Ironically, however, telephone companies are among the most vulnerable to delay. If the FCC stops moving forward on Internet transformation, the CAFIC Order will become a death warrant for telephone companies.



The CAFIC Order encourages Internet transformation by eliminating the largest source of support for the switched telephone network – intercarrier compensation. Although removing implicit subsidies for switched telephone service is a necessary step in promoting the deployment of all-IP networks, it is not sufficient by itself: The FCC must also eliminate regulations requiring telephone companies to continue offering 1930s–1960s era technology in the form of switched telephone services. A telephone company that deploys an all-IP infrastructure cannot capture the increased efficiencies produced by its investment in modern infrastructure if it is required to continue supporting a duplicative (and inefficient) switched telephone network. Without additional reform, the CAFIC Order will result in a significant reduction in revenue for telephone companies without a corresponding reduction in their cost of providing service. That result is unsustainable in a communications market that increasingly demands broadband services.
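A toy example may make the squeeze clearer. Suppose intercarrier compensation disappears while the obligation to run both networks stays; the figures below are invented for illustration, not drawn from any carrier’s financials.

```python
# Toy illustration of the revenue/cost squeeze described above.
# All figures are invented for this sketch.
revenue_other = 80.0    # non-ICC revenue, per line per year
revenue_icc = 20.0      # intercarrier compensation, being phased out
cost_switched = 45.0    # legacy switched network the carrier must keep
cost_ip = 45.0          # the all-IP network it wants to run

profit_before = (revenue_other + revenue_icc) - (cost_switched + cost_ip)
profit_after = revenue_other - (cost_switched + cost_ip)  # costs unchanged

print(f"Profit before ICC phase-out: {profit_before:+.0f}")  # +10
print(f"Profit after ICC phase-out:  {profit_after:+.0f}")   # -10
```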



If telephone companies must continue to maintain outdated and inefficient switched networks while their unregulated broadband competitors reap the benefits of modern network technologies, telephone companies will continue to lack incentives to invest in all-IP networks. In the long run, they would be unable to compete with cable operators and other providers of IP-based services who are not required to maintain inefficient, duplicative networks. Faced with that future, telephone companies would have every incentive to invest their capital elsewhere, which would reduce opportunities for additional consumer choice in the wired broadband segment.



If that happens, consumers will lose a potential broadband competitor and the public interest will suffer. Consumers are already feeling the frustration of a regulatory system that has not kept up with the pace of Internet transformation in the market for communications services. I told a true story at the CLIP event about a former employee of mine who “cut the cord” so long ago he didn’t know he was required to dial a “1” before making a long distance call on a wireline telephone. He did not know there was a difference between “local” and “long distance” calls in FCC regulation. From his perspective, the requirement to dial a “1” was arbitrary and confusing.



The FCC knows the United States must seize the opportunity for Internet transformation through regulatory reform or “we will fall behind those countries that do.” The communications industry knows it. Consumers know it too. They know Internet transformation will be disruptive, but they also know that disruption is the constant companion of innovation – and innovation is the key to our global competitiveness.



I expect it is the inevitability of disruption that prompted the so-called Broadband Coalition to claim that Internet transformation is a “lie.” If you want to know the “truth” about Internet transformation, you don’t have to take my word for it.




You can read the National Broadband Plan or the FCC’s Connect America Fund order, which describes the nation’s goal as building “the all-IP broadband networks of the future.”
You can read this announcement by T-Mobile describing its all-IP backhaul strategy as the key to a competitive 4G experience. According to T-Mobile, “A 4G network without appropriately dimensioned backhaul is like building a mile of six-lane highway (the radio network) that converges into a one lane dirt road (Time-division multiplexing (TDM) circuits).” T-Mobile says it has completed backhaul upgrades (95% of which are fiber) throughout its 4G network, which gives it a competitive advantage over those who “continue to use [TDM-based] T1s at cell sites for backhaul, which provide a slower connection to the Internet.”
You can read this announcement from Sprint Nextel stating that its backhaul network upgrade from T1s to Ethernet will increase Sprint’s bandwidth by 20 times at each cell site and reduce its cost per bit by 95 percent.
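Sprint’s two figures are mutually consistent if you assume the cost of serving each cell site stays roughly constant while its capacity grows twenty-fold; cost per bit then falls to one-twentieth, a 95 percent reduction. A quick sanity check (the constant-cost assumption is mine, not Sprint’s):

```python
# Quick consistency check on Sprint's figures, assuming the cost of
# serving each cell site stays roughly constant while its capacity
# grows. That assumption is mine, not Sprint's.
bandwidth_multiplier = 20             # Ethernet backhaul vs. T1s
new_cost_per_bit = 1 / bandwidth_multiplier
reduction = 1 - new_cost_per_bit
print(f"Cost per bit falls to {new_cost_per_bit:.0%} of the old level, "
      f"a {reduction:.0%} reduction")  # -> 5%, a 95% reduction
```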


Sprint Nextel and T-Mobile compete in the lightly regulated mobile wireless segment in which market-based competition drives network deployment and innovation. Their announcements indicate the market has already spoken: The future of mobile wireless broadband is all-IP infrastructure. Shouldn’t consumers of wired broadband services have an opportunity to enjoy that future too?



It is the future consumers want. It’s the future that innovation entrepreneurs in Silicon Valley want too. Broadband applications and devices rely on high-speed IP infrastructure to reach consumer markets. The longer we wait to upgrade our networks, the more uncertainty there will be for entrepreneurs eager to create new IP-based products and services and distribute them to a wider audience. Even Hollywood can agree with Silicon Valley on this issue. Content producers of all kinds benefit from the ability of all-IP networks to support the distribution of video and other high-bandwidth services online.



The Broadband Coalition, which says it represents America’s “innovative” broadband providers, claims it “doesn’t matter” whether packets are “organized” using TDM (used by T1 lines) or IP technology. I suspect engineers at the FCC, T-Mobile, and Sprint would be surprised to hear that transitioning to IP “doesn’t matter.” Does the Broadband Coalition intend to contact the Federal Trade Commission and alert them to the “false” claims of T-Mobile and Sprint about the significantly faster speeds and lower costs of their IP backhaul networks? Not when the technical and economic advantages of all-IP networks tell the true story so plainly.
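For readers wondering why how packets are “organized” matters, here is a toy sketch contrasting TDM’s fixed timeslots with statistical packet multiplexing. The channel counts and duty cycle are illustrative assumptions, not figures from any of the announcements above.

```python
import random

# Toy model of why the multiplexing scheme matters. TDM dedicates a
# fixed timeslot to each user whether or not it is transmitting; a
# packet network lets active users share whatever capacity is idle.
# All numbers here are illustrative assumptions.
random.seed(42)
CHANNELS = 24                      # a T1 carries 24 fixed 64 kbps slots
SLOT_KBPS = 64
LINK_KBPS = CHANNELS * SLOT_KBPS   # 1,536 kbps of payload

# Suppose each user is actually transmitting only ~30% of the time.
active = sum(random.random() < 0.3 for _ in range(CHANNELS))

tdm_useful = active * SLOT_KBPS    # idle TDM slots are simply wasted
print(f"TDM: {tdm_useful} of {LINK_KBPS} kbps doing useful work "
      f"({100 * tdm_useful / LINK_KBPS:.0f}% utilized)")

# Statistical (packet) multiplexing wastes no idle slots, so the same
# link could serve roughly 1/0.3 ~= 3x as many bursty users.
print(f"Packet multiplexing: same link supports ~{int(CHANNELS / 0.3)} "
      f"bursty users instead of {CHANNELS}")
```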



I did some online research to see whether other countries believe IP networks matter. China recently focused its “Broadband China” initiative on the deployment of all-IP networks. This initiative aims to provide high-speed broadband connections to more than 250 million households in urban and rural locations in China by 2015, and is expected to connect 35 million new families to the Internet with fiber-to-the-home networks by the end of 2012. I did not see any announcements regarding new “TDM-to-the-home” network deployments, and I don’t expect we’ll be hearing about new deployments of this 1960s era technology to consumer neighborhoods anytime soon. The global consensus appears to favor all-IP networks for the foreseeable future.



If someone is being disingenuous about Internet transformation, it’s the Broadband Coalition. It says continued FCC enforcement of the current regulatory framework will enable “innovative” technology providers to offer lower prices and “technology breakthroughs.” The FCC has been following that prescription for nearly two decades, yet it concluded in its most recent broadband progress report that “broadband is not yet being deployed ‘to all Americans’ in a reasonable and timely fashion.” Americans cannot afford to wait another 18 years to see whether an outdated regulatory regime adopted in 1996 will eventually work. Delaying regulatory reform might shield some companies from competitive disruption in the short-term, but domestic regulatory policies cannot stop Internet transformation from occurring internationally.



It is past time the United States adopted a regulatory framework designed for today’s communications markets – one that serves the needs of consumers and enhances our global competitiveness. At the CLIP event on Internet transformation, Commissioner Pai recommended that the FCC create an IP Transition Task Force to “track down and remove all the tariffs, the arcane cost studies, and the hidden subsidies that distort competition for the benefit of companies, not consumers,” while “preserv[ing] the vital consumer protections that are still likely to be needed in an all-IP world.” He believes a comprehensive approach to regulatory transition is required to meet “the great infrastructure challenge of the early 21st century” – high-speed Internet access for all Americans.



The Broadband Coalition recommends that we adopt the Homer Simpson approach to this challenge: “hide under some coats, and hope that somehow everything will work out.” That approach would be no better for the United States than it was for Homer Simpson. As the National Broadband Plan recognized, “the choice is not whether the transformation will continue. It will.” The choice is whether we, as a nation, will rise to meet the challenge.




Published on October 31, 2012 12:33
