Adam Thierer's Blog, page 59

August 7, 2013

Is there really a cable/mobile duopoly in America?

This is the third of a series of three blog posts about broadband in America in response to Susan Crawford’s book Captive Audience and her recent blog post responding to positive assessments of America’s broadband marketplace in the New York Times. Read the first and second posts.



In Crawford’s mind, this is a battle between the oppressor and the oppressed: big cable and big mobile vs. consumers. Consumers can’t switch from cable because there are no adequate substitutes. Worst of all, she claims, the poor are hardest hit because they have “only” the choice of mobile.



Before we go deeper into these arguments, we should take a look back. It was not long ago that we didn’t have broadband or mobile phones. In less than two decades, our society and economy have been transformed by the internet, and we have evolved so quickly that we can now discuss which kind of network we should have, how fast it is, which kind of device to use, and even how the traffic should be managed on that network. The fact that we have this discussion shows the enormous progress we’ve made in a short time. Plus we can discuss it on a blogging platform, yet another innovation enabled by the internet.



Defining Competition:  Economists vs. Lawyers



Economists and lawyers differ on how they define competition. Economists define a competitive market as one with many firms, homogeneous products, free entry and exit, independent decision-making among firms, and complete information. They define an oligopoly by how far a market departs from these conditions. Lawyers, on the other hand, have in mind a standard of evaluation and the extent to which firms serve the public interest. The legal definition is necessarily subjective because it takes into account a lawyer’s value judgments.



Even a cursory look at the American broadband market shows that it is complex. The website of the National Broadband Map has a wealth of data about broadband in the USA, including breakdowns of providers by zip code. As of December 2012 there were 2,083 broadband providers, of which 1,618 offered basic broadband speeds of 3 Mbps, 1,018 offered speeds of 6 Mbps, and 200 offered 100-megabit connections.[1] Check out the report of the number of providers by speed tier. The idea of a duopoly is thus hard to prove by the numbers. Further, it is hard to call broadband a homogeneous product when it is delivered over at least five different network technologies, appears in many tiers, and is packaged in a variety of ways based on the user.



Whether it is difficult to enter or exit the broadband market may depend on the municipality, and to be sure, broadband providers are regulated. As for information about broadband providers, it gets better all the time. Not only is the National Broadband Map a useful tool, there are many websites about pricing and consumer reviews. Needless to say, consumers also have many outlets for information on broadband from the providers themselves.



Crawford is one legal scholar who believes the duopoly thesis, but others don’t. University of Pennsylvania Law School professor Christopher Yoo observed on Jerry Brito’s podcast, “There has never been more competition in the cable industry than today.” His book The Dynamic Internet: How Technology, Users and Businesses Are Transforming the Network makes a compelling case about the complex internet ecosystem and how a myriad of actors create the market. No one company or industry emerges dominant.



Intermodal Competition



Crawford doesn’t believe there is competition between different types of networks, but the Organisation for Economic Co-operation and Development (OECD) ranks the US #3 in the world for intermodal competition. Even though the US is a member of the OECD, it would be a stretch to call it a US-centric body. The group is based in Paris, coordinates with the G-20, and has data collection and analysis as its primary activities. Intermodal competition means that a consumer has a variety of networks to choose from: DSL, cable, mobile, satellite, and Wi-Fi.



The idea of a cable/mobile duopoly would mean that there are only two networks, each with just two firms: Comcast and TimeWarner for cable, and AT&T and Verizon for mobile. It’s difficult for me to swallow this notion because when I visit or live in the US these are not my providers, and further, I know many people in different parts of the US who have other providers.



Personally I use a 4G mobile dongle as my broadband connection, and it accommodates fast downloading and uploading of video. In spite of ample wire line infrastructure in Denmark, already 7% of the population uses mobile as its primary source of broadband. In fact, I have never in my life subscribed to cable; there are just too many books to read. Russ Roberts of the Mercatus Center noted that he does not subscribe to cable because he and his three sons would spend the entire day watching sports.[2] Many professionals I know don’t have the time to watch long-form video, and for them, DSL does the job. However, I know plenty of people who love cable. Furthermore, I know literate, gainfully employed people who don’t care to spend their lives on the internet, or do so only sparingly. If anything, my sense is that many people are overwhelmed by the choices of broadband networks.



Another wrinkle in the duopoly thesis is that satellite broadband is available to 99% of Americans. This is important technology because much of the country is mountainous and sparsely populated. Crawford scoffs at satellite broadband because it is “generally considered unsuitable for 21st century uses,” but that is perhaps because the only information she cites is eHow.com. For a more thorough discussion, see the 24 customer reviews of ViaSat’s Exede on DSLReports and the reviews of five other satellite broadband companies.



Satellite broadband packages of 5-15 Mbps download start at $40/month.[3] There is a fee for equipment (for example, $10/month), or the equipment can be bought outright. Satellite broadband is more than adequate for web browsing and email, the essential applications for job hunting or finding health information. Granted, it can’t be used to play video games and is probably not the best choice for VoIP applications such as Skype, but people do use satellite broadband to watch Netflix (though satellite does reach data cap limits faster than wire line options). See the demonstration of how fast a 20/6 Mbps satellite broadband connection loads in comparison to a fiber network.
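
To make the data-cap caveat concrete, here is a minimal back-of-the-envelope sketch; the cap size and per-hour streaming rates are illustrative assumptions, not figures from any provider’s plan.

```python
# Back-of-the-envelope estimate of how much streaming fits under a satellite
# data cap. Every number here is an illustrative assumption, not a plan term.

monthly_cap_gb = 15        # assumed monthly data cap for a satellite plan
sd_gb_per_hour = 0.7       # assumed data use for standard-definition streaming
hd_gb_per_hour = 3.0       # assumed data use for high-definition streaming

print(f"SD viewing under the cap: {monthly_cap_gb / sd_gb_per_hour:.0f} hours/month")
print(f"HD viewing under the cap: {monthly_cap_gb / hd_gb_per_hour:.0f} hours/month")
# With these assumptions, roughly 21 hours of SD video or about 5 hours of HD
# fit under the cap, which is why heavy streamers hit satellite limits first.
```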



There are power users of broadband for whom satellite is not the right choice. They play massively multiplayer online games. They run YouTube 24/7. They are active in peer-to-peer file sharing. They have a set of needs, but their needs are not the same as those of grandmothers who use an iPad to play bridge, send email, and check pictures on Facebook. The market should be able to respond to different needs and price points without imposing one standard on everyone.



Many like to go about evaluating competition by counting the number of players in an industry. But if you live in the age of the automobile, it matters little to you that there are 100 horse-and-buggy companies. Indeed, a competition specialist might assert that if antitrust rules are written well enough, there is little need for industrial regulation. This is essentially the criticism of the European market, where DSL makes up 75% of broadband connections. That is the outcome of 28 national telecom regulators counting the number of entrants that get to use the incumbent’s copper wires. If you can get a free ride on infrastructure, there is no need to invest in something different. Cable makes up just 15% of broadband connections in the EU, according to EU Cable, a trade association. The US has more of a balance between DSL and cable technologies, enabling them to compete with one another.[4]



Overall, Europeans regret losing first place in mobile as the USA has taken a quantum leap in LTE. Thus many Europeans are pushing for a digital single market so that their homegrown web companies might better compete against Google, Facebook, Amazon, LinkedIn and the other American broadband-based companies that dominate the European landscape.  It’s for this reason the EU Vice-President wants to allow European cable and telco companies to merge, so they don’t have the inefficiencies of operating with the individual rules of 27 different countries.



Wired vs. Wireless



Crawford insists that every American should have two broadband subscriptions, one wired and one wireless. She fails to realize that many of us don’t want or need both. The Progressive Policy Institute published a report by Clinton Administration economist Everett Ehrlich called Shaping the Digital Age: A Progressive Broadband Agenda, which notes: “Thus, while activists claim that only a high-speed, wireline connection will suffice, consumers are moving in an entirely different direction, toward wireless. They are driven by their own needs and preferences, whether it is because they rent or move, because they prefer mobility and convenience, because they can accomplish whatever tasks they want to do on a mobile system, or for other reasons. Demanding that they have access to a wireline system in the name of ‘competitiveness’ is a waste of resources and an elitist substitution of planners’ preferences for a competitive market.”



Indeed many can do what they need to do on the web with mobile alone, and for them a wireline subscription is needless money spent on amusement.  Over sixty percent of America’s wireline broadband usage goes to entertainment.[5]   Yes, American music and movies are wonderful, but there are only so many hours in a day.  As my parents said to me about declining to upgrade their cable subscription, “We have enough movies.  We would rather play with the grandchildren.” People have different needs which are matched by different networks at different prices.  It’s not my place to tell people what kind of broadband they should have, nor is it Crawford’s.



Finance in the Cable/Telco Industry 



Whether we like it or not, financial markets are a part of the cable/telecom industry. They provide capital for infrastructure, and investors rely on capital gains to fuel their retirement funds. Crawford portrays cable and telecom companies as greedy, but Value Line’s 2013 reports put telecom services in the bottom half of all global industries for return on capital, at 13.66%, placing it just above the publishing industry.



Crawford decries that the cable industry invested some 30% of revenues in 2001 but just 12-14% in recent years. However, this can be explained simply by the fact that 2001 marked a major investment in the DOCSIS upgrade. That year CAPEX was triple the level of 1998. Shifting from analog to digital TV was a game changer for cable. Naturally, after a big shift, these numbers decline in the following years. However, cable CAPEX jumped again in 2006 and has stayed relatively stable since. In general, the cable/telco industry invests 13% of sales in innovation, a higher percentage than other equipment industries.



Crawford’s claim about higher cable ARPU (average revenue per user) can be explained by the fact that cable providers now offer broadband and telephony in addition to pay TV. Focusing only on revenue and not on profit margin or capital expenditure does not tell the whole story. From 1999 to 2009, Comcast’s return on capital tripled, but even then it was just 7%. Yes, a big company will have more revenue and larger dividends, but it will also have more costs.



Crawford further charges that between 2002 and 2012, AT&T’s dividend increased by 64% while Verizon’s grew by 47%. Again, the answer is simple if you look at the history. Between 2000 and 2002 the American carriers divested their assets in Europe and other regions. At that time, there was a different mix of companies (BellSouth, Ameritech, etc.) which later merged into AT&T; Verizon had a similar M&A evolution. When a publicly traded company sells assets, any profits are returned to shareholders. Crawford implies that this money was derived from overcharging American customers, but it was not. It simply reflects corporate finance activity.



Crawford further singles out AT&T and Verizon for neglecting their wires and focusing on the more profitable wireless business. I doubt this because there are too many stakeholders holding AT&T’s feet to the fire, from the FCC to investors to savvy consumers on social media. AT&T closed 2012 with a net profit margin of 5.7%, hardly the stuff of a swindler. Naturally this number fluctuates; in 2011 it was 3.11% and in 2010 it was 15.98%. The point is that a telco has many business lines but only one stock that trades on the exchange. Thus it has an incentive to manage all its business lines well to maximize its profit and share price.



Though there are various metrics to consider, it’s hard to make a case that telecom companies are fleecing their customers when one looks at the profit margins and the fact that investors have many other choices for industries which have higher returns such as software, aerospace, chemicals, pharmaceuticals and so on.  Perhaps most telling is the fact that internet companies whose businesses are built on top of broadband infrastructure (Google, Facebook, Netflix etc) are generally more profitable than the network providers themselves.  Not only do carriers effectively subsidize the leading internet companies with broadband infrastructure and data delivery, they also subsidize equipment such as handsets, modems, set top boxes, satellite dishes and so on.



The Internet Ecosystem



Broadband is not just about networks.  The complex internet value chain includes equipment providers, software providers, device manufacturers, content and application providers, and users.  Carriers are not the only actors, and their decisions are impacted by the participants around them.  We cannot overstate the role of content/application providers and device manufacturers.  In essence, these are the reason people get on the internet in the first place.  Therefore these groups have the ability to drive major economic change and innovation with their offerings.



While it can make for a good yarn that there is a cable/mobile duopoly, the reality is that the internet value chain is too complex for any one or two players to exert extraordinary control. To focus on one or two actors in the value chain is a static and monolithic analysis. It does not allow for the inevitable change and evolution that happen quickly on the internet. I suppose Crawford would have sacrificed some of her book’s sensational appeal by allowing for greys instead of black and white. The novelist has this license, not the social scientist.



The other economic force in the broadband marketplace is over-the-top competition, the services on the internet itself that compete with the carriers, such as Skype, WhatsApp and Netflix. Skype managed to disrupt the global market for long distance. WhatsApp caused SMS revenue to plummet 40% for some carriers. Even Google’s Chromecast, a dongle that streams YouTube and Netflix to a TV, offers consumers a cable-free existence for only $35. These competitive forces change the economics for networks and render the duopoly thesis even less valid. It’s not the number of players that creates competition, but the technological development.



These blog posts have reviewed America’s broadband in relation to the rest of the world, competing broadband technologies for the future, and whether certain firms exert extraordinary influence on the broadband marketplace.  With data from the OECD, FCC, Akamai  and my university, interviews with Americans, and my personal experience living in a variety of countries, I conclude that the American broadband market is competitive and robust.  Living abroad one comes to appreciate all the good things about America; broadband and the vast economy it enables are two of them.  If this is what a legal scholar considers duopoly, then I would like some more.



The final part of this series investigates a digital literacy program and how it can help those without an internet connection get online. It also addresses to what extent cost is a barrier to internet access.









[1] http://www.ntia.doc.gov/print/blog/20...




[2] http://www.econtalk.org/archives/2012/09/paul_tough_on_h.html




[3] http://www.hughesnet.com/index.cfm?page=Plans-Pricing#gen4s




[4] http://www.leichtmanresearch.com/pres...




[5] See Sandvine’s Global Internet Phenomena Report









August 6, 2013

Planning for Hypothetical Horribles in Tech Policy Debates

In a recent essay here, “On the Line between Technology Ethics vs. Technology Policy,” I made the argument that “We cannot possibly plan for all the ‘bad butterfly-effects’ that might occur, and attempts to do so will result in significant sacrifices in terms of social and economic liberty.” It was a response to a problem I see at work in many tech policy debates today: With increasing regularity, scholars, activists, and policymakers are conjuring up a seemingly endless parade of horribles that will befall humanity unless “steps are taken” to preemptively head off all the hypothetical harms they can imagine. (This week’s latest examples involve the two hottest technopanic topics du jour: the Internet of Things and commercial delivery drones. Fear and loathing, and plenty of “threat inflation,” are on vivid display.)



I’ve written about this phenomenon at even greater length in my recent law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as in two lengthy blog posts asking the questions, “Who Really Believes in ‘Permissionless Innovation’?” and “What Does It Mean to ‘Have a Conversation’ about a New Technology?” The key point I try to get across in those essays is that letting such “precautionary principle” thinking guide policy poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity. If public policy is guided at every turn by the precautionary mindset then innovation becomes impossible because of fear of the unknown; hypothetical worst-case scenarios trump all other considerations. Social learning and economic opportunities become far less likely under such a regime. In practical terms, it means fewer services, lower quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living.



Indeed, if we live in constant fear of the future and become paralyzed by every boogeyman scenario that our creative little heads can conjure up, then we’re bound to end up looking as silly as this classic 2005 parody from The Onion, “Everything That Can Go Wrong Listed.” It joked that “A worldwide consortium of scientists, mathematicians, and philosophers is nearing the completion of the ambitious, decade-long project of cataloging everything that can go wrong.” The goal of the project was to create a “catalog of every possible unfortunate scenario” such that “every hazardous possibility will be known to man.” Here was the hilarious fake snippet of the imaginary page 55,623 of the project:



[Image: snippet of The Onion’s list of everything that can go wrong]



I loved the story’s concluding quote from obviously fake Popular Science writer Brian Dyce, who said:



“Within a decade, laypeople might be able to log onto the Internet or go to their public library and consult volumes listing the myriad things that could go wrong,” Dyce said. “It could prove a very valuable research tool or preventative stopgap. For example, if you’re shopping for a car, you can prepare yourself by boning up on the 98,627 bad things that could happen during the purchasing process. This project could have deep repercussions on the way people make decisions, and also the amount of time they spend locked in their bedrooms.”


So, in the spirit of keeping people locked in their bedrooms, cowering in fear of hypothetical horribles, I have started a list of things we must all live in fear of and plan for! (I actually pulled most of these from articles and essays in my Evernote files that I tagged with the words “fear,” “panic,” and “dread.” I have collected more things than I can count.) Anyway, please feel free to add your own suggestions down below in the comments.




Without beefed-up cybersecurity regulations, we’ll face an “electronic Pearl Harbor.”
Without pervasive NSA & law enforcement snooping, we face “the next 9/11.”
An unfiltered Internet experience will lead the next generation to become nymphomaniacs and sex-starved freaks.
Social networking sites are a “predators’ playground” where sex perverts prey on children.
Twitter and texting will lead to the end of reading and/or long-form writing.
Personalized digital services will lead to an online echo-chamber (“filter bubbles”) and potentially even the death of deliberative democracy.
Robots are going to take all our jobs and then turn us into their slaves.
3D printing will destroy manufacturing jobs and innovation.
Strong crypto will just let the bad guys hide their secrets and nefarious plots from us.
Bitcoin will just lead to every teenager buying illegal drugs online.
Hackers will hijack my car’s electronic systems and force it to drive off a bridge with me inside.
Hackers are just going to remotely hack all those new medical devices I might use and give me a heart attack or aneurism.
Hackers are just going to remotely hack my home and all its “smart devices” and then shut down all my stuff or spy on me.
Geolocation technology is only going to empower perverts and stalkers to harass women.
Targeted online ads just brainwash us into buying things we don’t need and will lead to massive discrimination.
Big Data and the “quantified self” movement are just going to lead to massive social and economic discrimination.
Violent video games are teaching our kids to be killers and will lead to a massive spike in murders and violent crime.
Facebook is a “monopoly” and “public utility” from which there is no escape if you want to have an online existence.
Google Glass will mean everybody will just take pictures of me naked in the gym locker room.
Wearable technology will lead to a massive peer-to-peer Panopticon.
Commercial drones are going to fall from the sky and kill us (if they don’t zap us with lasers or death rays first).


Hey, it could all happen, right?!  Therefore, as The Onion proposed, we must “catalog every possible unfortunate scenario” such that “every hazardous possibility will be known to man” and then plan, plan, PLAN, P-L-A-N accordingly!



Alternatively, we could realize that, again and again, humans have shown the remarkable ability to gradually adapt to new technologies and assimilate them into their lives through trial-and-error experimentation, the evolution of norms, and the development of coping mechanisms. It’s called resiliency. It happens. We live, we learn, we move on.





Is fiber to the home (FTTH) the network of the future, or are there competing technologies?

This is the second of a series of three blog posts about broadband in America in response to Susan Crawford’s book Captive Audience and her recent blog post responding to positive assessments of America’s broadband marketplace in the New York Times. Read the first post here. This post addresses Crawford’s claim that every American needs fiber, regardless of the cost, and that government should manage the rollout.



It is important to point out that fiber is present in almost all broadband technologies and has been for years. Not only are backbones built with fiber, but there is fiber to the mobile base station and fiber in cable and DSL networks. In fact, American carriers are already some of the world’s biggest buyers of fiber. They made their largest purchase to date in 2011, some 18 million miles of fiber optic cable. In the last few years American firms bought more fiber optic cable than all of Europe combined.[1]



The debate is about a broadband technology called fiber to the home (FTTH). The question is whether and how to pay for fiber from the existing infrastructure—from the curb into the house itself, as it were. Typically it’s the last part of the journey that is expensive, given the need to secure rights of way, eminent domain, labor costs, trenching, indoor wiring, and repairs. Subscribers should have a say in whether the cost and disruption are warranted by the price and performance. There is also a question of whether the technology is so essential and proven that the government should pay for it outright, or mandate that carriers provide it.



Fiber in the corporate setting is a different discussion. Many companies use private fiber networks. The fact that a company or large office building concentrates many subscribers paying higher fees has helped fiber grow as the enterprise broadband choice. Households don’t have the same economics.



There is no doubt that FTTH is a cool technology, but love of a particular technology should not blind one to the economics. After some brief background, this blog post will investigate fiber from three perspectives: (1) the bandwidth requirements of web applications, (2) the cost of deployment, and (3) substitutes and alternatives. Finally it discusses the notion of fiber as future proof.



Broadband Subscriptions in the OECD



By way of background, the OECD Broadband Portal[2] report from December 2012 notes that the US has 90 million fixed (wired) connections, more than a quarter of the total (327 million) for the 34 nations in the study. On the mobile side, Americans have three times as many mobile broadband subscriptions as fixed. The 280 million mobile broadband subscriptions held by Americans account for roughly 35% of the total 780 million mobile subscriptions in the OECD. These are the smartphones and devices which Americans use to connect to the internet.





Ordinary feature phones are additional and not included in this number. The report notes that FTTH accounts for 7.36% of America’s total fixed broadband subscriptions, about 6.6 million subscriptions. The US falls in the middle of the distribution of fiber penetration in the OECD. The average penetration is 14.88%, but when one removes Japan and South Korea, which have over 60% fiber, the average falls to 8.63%. Germany, an advanced industrial nation, has less than 1% fiber penetration. Israel has zero.



It is also important to note that the Netherlands and Belgium have less fiber penetration than the US. These two nations are ranked #1 and #2 by the OECD for intermodal competition, as more than 90% of homes have access to both a high-speed DSL and a cable connection. The US ranks #3 because of the diversity of its network types: cable, DSL, mobile, fiber, and satellite.



Advocates of any particular broadband technology often like to argue that broadband will increase economic growth and that nations can compete on broadband alone. The reality is more complex; broadband is only a single input to a complex economy, like the level of literacy. Each country has a particular set of industries and policies, and their effectiveness in applying broadband can vary for many reasons. Accordingly, the OECD reports only a 0.64 correlation between broadband growth and GDP per capita, a mild correlation.
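
For readers who want a feel for what a correlation of that size means, here is a minimal sketch that computes a Pearson correlation on made-up country data; the data points are purely hypothetical and only illustrate why a coefficient well short of 1 counts as a mild relationship.

```python
# Pearson correlation on hypothetical (fiber penetration, GDP per capita) pairs.
# The data points are invented; only the method mirrors the kind of
# cross-country comparison the OECD reports.
from statistics import mean, pstdev

fiber_share = [1, 5, 10, 15, 25, 40, 55, 65]       # % of broadband lines that are fiber
gdp_per_capita = [38, 52, 40, 47, 55, 46, 58, 50]  # thousands of USD (invented)

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

print(f"correlation: {pearson(fiber_share, gdp_per_capita):.2f}")
# Prints a moderate positive value well short of 1.0: fiber penetration tracks
# income only loosely, because broadband is just one input among many.
```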



For this reason we should pause before investing more in FTTH, the most expensive broadband technology. See the following OECD chart. Only Switzerland, Norway, and Luxembourg have higher GDP per capita than the US, and of these countries, only Norway has higher fiber penetration than the US. More telling is that Japan and South Korea, with their high fiber penetration, have GDP per capita roughly a third lower than the US. See the graph from the OECD report 1k, Broadband penetration and GDP (Dec. 2012).



[Chart: Broadband penetration and GDP (OECD, Dec. 2012)]



Bandwidth Requirements of Web Applications



My institute, the Center for Communication, Media and Information Studies at Aalborg University in Copenhagen, published a report about broadband needs in 2020. It includes some scenarios for a family of four, covering both extreme and normal usage. In the extreme example, each family member is in the midst of a bandwidth-heavy activity: Mom is on a video conference, the daughter is watching HDTV, and the son is playing a video game. The bandwidth needs for this scenario are 40-130 Mbps download and 10 Mbps upload. For the “normal” scenario the recommendation is 30-70 Mbps download and 10 Mbps upload.
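
As a rough sanity check on those scenarios, here is a minimal sketch that adds up per-activity bandwidth for a busy household; the per-activity figures are illustrative assumptions in the spirit of the report, not numbers taken from it.

```python
# Add up concurrent per-activity bandwidth to estimate a household's peak need.
# The per-activity figures below are illustrative assumptions, not data from
# the report cited in the text.

activities_mbps = {
    "video conference (Mom)": 10,
    "HDTV stream (daughter)": 15,
    "online game (son)": 5,
    "cloud sync and web browsing": 10,
}

peak_demand = sum(activities_mbps.values())
print(f"Concurrent peak demand: {peak_demand} Mbps")
# Even with everything running at once, this household lands around 40 Mbps,
# near the bottom of the report's 40-130 Mbps "extreme" range and far below
# the 1,000 Mbps (1 gigabit) that FTTH offers.
```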



While these scenarios are interesting, they fall well under the 1,000 Mbps (1 gigabit) threshold that FTTH offers. They also require that the family upgrade to some serious hardware, including devices that can properly render HDTV and 3DTV. Moore’s Law has helped the price of hardware decline tremendously, but such a television still costs a few hundred, if not a few thousand, dollars. Furthermore, the scenarios are less applicable to the fastest growing household segment in the US, the single person living alone, for whom a Wi-Fi network in a public place may be a welcome complement to broadband at home.



The single largest source of traffic on American wire line networks today is Netflix.[3] The company has some 29 million subscribers[4] in the US and appears in roughly every third American home. Crawford’s book provides an example of the performance reports that Netflix publishes on how well its service runs on different networks, noting that 2.5 Mbps is sufficient for a high quality experience. Additionally, Netflix is constantly making its service more efficient, and it has developed its own content delivery network to cache content and speed it to users. As for the leading websites, Google, Facebook, YouTube and Amazon, they want exposure to be as wide as possible, so they are not necessarily trying to make their applications more bandwidth intensive. Even YouTube, which takes up a disproportionate share of network traffic, continues to make its platform leaner.



Bandwidth needs for education



Crawford asserts that without FTTH we will not be able to take advantage of important applications in education and health. Let us review some of the leading modalities for online education and their bandwidth requirements. The most bandwidth intensive modality is the massive open online course (MOOC). MOOCs have been available on existing networks for years from many of America’s leading universities as well as some startup ventures. Many enjoy MOOCs for the ability to learn about a wide range of subjects. Some education experts, however, find MOOCs less than ideal. They see MOOCs as an extrapolation of a large classroom without individualized attention and note that the format works well for some kinds of learners but not others.



The adaptive electronic textbook may be a format better suited to students’ needs. It is an ebook with interactive features as well as content that adjusts based upon the student’s level. As textbooks can be downloaded or offered in chapters, they need not be high-bandwidth applications. As for other modalities such as games, online social learning, tablets, and independent certification, there is nothing inherent in them that requires FTTH. It depends on design parameters, and all of these modalities are alive on today’s networks.



The extent to which students use video, and in what framework, is an important question. The flipped classroom model is one in which students watch lectures on their own (as in a MOOC) and do homework during class. The student and teacher may meet in a video conference, but they may opt for mobile or VoIP as well. Skype suggests 1.5 Mbps down/up for a high definition one-to-one video call and 4 Mbps down / 512 kbps up for a high definition group video call with five people. Again, these requirements are well within the capacity of today’s networks.



The promise of online education is about more than a pipe. The point is not just to send canned high definition videos across the wires, but rather to provide intelligent customization to each student. The greater part of the value and the engineering need is upstream in the algorithms, less in the network delivery itself. There is nothing inherent in online education that requires FTTH. Indeed, if the job is to educate millions, having light, low-bandwidth applications improves the efficacy of the business model.



Bandwidth needs for health



The Norwegian Centre for Integrated Care and Telemedicine, the world’s oldest and leading institute for telemedicine, notes that most applications run fine on average broadband levels (for example, video consultation), and even the most advanced app would require no more than 10 Mbps[5]. Indeed the limiting factor for telemedicine is not broadband deployment but rather health care providers who are resistant to change.  The other requirements for telemedicine are mobile networks and devices, so investing exclusively in wire line networks is not necessarily an enabler for telemedicine.



Bandwidth needs for entertainment



While education or telemedicine may not require large amounts of bandwidth, ever increasing high definition entertainment could consume much bandwidth. Games and movies on HDTV and 3DTV are the killer apps for FTTH. Consider that 60% of traffic on American networks is entertainment.[6] To be sure, FTTH can facilitate rich entertainment experiences. However, I can’t find good arguments for why taxes should subsidize FTTH if the key use is entertainment. Furthermore, it is not clear how to avoid the unintended consequence of subsidizing piracy by subsidizing FTTH. While online video platforms such as Netflix have a powerful effect in lessening piracy—people don’t trouble to pirate movies if they can get them at a good price—for the most hard core pirates, bandwidth is a boon to their activity.



Innovation and Mobile Broadband



In spite of the assertions that FTTH is essential for future applications in education and telemedicine, the greater part of experimentation and implementation is on mobile networks. This is true not just in the USA but around the world. For example, I study with a dozen PhD students from Ghana. They are engaged in knowledge transfer from Denmark to Ghana in some of the most exciting applications of mobile technologies: intelligent transportation systems, education, social networking, security, banking, and so on. Additionally, some Indian colleagues are working on low-bandwidth video conferencing.



The world is being remade for mobile faster than we can adjust to it. All of the major websites and applications we use today have mobile versions, and those continue to improve with better usability and more modest bandwidth requirements. That process, along with declining prices for mobile devices, is narrowing the digital divide. Even internet companies such as Yahoo! are remaking themselves to be mobile first. Application developers have mobile in mind when designing for the web. Google excelled at advertising on its search engine for the PC; it has since reformulated that model for mobile.



Cost of Deployment



Crawford notes that America doesn’t have a plan for fiber and that European and Asian nations are marching ahead. The fact of the matter is that the EU government does not have a plan for fiber either. The sources that Crawford provides are from Europe’s Fiber to the Home Council, a trade association that lobbies the EU government for fiber subsidies. The EU government has the wisdom to maintain a technology-neutral policy on broadband. Thankfully this is also the case in the US.



Crawford attempts to shame the US by mentioning the fiber build-out in Bulgaria, Moldova, and the Baltics. It is important to understand the history of these former Eastern Bloc countries. When communism fell, they were two generations behind in telecommunications. Carriers invested heavily in both fiber and mobile networks to help these countries leapfrog to the modern era. The leading broadband-based company of this region is Skype in Estonia. While this is notable, there is still a brain drain from this region to other parts of Europe and the world where there are better education and job opportunities. I visited the region in 2012, and it is clear to me that it will take more than FTTH to lift these countries out of the past.



In Denmark in 2005, 14 local utility cooperatives attempted to create their own fiber networks, arguing that there is little difference between bringing fiber or electricity to homes. Their business case never worked because the price of broadband on other networks plummeted. Today, fewer than 240,000 Danish homes subscribe to these fiber networks, a number that’s small even for Denmark.  This case demonstrates the danger of considering broadband as a utility akin to electric service when broadband services – and needs – are so diverse.  Norway has a similar story.



Naturally I am keen to see how things fare next door in Sweden, where the government has made huge investments in FTTH. A series of reports from Acrea, a Swedish government-owned consulting firm, concludes, “It is difficult to estimate the value of FTTH for end users in dollars and some of the effects may show up later.” They note positive but weak outcomes. However, those results may be even less strong when adjusted for the government’s devaluation of the Swedish currency. As such, Danes are lucky that no new taxes were levied to pay for broadband, nor were citizens made to bear the brunt of private investments that didn’t work out. Nearly 100% of broadband investment in Denmark is private. Carriers, not citizens, bear the risks.



The OECD reports that more than 60% of Japan’s and South Korea’s broadband subscriptions are fiber. What many overlook about these countries, however, are the important political, cultural, and historical factors that allowed them to deploy fiber. Compared to the USA, these countries have more collectivist societies and cozier relationships between business and government. While the zaibatsu and keiretsu systems no longer exist in name, both governments want to ensure that their incumbent telecom companies survive, and business plans of 200 years or more are not uncommon. Thus any national fiber plan is certainly good for the incumbents.



I suspect that most Americans would not be keen about a national FTTH plan that expressly rewarded AT&T, Verizon, Comcast or TimeWarner.  Indeed Americans value the more decentralized nature of their government where communities have more flexibility to determine their broadband needs.



The greater metropolitan areas of Seoul and New York City have roughly the same population, but Seoul is eight times as dense as New York. This is an important fact whether the government or a private company is bearing the risk of investing in FTTH. The Japanese improved their case for fiber by running wires above ground, similar to the telephone lines of old. This certainly helped to lower the deployment cost, as did the fact that most people live in apartments. Carriers were responsible for the cost of fiber to the building; landlords were required by law to take it over from there. Interestingly, many Japanese youth are quitting fiber for LTE-only broadband plans.[7]



We need not go abroad, however, to evaluate the business case for fiber. There are important examples in the US. Chattanooga, TN has a municipal fiber project with some controversy. There may be different interpretations of how successful this project is, but the limiting factor is that not every municipality can get a $100 million grant from the Department of Energy.



Plenty has been written about Google Fiber and the various concessions made by the Kansas City government to win the project. Recall as well that the $300 subscriber sign-up fee had to be nixed in order for the project to get off the ground, showing that consumers balked even at a small fee relative to the life of the subscription. In the case of Provo, UT, Google took over a municipal fiber network for $1 after $39 million had been sunk into the project. The network had been financed by a $5.35 monthly fee levied on all households in the town, whether they subscribed or not. Now that Google has taken over the network, only subscribers will pay, but if it doesn’t work out for Google, it can sell the network back to Provo for $1. It is interesting to note that Mountain View, CA, where Google is based, declined to make concessions for the company to build a fiber network.



Business model for broadband networks of the future



There is no doubt that FTTH can enable rich video and entertainment experiences, but the applications needed for education and telemedicine don’t require the gigabit speeds that fiber provides. Even with our knowledge of future scenarios, there are still many important and unanswered ethnographic questions about how people will use networks. Future proofing may make sense theoretically, but there is no reliable empirical or mathematical model for it. Many of the companies and governments that invested in FTTH as a future-proofing strategy found that their models didn’t work out.



The idea of throwing the baby out with the bathwater, getting rid of all of America’s networks and starting over with FTTH, is overkill. Certainly there are ways we can make broadband network deployment more economical, such as improving the permitting process with local governments. A case in point is New York City. It can be difficult to get permission to dig up the streets, and people working in buildings don’t enjoy having the walls ripped out. To be sure, these disruptions can be streamlined. Verizon would like to add more fiber, but the conduits are already full of copper wire, and by law Verizon is required to maintain this infrastructure.



Google Fiber in Kansas City proved that lifting restrictions can translate into more investment. Standardizing the rules for infrastructure rollout so that carriers don’t have to negotiate with each and every town and landlord would go a long way, as would improving the regime for cable franchising. Another area for reform is spectrum. And there is no doubt that companies will continue to innovate, whether it is DSL companies transitioning to IP switches, cable companies upgrading to DOCSIS 3.0, or R&D in mobile. Ericsson, NSN, Alcatel-Lucent, and Qualcomm are just a few of the companies working on 5G standards for mobile, technologies that could download an entire movie in a minute.



The fact of the matter is that many technologies are competing to be the network of the future. We should encourage this competition; consumers only benefit from this dynamic interplay. As for the US having below-average FTTH penetration: if the argument for a nationwide FTTH rollout is economic development, it seems prudent that the US has not invested more, given that the global data do not show that countries necessarily improve their GDP by investing in FTTH. Maybe that will change in the future, but that is what the data show today.



People can fall in love with a technology and become blind to its shortcomings.  Thus we need to be careful about these silver bullet solutions, such as FTTH for everyone. There are many things to consider: the speeds of applications, the needs of users, the costs of deployment, and the price of substitutes.  Broadband at any cost is not a worthwhile investment.  If Americans can get access to the bandwidth they want at a fair price, they will care very little what kind of network it is.   The next blog post investigates whether there is a cable/mobile duopoly in broadband.







[1] CRU International Ltd, CRU Monitor: Optical Fibre and Fibre Optic Cable (London, September 2012), http://www.crugroup.com.
[2] Please see the reports titled 1c. Total fixed and wireless broadband subscriptions by country (Dec. 2012) and 1l. Percentage of fibre connections in total broadband (Dec. 2012)
[3] See Sandvine’s Global Internet Phenomena Report
[4] http://files.shareholder.com/downloads/NFLX/2622675023x0x678215/a9076739-bc08-421e-8dba-52570f4e489e/Q213%20Investor%20Letter.pdf
[5] Interview with Sture Pettersen, Department Leader for Innovation and Implementation, Norwegian Center for Telemedicine.  February 21, 2013.
[6] http://www.sandvine.com/downloads/doc...
[7] http://gigaom.com/2012/11/21/japanese...




August 5, 2013

Do Europeans and East Asians have better and cheaper broadband than Americans?

I am an American earning an industrial PhD in internet economics in Denmark, one of the countries that law professor Susan Crawford praises in her book Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age. The crise du jour in America today is broadband, and Susan Crawford is echoed by journalists David Cay Johnston, David Carr, John Judis, and Eduardo Porter and by publications such as the New York Times, New Republic, Wired, Bloomberg News, and the Huffington Post. It has become fashionable to write that American broadband internet is slow and expensive and that cable and telecom companies are holding back the future—even though the data shows otherwise. We can count on the “America is falling behind” genre of business literature to keep us in a state of alert while it ensures a steady stream of book sales and traffic to news websites.



After six months of pro-Crawford coverage, the New York Times finally published two op-eds[1] which offered a counter view to the “America is falling behind in broadband” mantra. Crawford complained about this in Salon.com and posted a 23-page blog post on the Roosevelt Institute website to present “the facts,” but she didn’t mention that the New York Times printed two of her op-eds and featured her in two interviews promoting her book. I read Crawford’s book closely, as well as her long blog post, including the references she provides. I address Crawford’s charges as questions in three blog posts.




Do Europeans and East Asians have better and cheaper broadband than Americans?
Is fiber to the home the network of the future, or are there competing technologies?
Is there a cable/mobile duopoly in broadband?


For additional critique of the “America is falling behind in broadband” myth, see my 10 Myths and Realities of Broadband. See also the response of one of the op-ed authors whom Crawford criticizes.



 



How the broadband myth got started



Crawford’s book quotes a statistic from Akamai in 2009. That year was the nadir of the average measured connection speed for the US, placing it at #22 and falling. Certainly presenting the number at its worst point strengthens Crawford’s case for slow speeds. However, Akamai’s State of the Internet report is released quarterly, so there should have been no problem for Crawford to include a more recent figure in time for her book’s publication in December 2012. Presently the US ranks #9 on the same measure. Clearly the US is not falling behind if its ranking for average measured speed has steadily climbed from 22nd to 9th.





Crawford notes on her blog, “Tussling over contestable rankings is not a good use of our time,” and then proceeds to list the rankings of the US from a number of content delivery networks. She does not explain, however, the implication of this measurement. Akamai is the world’s largest content delivery network, speeding over one-third of all the content on the web and capturing 1 billion IP addresses per day. It is the most reliable longitudinal measure, but its methodology should be clarified.



Akamai measures speeds in a way similar to how cars are clocked on a freeway. A radar gun can measure the speed of a car at a given moment, say 50 mph, but that same car could be going 100 mph or 25 mph at another time; the reading is just what is captured at the moment of measurement. As for broadband, there may be a 100 Mbps connection to a person’s home, but if that subscriber only signs up for 5 Mbps, Akamai will only report 5 Mbps. As a matter of fact, the Akamai Q1 2013 report shows Washington DC, Vermont, and New Jersey with higher average peak speeds than South Korea, the #1 country. For an in-depth discussion of broadband statistics, see The Whole Picture: Where America’s Broadband Networks Really Stand from the Information Technology and Innovation Foundation.
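
The point that measured speed reflects what people subscribe to rather than what the network could deliver can be illustrated with a toy calculation; the subscriber mix below is invented, and the logic is a simplified stand-in for, not a description of, Akamai’s actual methodology.

```python
# Toy illustration of why an "average measured speed" tracks the tiers people
# buy rather than the capacity available on the line. The subscriber mix is
# invented; this is a simplified stand-in, not Akamai's actual methodology.
from statistics import mean

line_capacity_mbps = 100                            # assume every home is passed by 100 Mbps
subscribed_tiers = [5, 5, 10, 10, 25, 25, 50, 100]  # invented mix of purchased tiers

# Observed throughput can never exceed the purchased tier, whatever the line can do.
measured = [min(tier, line_capacity_mbps) for tier in subscribed_tiers]

print(f"Capacity available per line: {line_capacity_mbps} Mbps")
print(f"Average measured speed:      {mean(measured):.1f} Mbps")
# The average comes out near 29 Mbps even though every line could carry
# 100 Mbps, so rankings based on measured speed partly reflect consumer choice.
```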



Incidentally recent reports from both the Federal Communications Commission[2] and the White House (Office of Science & Technology Policy and the National Economic Council) contradict the dour picture critics paint about American broadband. See the report Four Years of Broadband Growth.



Don’t Europeans and East Asians have better and cheaper broadband than Americans?



This is the wrong question. The question we should ask instead is how well nations have applied broadband technologies to improve their economies and standards of living. With all this discussion about speed, some consider ultra-high-speed wired broadband for its own sake, as an end in itself. But bandwidth alone does not an economy make. Instead we need to envision broadband as an important input to the information economy ecosystem.



Citing a report from the New America Foundation, Crawford asserts that Americans pay “three or four times” more for the same download services as in other countries. The fact of the matter is that I can find broadband prices both higher and lower around the world. The website of the leading Danish broadband provider TDC offers a package of 24 channels, 20 Mbps broadband, and either fixed telephony or 4 hours of mobile telephony for 414 DKK per month ($58.73 + 25% tax = $73.41). There is a one-time fee of 399 DKK ($70 + 25% tax = $88). A similar monthly package goes for $60-$70 in the US, and the next level package of 50 Mbps is $80. So in this example, broadband in Denmark is only slightly more expensive, depending on the local tax, than in the US. Indeed the OECD points to Spain and Norway as some of the most expensive countries for broadband. Keep in mind as well that most of Denmark’s residents live in the major cities, in apartments or in houses more closely packed than a typical American suburb, which also explains some of the price difference.



To be sure, we can find countries where broadband may be less expensive, but where gasoline costs four times as much. Local conditions and taxation change the price. For this reason economists use a basket of goods and services when evaluating consumer prices. The market price of broadband in two countries may not reflect the same inputs. The price can vary for many reasons, including the network type, the network speed, the type of subscriber (individual, business, etc.), whether the item is sold in a bundle, whether the subscriber has a certain exemption, taxes, and other factors such as geography, density, and so on. Economists and financial analysts who study prices build complex, dynamic models to reflect these factors.



The OECD provides the most comprehensive global information on broadband prices, but it relies on national governments to provide the data rather than collecting it directly from retailers or websites. The challenge of determining “the facts” is exacerbated by competing sources. The OECD’s Broadband Portal offers a wealth of data on many broadband measurements, and the new OECD Communications Outlook, published in July 2013, has the most recent global comparison of prices.



Other positive information about the US appears in the most recent OECD report. It notes the decline in the price per megabit per second of advertised speed. The 2011 report showed that the US ranged between $1.10 and $71.49; that range has fallen to $0.53-$41.70 in the 2013 report. That translates to roughly a 52% improvement at the low end and a 42% improvement at the high end. Some countries have lower prices, but the decline in the price for the US shows that things are getting better, not worse, for broadband.
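
For readers checking the arithmetic, the improvement percentages follow directly from the two quoted ranges; the minimal sketch below simply restates that calculation.

```python
# Percent decline in the US price per advertised Mbps between the 2011 and
# 2013 OECD reports, using the ranges quoted above.

def pct_decline(old, new):
    return (old - new) / old * 100

low_2011, high_2011 = 1.10, 71.49
low_2013, high_2013 = 0.53, 41.70

print(f"Low end:  {pct_decline(low_2011, low_2013):.0f}% cheaper")    # ~52%
print(f"High end: {pct_decline(high_2011, high_2013):.0f}% cheaper")  # ~42%
```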



Let’s look at the mobile example. Using information from Bernstein Research and analyst Craig Moffett, Crawford asserts that mobile prices are too high. On this point, one should defer to the GSMA, the global trade association for the mobile industry. Its report on mobile in the US and Europe notes that yes, Americans do pay more for mobile than Europeans ($69 vs. $38 for an average monthly subscription), but Americans use five times more voice and twice as much data. From a G-20 perspective the OECD notes that “Given that mobile broadband constitutes a relatively new market compared to fixed broadband, there tends to be greater experimentation in wireless markets. Moreover, the evolution of the smartphone ecosystem has resulted in a complex array of stakeholders who determine these prices.”[3]
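
To see why higher usage changes the comparison, here is a minimal sketch that turns the cited figures into implied relative prices per unit of usage; attributing the whole bill to one service at a time is a crude simplifying assumption, used only to show the direction of the effect.

```python
# Implied US-vs-EU price per unit of usage, based on the figures cited above:
# average bills of $69 (US) and $38 (EU), with US subscribers using roughly
# 5x the voice minutes and 2x the data. Attributing the whole bill to a single
# service at a time is a crude simplification.

us_bill, eu_bill = 69.0, 38.0
voice_ratio, data_ratio = 5.0, 2.0   # US usage relative to EU usage

price_ratio = us_bill / eu_bill                # ~1.82x higher US bill
per_minute_ratio = price_ratio / voice_ratio   # ~0.36x the EU price per minute
per_data_ratio = price_ratio / data_ratio      # ~0.91x the EU price per unit of data

print(f"US bill is {price_ratio:.2f}x the EU bill,")
print(f"but {per_minute_ratio:.2f}x the EU price per voice minute")
print(f"and {per_data_ratio:.2f}x the EU price per unit of data")
```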



Furthermore, Europeans may have lower prices for mobile, but this is because wholesale rates are regulated to be artificially low. European consumers get a low price in the short term, but in the long run they are shortchanged because European carriers haven’t made enough profit to invest in infrastructure. This is an outcome of the “services based competition” model (allowing new entrants to resell the incumbent’s services at a low price), which hasn’t panned out to deliver the infrastructure investments as hoped. Clinton Administration economist Ev Ehrlich, who also studies this issue, described this situation in his Wall Street Journal op-ed The Myth of America’s Inferior Broadband.



Finally, roaming prices are still not harmonized, so when a European travels from one country to another, there are surcharges on calls and SMS. Imagine if you were charged a different rate each time you entered a new state in the US.  Such is the case in Europe.



The OECD’s updated pricing information, published in July 2013, notes that entry-level prices should be no more than $30 at purchasing power parity. The American entry-level monthly price is $27. Thereafter, if people want faster speeds, they pay for them. That is only fair. This means that for as little as $27, people can be assured of the bandwidth to do essential email and web browsing for job applications, online banking, and so forth. A forthcoming blog post will investigate the issue of low-income Americans for whom $27 is too much.



As for people who pay $100 or more per month for broadband, which amounts to the cost of a daily visit to Starbucks, that number should be put into perspective by measuring it against the cost of purchasing the same content and communication services piecemeal. One would have to add up the price of all the newspaper subscriptions, the movie tickets, the DVDs, the CDs, and the long distance calls, and then add some kind of premium to cover the applications we have today that never existed before the web. On balance, broadband is a tremendous value in the US. When one sees cheaper prices in other countries, one needs to consider that those citizens often pay for broadband three times: with subscriptions, with rent or homeowners’ fees, and with taxes. The benefit for Americans is that they pay for broadband once, and they pay what it costs.
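
As a back-of-the-envelope illustration of that comparison, here is a minimal sketch that totals up some pre-web substitutes against a $100 broadband bill; every line item is an invented placeholder, not a figure from this post.

```python
# Hypothetical monthly cost of buying similar content and communication
# piecemeal, compared with a $100 broadband bill. Every line item below is an
# invented placeholder, not a figure from the post.

piecemeal_monthly = {
    "newspaper subscriptions": 30,
    "movie tickets and rentals": 40,
    "music (CDs)": 25,
    "long-distance calls": 35,
    "reference works and software": 20,
}

total = sum(piecemeal_monthly.values())
print(f"Piecemeal total:  ${total}/month")   # $150 with these placeholders
print("Broadband bundle: $100/month")
# Under these assumptions the bundle already comes out ahead, before adding a
# premium for services (video calls, maps, social media) with no pre-web analog.
```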



High Speed Broadband Adoption



From the research perspective, the leaders of my institute, the Center for Communication, Media and Information Studies in Copenhagen, published a report titled “Broadband Bandwidths in a 2020 Perspective” reflecting on the needs of developed countries such as the US. Their assessment is that speeds are increasing faster than consumers demand them. The report notes that in Denmark, one of the perennial top-performing countries in the OECD for broadband adoption, 65% of homes are passed by a broadband technology that can deliver 100 Mbps, but only 0.7% subscribe to the fastest tier.[4] Danes can get what they need from lesser speeds, and the price of the faster service is not justified from their perspective.



Even during the financial crisis in 2009, the supposed low point for American broadband speed, activity on America’s broadband networks was in full swing. At the time I worked for a web analytics software company in Silicon Valley. Our software was deployed on over 2,000 enterprise websites visited by millions of Americans every day. These websites ran the gamut: ecommerce, news, banking, education, student financial aid applications, B2B, video-embedded media, and so on. Never once did our customers complain that there was not sufficient broadband for end users’ needs, that they were missing out on customers because of a lack of broadband access, or that speeds were too slow. On the contrary, broadband had enabled new markets, and these companies wanted to deploy every additional advantage, including search marketing, behavioral targeting, multivariate testing, and so on. Even more impressive was that almost none of these companies were based in major cities or even Silicon Valley. In that way, broadband brought the death of distance.



Broadband and Employment



We can learn a lot about the folly of the broadband-for-its-own-sake mentality from South Korea, #1 in Akamai’s study with an average measured speed of 45 Mbps. Its primary uses of broadband are, by far, video game entertainment for consumers and video conferencing for businesses.  The problem with these two applications is that they drive little revenue relative to the traffic they consume on the web.  Much of real-time entertainment is piracy, and the money in games is largely in the hardware.  As for online gaming, less than 5% of players pay for games.  Video conferencing was thought to be a great revenue opportunity for platform providers, but users are choosing free versions of Skype instead.  So these two endeavors don’t generate the cash flow that creates jobs.



Broadband has enabled some industrial productivity and supports a marginal “Gangnam Style” entertainment economy in South Korea.  It is estimated that the performer Psy made about $8 million from his famous song, including the 1.6 billion YouTube views and the iTunes sales.[5]  Few performers will ever achieve that level of success. His is not a replicable business model, let alone a business case for broadband.  The real money in South Korea’s economy still comes from electronics, automobiles, shipbuilding, semiconductors, steel, and chemicals — the same growth engines from the pre-broadband days. Ditto for Japan and Sweden.



Most important, the national broadband project in South Korea has not yielded the jobs that were expected. Broadband has enabled entertainment but not employment. A new report by the Korea Information Society Development Institute, “A Study on the Impact of New ICT Service and Technology on Employment,” bemoans the situation of “jobless growth.” The government is also concerned about internet addiction, which afflicts some 10 percent of the country’s children aged 10 to 19, who essentially function only in online gaming and not in other areas of society.



Europe also has challenges translating broadband into employment. My colleague at the Ifo Center for the Economics of Education and Innovation in Munich published the results of her econometric study of the impact of broadband internet on employment in 8,460 municipalities in West Germany. Over the last five years the German government has invested €454 million (almost $600 million) to bring broadband to rural areas. Local broadband infrastructure does have an impact on local employment, but the impact is very slight: the study shows that a 10% increase in DSL penetration yields between a 0.03% and 0.16% increase in employment. This research suggests that broadband alone is not enough to stimulate employment. Other factors, such as level of education, professional skills, existing employment opportunities, and the types of extant industries, also play a role.
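To get a feel for what an effect of that size means in absolute numbers, here is a minimal back-of-the-envelope sketch in Python. The employment base of the municipality is a hypothetical assumption chosen for illustration; only the 0.03% to 0.16% range comes from the study.

```python
# Back-of-the-envelope illustration of the German DSL study's effect sizes.
# The employment base below is a hypothetical assumption, not a figure from the study.

employment_base = 10_000                  # assumed number of employed residents in a municipality
effect_low, effect_high = 0.0003, 0.0016  # 0.03%-0.16% employment gain per 10-point rise in DSL penetration

jobs_low = employment_base * effect_low
jobs_high = employment_base * effect_high

print(f"A 10% rise in DSL penetration implies roughly {jobs_low:.0f} to {jobs_high:.0f} "
      f"additional jobs per {employment_base:,} employed residents.")
```

On those assumptions the gain works out to roughly 3 to 16 jobs per 10,000 employed residents, which is why broadband is better seen as a complement to, not a substitute for, the other drivers of employment listed above.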



Broadband and Economic Growth



What is important about broadband is not measuring speeds and counting rank, but turning technology into productive use in the economy. In spite of all of the challenges of broadband, America leads the world in broadband-based industries.  Mary Meeker of Kleiner Perkins Caufield & Byers assessed the world’s top internet companies and found the US an unusually strong performer.  Of the top 25, the US had the most with 14; China had 3; Japan, 2; South Korea, 2; Russia, 2; and the UK and Argentina each had 1.[6]  The point is that the USA, with just a fraction of the world’s internet users and with an outsized investment (one-quarter of the world’s financial outlay on internet infrastructure), has been able to leverage broadband into over $1 trillion of market value in 2013 alone with just 14 companies.  This is a stunning achievement, and it does not even take into account all of the small and medium-sized American companies that would never have existed without broadband.



Given Crawford’s supposition of high prices that limit adoption and force consumers to slower speeds, I conclude the opposite after reviewing the data. Consumers have broadband at all price levels as well as speeds.  The vast economic growth and the transformation of the US from an industrial to an informational economy mean that the US gets a lot of bang for its broadband buck.  Whatever the circumstances, the US has managed to turn broadband to productive use better than other nations.  To be sure, the broadband and economic development equation is complex, but it is not true that Europeans and East Asians have it better when it comes to broadband. The next blog post addresses the question of whether fiber to the home will be the network of the future or whether network technologies will compete.







[1] http://www.nytimes.com/2013/06/16/opinion/sunday/no-country-for-slow-broadband.html?_r=1& and http://www.nytimes.com/2013/06/21/opinion/how-the-us-got-broadband-right.html
[2] http://transition.fcc.gov/Daily_Relea...
[3] http://www.oecd-ilibrary.org/science-...
[4] http://www.cmi.aau.dk/News/Show+news//broadband-bandwidths-in-a-2020-perspective.cid87641
[5] http://prezi.com/yd_nufg9y0nw/software-is-eating-the-world/?utm_source=website&utm_medium=prezi_landing_related_solr&utm_campaign=prezi_landing_related_author
[6] http://www.kpcb.com/insights/2013-internet-trends



Published on August 05, 2013 09:01

Guest Blogging This Week: Roslyn Layton on Broadband Policy

This week it is our pleasure to welcome Roslyn Layton to the TLF, who will be doing some guest blogging on broadband policy issues. Roslyn Layton is a PhD Fellow who studies internet economics at the Center for Communication, Media, and Information Technologies at Aalborg University in Copenhagen, Denmark.  Her program is a partnership among the Danish Department of Research & Innovation, Aalborg University, and Strand Consult, a Danish company.  Prior to her current academic position, Roslyn worked in the IT industry in the U.S., India, and Europe. Her personal page is www.RoslynLayton.com.



She’ll be rolling out three essays over the course of the week based on her extensive research in this field, including her recent series on “10 Myths and Realities of Broadband Internet in the USA.”






Published on August 05, 2013 08:50

August 1, 2013

On the Line between Technology Ethics vs. Technology Policy

What works well as an ethical directive might not work equally well as a policy prescription. Stated differently, what one ought to do in certain situations should not always be synonymous with what one must do by force of law.



I’m going to relate this lesson to tech policy debates in a moment, but let’s first think of an example of how it applies more generally. Consider the Ten Commandments. Some of them make excellent ethical guidelines (especially the stuff about not coveting your neighbor’s house, wife, or possessions). But most of us would agree that, in a free and tolerant society, only two of the Ten Commandments make good law: Thou shalt not kill and Thou shalt not steal.



In other words, not every sin should be a crime. Perhaps some should be; but most should not. Taking this out of the realm of religion and into the world of moral philosophy, we can apply the lesson more generally as: Not every wise ethical principle makes for wise public policy.



Before I get accused of being some sort of nihilist, I want to be clear that I am absolutely not saying that ethics should never have a bearing on policy. Obviously, all political theory is, at some level, reducible to ethical precepts. My own political philosophy is strongly rooted in the Millian harm principle (“The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.”). Not everyone will agree with Mill’s principle, but I would hope most of us could agree that, if we hope to preserve a free and open society, we simply cannot convert every ethical directive into a legal directive, or else the scope of human freedom will shrink precipitously.



Can We Plan for Every “Bad Butterfly-Effect”?

Anyway, what got me thinking about all this and its applicability to technology policy was an interesting Wired essay by Patrick Lin entitled “The Ethics of Saving Lives With Autonomous Cars Are Far Murkier Than You Think.” Lin is the director of the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo and lead editor of Robot Ethics (MIT Press, 2012). So this is a man who has obviously done a lot of thinking about the potential ethical challenges presented by the growing ubiquity of robots and autonomous vehicles in society. (His column makes for particularly fun reading if you’ve ever spent time pondering Asimov’s “Laws of Robotics.”)



Lin walks through various hypothetical scenarios regarding the future of autonomous vehicles and discusses the ethical trade-offs at work here. He asks a number of questions about a future of robotic cars and encourages us to give some thoughtful deliberation to the benefits and potential costs of autonomous vehicles. I will not comment here on all the specific issues that lead Lin to question whether they are worth it; instead I want to focus on Lin’s ultimate conclusion.



In commenting on the potential risks and trade-offs, Lin notes:



The introduction of any new technology changes the lives of future people. We know it as the “butterfly effect” or chaos theory: Anything we do could start a chain-reaction of other effects that result in actual harm (or benefit) to some persons somewhere on the planet.


That’s self-evident, of course, but what of it? How should that truism influence tech ethics and/or tech policy? Here are Lin’s thoughts:



For us humans, those effects are impossible to precisely predict, and therefore it is impractical to worry about those effects too much. It would be absurdly paralyzing to follow an ethical principle that we ought to stand down on any action that could have bad butterfly-effects, as any action or inaction could have negative unforeseen and unintended consequences.

But … we can foresee the general disruptive effects of a new technology, especially the nearer-term ones, and we should therefore mitigate them. The butterfly-effect doesn’t release us from the responsibility of anticipating and addressing problems the best we can.

As we rush into our technological future, don’t think of these sorts of issues as roadblocks, but as a sensible yellow light — telling us to look carefully both ways before we cross an ethical intersection.


Lin makes some important points here, but these closing comments (and his article more generally) have a whiff of “precautionary principle” thinking to them that makes me more than a bit uncomfortable. The precautionary principle generally holds that, because a new idea or technology could pose some theoretical danger or risk in the future, public policies should control or limit the development of such innovations until their creators can prove that they won’t cause any harm. Before we walk down that precautionary path, we need to consider the consequences.



The Problem with Precaution

I have spent a great amount of time writing about the dangers of precautionary principle thinking in my articles and essays, including my recent law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as in two lengthy blog posts asking the questions “Who Really Believes in ‘Permissionless Innovation’?” and “What Does It Mean to ‘Have a Conversation’ about a New Technology?”



The key point I try to get across in those essays is that letting such precautionary thinking guide policy poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity. If public policy is guided at every turn by the precautionary principle, technological innovation is impossible because of fear of the unknown; hypothetical worst-case scenarios trump all other considerations. Social learning and economic opportunities become far less likely, perhaps even impossible, under such a regime. In practical terms, it means fewer services, lower quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living.



In Lin’s essay, we see some precautionary reasoning at work when he argues that “we can foresee the general disruptive effects of a new technology, especially the nearer-term ones, and we should therefore mitigate them” and that we have “responsibility [for] anticipating and addressing problems the best we can.”



To be fair, Lin caveats this by first noting that precise effects are “impossible to predict” and, therefore, that “It would be absurdly paralyzing to follow an ethical principle that we ought to stand down on any action that could have bad butterfly-effects, as any action or inaction could have negative unforeseen and unintended consequences.” Second, as it relates to general effects, he says we should just be “addressing problems the best we can.”



Despite those caveats, I continue to have serious concerns about the potential blurring of ethics and law here. The most obvious question I would have for Lin is: Who is the “we” in this construct?  Is it “we” as individuals and institutions interacting throughout society freely and spontaneously, or is it “we” as in the government imposing precautionary thinking through top-down public policy?



I can imagine plenty of scenarios in which a certain amount of precautionary thinking may be entirely appropriate if applied as an informal ethical norm at the individual, household, organizational or even societal level, but which would not be as sensible if applied as a policy prescription. For example, parents should take steps to shield their kids from truly offensive and hateful material on the Internet before they are mature enough to understand the ramifications of it. But that doesn’t mean it would be wise to enshrine the same principle into law in the form of censorship.



Similarly, there are plenty of smart privacy and security norms that organizations should practice but that need not be forced on them by law, especially since such mandates would carry serious costs. For example, I think organizations should feel a strong obligation to safeguard user data and avoid privacy and security screw-ups. I’d like to see more organizations using encryption wherever they can in their systems and deleting unnecessary data whenever possible. But, for a variety of reasons, I do not believe any of these things should be mandated through law or regulation.



Don’t Foreclose Experimentation

While Lin rightly acknowledges the “negative unforeseen and unintended consequences” of preemptive policy action to address precise concerns, he does not unpack the full ramifications of those unseen consequences. Nor does he explain how the royal “we” separates the “precise” from the “general” concerns. (For example, are the specific issues I just raised in the preceding paragraphs “precise” or “general”? What’s the line between the two?)



But I have a bigger concern with Lin’s argument, as well as with the field of technology ethics more generally: We rarely hear much discussion of the benefits associated with the ongoing process of trial-and-error experimentation and, more importantly, the benefits of failure and what we learn — both individually and collectively — from the mistakes we inevitably make.



The problem with regulatory systems is that they are permission-based. They focus on preemptive remedies that aim to forecast the future, and future mistakes (i.e., Lin’s “butterfly effects”) in particular.  Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things — including what we learn from failed efforts at doing things. But we will never discover better ways of doing things unless the process of evolutionary, experimental change is allowed to continue. We need to keep trying and failing in order to learn how we can move forward. As Samuel Beckett once counseled: “Ever tried. Ever failed. No matter. Try Again. Fail again. Fail better.” Real wisdom is born of experience, especially mistakes we make along the way.



This is why I feel so passionately about drawing a distinction between ethical norms and public policy pronouncements.  Law forecloses. It is inflexible. It does not adapt as efficiently or rapidly as social norms do. Ethics and norms provide guidance but also leave plenty of breathing room for ongoing experimentation, and they are refined continuously in response to ongoing social developments.



It is worth noting that ethics evolve, too. There is a sort of ethical trial-and-error that occurs in society over time as new developments challenge, and then change, old ethical norms. This is another reason we want to be careful about enshrining norms into law.



Thus, policymakers should not impose prospective restrictions on new innovations without clear evidence of actual, not merely hypothesized, harm. That’s especially the case since, more often than not, humans adapt to new technologies and find creative ways to assimilate even the most disruptive innovations into their lives. We cannot possibly plan for all the “bad butterfly-effects” that might occur, and attempts to do so will result in significant sacrifices in terms of social and economic liberty.



The burden of proof should be on those who advocate preemptive restrictions on technological innovation to show why freedom to tinker and experiment must be foreclosed by policy. There should exist the strongest presumption that the freedom to innovate and experiment will advance human welfare and teach us new and better ways of doing things to overcome most of those “bad butterfly-effects” over time.



So, in closing, let us yield to Lin’s “sensible yellow light — telling us to look carefully both ways before we cross an ethical intersection.” But let us not be cowed into an irrational fear of an unknown and ultimately unknowable future. And let us not be tempted to plan for every potential pitfall through preemptive policy prescriptions, lest progress and prosperity be sacrificed as a result of such hubris.




Published on August 01, 2013 07:32

The right role for government in cybersecurity

Today the Heartland Institute is publishing my policy brief, U.S. Cybersecurity Policy: Problems and Principles, which examines the proper role of government in defending U.S. citizens, organizations and infrastructure from cyberattacks, that is, criminal theft, vandalism or outright death and destruction through the use of global interconnected computer networks.



The hype around the idea of cyberterrorism and cybercrime is fast reaching a point where any skepticism risks being shouted down as willful ignorance of the scope of the problem. So let’s begin by admitting that cybersecurity is a genuine existential challenge. Last year, in what is believed to be the most damaging cyberattack against U.S. interests to date, a large-scale hack of some 30,000 Saudi Arabia-based ARAMCO personal computers erased all data on their hard drives. A militant Islamic group called the Sword of Justice took credit, although U.S. Defense Department analysts believe the government of Iran provided support.



This year, the New York Times and Wall Street Journal have had computer systems hacked, allegedly by agents of the Chinese government looking for information on the newspapers’ China sources. In February, the loose-knit hacker group Anonymous claimed credit for a series of hacks of the Federal Reserve Bank, Bank of America, and American Express, targeting documents about salaries and corporate financial policies in an effort to embarrass the institutions. Meanwhile, organized crime rings are testing cybersecurity at banks, universities, government organizations and any other enterprise that maintains databases containing names, addresses, social security and credit card numbers of millions of Americans.



These and other reports, aided by popular entertainment that often depicts social breakdown in the face of a massive cyberattack, have the White House and Congress scrambling to “do something.” This year alone has seen Congressional proposals such as the Cyber Intelligence Sharing and Protection Act (CISPA), the Cybersecurity Act, and a Presidential Executive Order, all aimed at cybersecurity. Common to all three is a drastic increase in the authority and control the federal government would have over the Internet and the information that resides on it should there be any vaguely defined attack on any vaguely defined critical U.S. information assets.





Yet we skeptics recently gained some ammo. McAfee, the security software manufacturer, recently revised its estimate of annual U.S. losses attributed to cybercrime downward to $100 billion, just one-tenth of the staggering $1 trillion it estimated in 2009. This is significant because both President Barack Obama and Gen. Keith Alexander, head of U.S. Cyber Command, have invoked the $1 trillion figure to justify greater government control of the Internet.



To be sure, $100 billion is hard to dismiss, but the figure is comparable to other types of losses U.S. businesses confront. For example, auto accidents result in annual losses between $99 billion and $168 billion. So while cybersecurity is a problem that needs to be addressed, we should be careful about the way we enlist the government to do so.



We should start by questioning the rush to create new laws that have vague definitions and poor measurables for success, yet give the government sweeping powers to collect private information from third parties. The NSA’s massive collection of phone and ISP data on millions of Americans—all done within the legal scope of the PATRIOT Act—should itself give pause to anyone who thinks it’s a good idea to expand the government’s access to information on citizens.



What’s more, vaguely written law opens the door to prosecutorial abuse. My paper goes into more detail about how federal prosecutors used the Computer Fraud and Abuse Act to pile felony charges on Aaron Swartz, the renowned young Internet entrepreneur and co-creator of the social news site Reddit, for what was an act of civil disobedience that entailed, at worst, physical trespassing and a sizable, but far from damaging, violation of the terms of MIT’s JSTOR academic journal indexing service.



There may indeed be some debate over the legal and ethical scope of Swartz’s actions, but they were not aimed at profit or disruption. Yet the federal government decided to use a law designed to protect the public from sophisticated criminal organizations of thieves and fraudsters against a productive member of the Internet establishment, threatening him with 35 years in prison and the loss of all rights to use computers and the Internet for life. Swartz, who was plagued by depression, committed suicide before his case was adjudicated. Prosecutors dropped all charges posthumously, but controversy over the handling of the case continues to this day. (A hat tip to Jerry Brito’s conversation with James Grimmelmann on his Surprisingly Free podcast.)



Proper cybersecurity policy begins with understanding that there’s a limit to what government can do to prevent cybercrime or cyberattacks. Cybersecurity should not be seen as something disassociated from physical safety and security. And, for the most part, physical security is understood to entail personal responsibility. We lock our homes and garages, purchase alarm systems and similar services, and don’t leave valuables in plain sight. Businesses contract with private security companies to safeguard employees and property. Government law enforcement can be effective after the fact – investigating the crime and arresting and prosecuting the perpetrators – but police are not routinely deployed to protect private assets.



Similarly, it should not be the government’s job to protect private information assets. As with physical property, that responsibility falls to the property owner. Of course, we must recognize that government at all levels is an IT user and a custodian of its citizens’ data. As users with an interest in data protection, federal, state, and local government information security managers deserve a place at the table—but as partners and stakeholders, not as dictators.



Since the first computers were networked, cybersecurity has best been managed through evolving best practices that involve communication across the user community. And yes, despite what the President and many members of Congress think, enterprises do share information about cyberattacks. For years they have managed to keep systems secure without turning vast quantities of personal data on clients and customers over to the government absent due process or any judicial warrant.



In terms of lawmaking, cybercriminal law should be treated as an extension of physical criminal law. Theft, espionage, vandalism and sabotage were recognized as crimes long before computers were invented. The legislator’s job is first to determine how current law can apply to new methods used to carry off age-old capers, amending where necessary, as opposed to creating a new category of badly-written laws.



If any new laws are needed, they should be written to punish and deter acts that involve destruction and loss. The severity of the penalties must be consonant with the severity of the act. The law must come down hard on deliberate theft, destruction, or other clear criminal intent. Well-written law will ensure that prosecutorial resources are devoted to stopping organized groups of criminals who use email scams to drain the life savings of pensioners, not to relentlessly pursue a lone activist who, as an act of protest, downloaded and posted public-record local government documents that proved embarrassing to local elected officials.



Finally, my paper also addresses acts of cyberterrorism and cyberwar, which can exceed the reach of domestic law enforcement and involve nation-states or stateless organizations such as Al-Qaida. Combatting international cyberterrorism involves diplomacy and cooperation with allies—as well as rethinking the rules of engagement regarding response to an attack.



While it is wise to have appropriate defenses in place, before rushing to expand FISA courts or demand Internet “kill switches,” we need a calmer discussion of the likelihood of a devastating act of cyberterrorism, such as hacking into air traffic control or attacking the national power grid. Despite popular notions, attacks of this caliber cannot be carried out by a lone individual with a laptop and a public WiFi connection. An attacker would need considerable resources and the cooperation of a large number of insiders, and would have to rely on a number of factors outside his control. For more, I refer readers to a SANS Institute paper and a more recent article in Slate. Both discuss the logistics involved in a number of cyberterrorism scenarios. Suffice it to say, a terrorist can accomplish more with an inexpensive yet well-placed bomb than with a time-consuming multi-stage hack that risks both failure and exposure.



The most important takeaway, however, is that today’s cybersecurity challenges can be met within a constitutional framework that respects liberty, privacy, property and legal due process. Author Eric Foner has written that since the nation’s founding, its most important organizing principle has been to maintain civil law and order within a structure of limited government powers and respect for individual rights. There is no reason this balance needs to be adjusted to favor state power at the expense of individual rights in combating computer crime or defending the nation’s information systems from foreign attack.



Related Articles:



Robert Samuelson Engages in a Bit of Argumentum in Cyber-Terrorem



CISPA’s vast overreach



Rise of the cyber-industrial complex



Do we need a special government program to help cybersecurity supply meet demand?



 




Published on August 01, 2013 05:03

July 30, 2013

How much do FCC tethering rules matter?

Over at The Switch, the Washington Post’s excellent new technology policy blog, Brian Fung has an interesting post about tethering and Google Glass, but I think he perpetuates a common misconception:



Carriers have all sorts of rules about tethering, and sorting through them can be like feeling your way down a dark alley. Verizon used to charge $20 a month for tethering before the FCC ruled it had to allow tethering for free. Now, any data you use comes out of your cellular plan’s overall data allowance. AT&T gives you a separate pool of data for tethering plans, but charges up to $50 a month for the right, much as Verizon once did.


Fung claims that due to the likely increase in tethering as devices like Google Glass come to market, “assuming the FCC didn’t require all wireless carriers to make tethering free, it’d be a huge source of potential revenue for companies like AT&T.”



In fact, the cost of tethering on AT&T is not very different from the cost of doing so on Verizon, which means by definition that AT&T is not likely to get a windfall from increased use of tethering. It’s also evidence that the FCC tethering rule for Verizon doesn’t matter very much.





Let’s look first at the state of tethering on Verizon. New post-paid consumer contracts on Verizon must be for the new Share Everything plans that first came out last year. The plans charge a monthly line access fee per device, which includes unlimited calling and texting, and a monthly fee for data, which includes tethering. You decide which devices you want to connect and how much data you want to use. Let’s say you have 2 smartphones that each use 3GB of data per month. You pay $40/device and $80 for data per month, for a total of $160/month. Again, allegedly because of FCC rules, this plan includes tethering.



AT&T has comparable plans, called Mobile Share. The pricing is a little different, because AT&T charges a different amount per line depending on how much data you get. But if you want 2 smartphones and 6 total GB of data, it costs you $160/month, the same as on Verizon. And guess what, the AT&T plan includes tethering, even though the FCC doesn’t mandate that AT&T provide it.



Unlike Verizon, AT&T still offers its legacy plans to new customers. These plans do not come with free tethering, but the additional cost of tethering is at most $20 per line. Tethering is included if you pay for 5GB of data, and the upgrade from 3GB of data to 5GB of data is from $30 to $50. And that $20 upgrade cost includes 2GB of extra data. But if you want 2 smartphone lines with unlimited calling and texting, with 3GB per line and no tethering, it costs $210/month under the legacy plan. So at least for some users, switching to the Mobile Share plan is both cheaper and comes with the added bonus of free tethering.
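To pull those figures together, here is a minimal sketch in Python that tallies the monthly costs quoted above for a two-smartphone household using about 6 GB of data. The prices are the 2013 list prices cited in this post, not a current rate card.

```python
# Monthly cost comparison for 2 smartphones and ~6 GB of data,
# using the 2013 list prices quoted in this post.

# Verizon Share Everything: $40 line-access fee per smartphone plus $80 for shared data (tethering included).
verizon_share = 2 * 40 + 80

# AT&T Mobile Share: per-line pricing varies with the data bucket, but this configuration also totals $160 (tethering included).
att_mobile_share = 160

# AT&T legacy plan: 2 lines with unlimited talk/text and 3 GB each, no tethering.
att_legacy = 210
att_legacy_tethered = att_legacy + 2 * 20  # tethering upgrade costs at most $20 per line

print(f"Verizon Share Everything:    ${verizon_share}/month")
print(f"AT&T Mobile Share:           ${att_mobile_share}/month")
print(f"AT&T legacy, no tethering:   ${att_legacy}/month")
print(f"AT&T legacy, with tethering: ${att_legacy_tethered}/month at most")
```

On these numbers the shared-data plans are the cheaper route for this household whether or not tethering is wanted, which is the point of the comparison that follows.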



When you consider that a) Verizon doesn’t even offer legacy plans any more, and b) many consumers, especially heavy callers and texters, are better off under the Mobile Share plans anyway, it becomes clear that tethering is not really more lucrative for AT&T than for Verizon. The FCC’s tethering mandate for Verizon did not make tethering much cheaper on Verizon than on AT&T, because there is actually fierce competition between Verizon and AT&T. If anything, the mandate probably incentivized Verizon to ditch their legacy plans for new customers, restricting consumer choice. But the bottom line is that, contra Fung, tethering is not likely to be a major source of revenue for AT&T absent FCC intervention.




Published on July 30, 2013 10:05

Jerry Ellig on the Universal Service Fund


Jerry Ellig, senior research fellow at the Mercatus Center at George Mason University, discusses the FCC’s Lifeline assistance benefit funded through the Universal Service Fund (USF). The program, created in 1997, subsidizes phone services for low-income households. The USF is not funded through the federal budget but rather via a fee on monthly phone bills, which reached an all-time high of 17% of telecom companies’ revenues last year. Ellig discusses the similarities between the USF fee and a tax, how the fee fluctuates, how subsidies to the telecom industry have boomed in recent years, and how to curb the waste, fraud, and abuse that result from the Lifeline assistance benefit.



Download



Related Links


Jerry Ellig’s Biography Mercatus Center


Benefits, Costs and other Important Stuff-Demystifying Regulatory Analysis Ellig, Mercatus Center

The Future of Regulation Ellig, Pepperdine University School of Public Policy



Published on July 30, 2013 03:00

July 29, 2013

No Competitive Magic in Spectrum Caps

The 600 MHz spectrum auction “represents the last best chance to promote competition” among mobile wireless service providers, according to the written testimony of a T-Mobile executive who appeared before a congressional subcommittee on Jul. 23 and testified in rhetoric reminiscent of a bygone era.



The idea that an activist Federal Communications Commission is necessary to preserve and promote competition is a throwback to the government-sanctioned Ma Bell monopoly era.  Sprint still uses the term “Twin Bells” in its FCC pleadings to refer to AT&T and Verizon Wireless in the hope that, for those who can remember the Bell System, the incantation will elicit a visceral response.  The fact is most of the FCC’s efforts to preserve and promote competition have failed, entailed serious collateral damage, or both.



Unless Congress and the FCC get the details right, the implementation of an innovative auction that will free up spectrum that is currently underutilized for broadcasting and make it available for mobile communications could fail to raise in excess of $7 billion for building a nationwide public safety network and making a down payment on the national debt.  Aside from ensuring that broadcasting is not disrupted in the process, one important detail concerns whether the auctioning will be open to every qualified bidder, or whether government officials will, in effect, pick winners and losers before the auctioning begins.



Wireless carriers T-Mobile and Sprint, both of which freely chose to load up on above-1 GHz spectrum in the past, now want to improve their access to low-frequency spectrum by capping the amount of spectrum below 1 GHz that any one carrier can hold.  T-Mobile proposes that no carrier be permitted to hold more than one-third of the spectrum below 1 GHz.  A cap would hit AT&T and Verizon Wireless. A lawyer for T-Mobile has even suggested the possibility that if regulators don’t ambush AT&T and Verizon Wireless going into the 600 MHz auction, then T-Mobile and perhaps others may not participate at all.



Fewer bidders could lead to lower auction proceeds, and the idea of T-Mobile and perhaps others sitting out the auction is intended to moot the objection that excluding AT&T and Verizon Wireless would depress bidding by suggesting that government and taxpayers can’t have it both ways. Either AT&T and Verizon Wireless are allowed to bid; OR, if the coast is clear, T-Mobile, Sprint and perhaps others will bid.  An auction in which all qualified bidders participate, apparently, may be out of the question.



Meanwhile, Sprint wants the FCC to classify, for regulatory purposes, the massive spectrum available to Sprint and its wholly-owned subsidiary Clearwire above 1 GHz as of inferior marketplace value and therefore not subject to any cap whatsoever—a policy that would let Sprint off the hook when it seeks to acquire more spectrum for itself in the future.



The implication of the T-Mobile and Sprint advocacy is that AT&T’s and Verizon Wireless’ low-frequency spectrum holdings confer a decisive competitive advantage; and that both T-Mobile and Sprint, through no fault of their own, could be irretrievably crippled if policymakers don’t intervene to ensure that they can immediately acquire significant amounts of low-frequency spectrum.



Ideally, any carrier would prefer to offer mobile wireless services using a combination of low- and high-frequency spectrum (see, e.g., the FCC’s Sixteenth Wireless Competition Report at page 17).  In the long run, T-Mobile and Sprint/Clearwire could theoretically improve operational flexibility by acquiring more of the low-frequency spectrum—although the competitive significance of this added flexibility is becoming less obvious as a result of changes in technology and consumer demand.



On the other hand, given that the upcoming 600 MHz auction comprises the only source of new spectrum on the horizon, were a cap to be imposed now it could inflict a severe hardship on AT&T and Verizon Wireless in the short- to medium-term.



That’s because in terms of network congestion, T-Mobile and Sprint currently have the competitive advantage.  Their networks are less congested.  Earlier this month, Sprint announced unlimited voice, text and data plans, something that would not be possible on a congested network.  And T-Mobile’s current advertising claims that T-Mobile’s network “delivers 50% more bandwidth than AT&T for significantly less congestion.”



What T-Mobile and Sprint actually fear is not the possibility that they will be unfairly foreclosed from acquiring more spectrum, but the possibility that AT&T and Verizon Wireless will be able to obtain additional spectrum for relieving network congestion, and that as a result there will be fewer dissatisfied AT&T and Verizon Wireless customers for Sprint and T-Mobile to poach.



The U.S. Department of Justice, which has warned of a possibility that AT&T or Verizon Wireless could try to act anticompetitively to “foreclose” a competitor from acquiring needed spectrum, is oblivious to the reality of network congestion as it relates in particular to AT&T and Verizon Wireless.



An obvious example of anticompetitive foreclosure occurs when a firm acquires an essential input for no other purpose than to keep it out of the hands of a competitor, not because the acquiring firm needs or intends to use the input itself.  Not only does the FCC have rules that prohibit stockpiling, or “warehousing,” but AT&T and Verizon Wireless have a valid commercial purpose for acquiring more spectrum: to alleviate network congestion.  If anything, AT&T and Verizon Wireless are the most likely victims of a foreclosure strategy.



Depriving the two most popular wireless service providers of additional spectrum by operation of regulation is a foreclosure strategy in which government is the bad actor.  If the FCC aids and abets Sprint’s and T-Mobile’s invitation to foreclose on their competitors in the hope of forcing customers of AT&T and Verizon Wireless to jump ship without the need to offer them lower prices or other inducements to switch, then it will be an egregious example of government picking winners and losers.



Until recently, both Sprint and T-Mobile were losing customers and struggling to attract investment capital.  Whatever justification there might have been for government intervention a short time ago has been overtaken by events.



The FCC has approved an acquisition of Sprint by Softbank, a deal which provides $5 billion for Sprint to invest in network and service improvements, as well as Sprint’s acquisition of 100 percent of the stock of Clearwire.



T-Mobile is reinventing itself with innovative pricing and service plans, a network upgrade, and the iPhone.  Company officials expect that, by the end of this year, they will halt the defection of contract customers that began in 2009, and that they will begin adding customers in 2014.



“T-Mobile’s innovative moves are putting pressure on our competitors, forcing other carriers – including AT&T and Verizon – to follow suit and start treating their own customers differently,” according to the Congressional testimony of the T-Mobile executive who testified Jul. 23.  “That’s what healthy competition achieves.”



T-Mobile’s recent success in the marketplace vindicates the U.S. Department of Justice’s assertion that “each of the Big Four nationwide carriers is especially well-positioned to drive competition” when it sued to block a proposed merger between T-Mobile and AT&T in 2011—unless the price of an independent T-Mobile is a continuing need for special treatment conferred by regulators at the expense of competitors.



The FCC and the Antitrust Division have for years successfully justified ongoing Congressional appropriations by arguing that they are responsible for competition in telecommunications and that they can play a continuing, vital role in preserving and promoting it.  In reality, technological development has made it possible for telephone companies, cable operators, and wireless providers to compete with one another and develop broadband Internet services.  The only useful role the agencies have played is peeling back regulatory barriers that prevent competition or discourage private investment—later rather than sooner.



But as long as the agencies manage to hold themselves out, illogically, as indispensable protectors and promoters of competition, special interests are going to exploit the possibilities.  Competition was supposed to substitute for and not supplement regulation, and the proposals for spectrum aggregation provide yet another example of why it is time for Congress to complete the deregulation of telecommunications, a process that has been going on for, oh, about 30 years.




Published on July 29, 2013 16:06
