Adam Thierer's Blog, page 44

May 11, 2014

The Gravest Threat To The Internet

Allowing broadband providers to impose tolls on Internet companies represents a “grave” threat to the Internet, or so wrote several Internet giants and their allies in a letter to the Federal Communications Commission this past week.


The reality is that broadband networks are very expensive to build and maintain.  Broadband companies have invested approximately $250 billion in U.S. wired and wireless broadband networks—and have doubled average delivered broadband speeds—just since President Obama took office in early 2009.  Nevertheless, some critics claim that American broadband is still too slow and expensive.


The current broadband pricing model is designed to recover the entire cost of maintaining and improving the network from consumers.  Internet companies get free access to broadband subscribers.


Although the broadband companies are not poised to experiment with different pricing models at this time, the Internet giants and their allies are mobilizing against the hypothetical possibility that they might in the future.  But this is not the gravest threat to the Internet.  Broadband is a “multisided” market like newspapers.  Newspapers have two sets of customers—advertisers and readers—and both “pay to play.”  Advertisers pay different rates depending on how much space their ads take up and on where the ads appear in the newspaper.  And advertisers underwrite much of the cost of producing newspapers.


Or perhaps broadband providers might follow the longstanding practice of airlines that charge more than one price on the same flight.  In the early days of air travel, passengers only had a choice of first class.  The introduction of discounted coach fares made it affordable for many more people to fly, and generated revenue to pay for vastly expanded air service.


Broadband companies voluntarily invest approximately $65 billion per year because they fundamentally believe that more capacity and lower prices will expand their markets.  “Foreign” devices, content and applications are consistent with this vision because they stimulate demand for broadband.


The Internet giants and their allies oppose “paid prioritization” in particular.  But this is like saying the U.S. Postal Service shouldn’t be able to offer Priority or Express mail.


One danger in cementing the current pricing model into regulation under the banner of preserving the open Internet is that it would prohibit alternative pricing strategies that could yield lower prices and better service for consumers.


FCC Chairman Tom Wheeler intends for his agency to begin a rulemaking proceeding this week on the appropriate regulatory treatment of broadband.  Earlier this month in Los Angeles, Wheeler said the FCC will be asking for input as to whether it should fire up “Title II.”


Wheeler was referring to a well-known section of the Communications Act of 1934 centered around pricing regulation that buttressed the Bell System monopoly and gave birth to the regulatory morass that afflicted telecom for decades.  A similar version of suffocating regulation was imposed on the cable companies in 1992 in a quixotic attempt to promote competition and secure lower prices for consumers.


Then, as now, cable and telephone companies were criticized for high prices, sub-par service and/or failing to be more innovative.  And regulation didn’t help.  There was widespread agreement that other deregulated industries were outperforming the highly-regulated cable and telecom companies.


By 1996, Congress overwhelmingly deemed it necessary to unwind regulation of both cable and telephone firms “in order to secure lower prices and higher quality services for American telecommunications consumers and encourage the rapid deployment of new telecommunications technologies.”


With this history as a guide, it is safe to assume not only that the mere threat of a new round of price regulation could have a chilling effect on the massive private investment that is still likely to be needed to expand bandwidth to meet surging demand, but also that enactment of such regulation could be a disaster.


Diminished investment is the gravest threat to the Internet, because reduced investment could lead to higher costs, congestion, higher prices and fewer opportunities for makers of devices, content and applications to practice innovation.


 


Published on May 11, 2014 20:35

May 8, 2014

Killing TV Stations Is the Intended Consequence of Video Regulation Reform

Today is a big day in Congress for the war that cable and satellite providers (MVPDs) are waging on broadcast television stations. The House Judiciary Committee is holding a hearing on the compulsory licenses for broadcast television programming in the Copyright Act, and the House Energy and Commerce Committee is voting on a bill to reauthorize “STELA” (the compulsory copyright license for the retransmission of distant broadcast signals by satellite operators). The STELA license is set to expire at the end of the year unless Congress reauthorizes it, and MVPDs see the potential for Congressional action as an opportunity for broadcast television to meet its Waterloo. They desire a decisive end to the compulsory copyright licenses, the retransmission consent provision in the Communications Act, and the FCC’s broadcast exclusivity rules — which would also be the end of local television stations.


The MVPD industry’s ostensible motivations for going to war are retransmission consent fees and television “blackouts,” but the real motive is advertising revenue.


The compulsory copyright licenses prevent MVPDs from inserting their own ads into broadcast programming streams, and the retransmission consent provision and broadcast exclusivity agreements prevent them from negotiating directly with the broadcast networks for a portion of their available advertising time. If these provisions were eliminated, MVPDs could negotiate directly with broadcast networks for access to their television programming and appropriate TV station advertising revenue for themselves.


The real motivation is in the numbers. According to the FCC’s most recent media competition report, MVPDs paid a total of approximately $2.4 billion in retransmission consent fees in 2012. (See 15th Report, Table 19) In comparison, TV stations generated approximately $21.3 billion in advertising that year. Which is more believable: (1) That paying $2.4 billion in retransmission consent fees is “just not sustainable” for an MVPD industry that generated nearly $149 billion from video services in 2011 (See 15th Report, Table 9), or (2) That MVPDs want to appropriate $21.3 billion in additional advertising revenue by cutting out the “TV station middleman” and negotiating directly for television programming and advertising time with national broadcast networks? (Hint: The answer is behind door number 2.)
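For what it’s worth, the back-of-the-envelope math is easy to check. This quick sketch (in Python, using only the FCC figures quoted above) shows just how lopsided the two revenue streams are:

```python
# Figures quoted above, from the FCC's 15th Report (Tables 9 and 19)
retrans_fees_2012 = 2.4e9     # retransmission consent fees paid by MVPDs
station_ad_rev_2012 = 21.3e9  # TV station advertising revenue
mvpd_video_rev_2011 = 149e9   # MVPD revenue from video services

# Retransmission fees are a small fraction of MVPD video revenue...
fee_share = retrans_fees_2012 / mvpd_video_rev_2011
print(f"Fees as share of MVPD video revenue: {fee_share:.1%}")  # ~1.6%

# ...while the advertising pool at stake is nearly nine times the fees.
print(f"Ad revenue vs. fees: {station_ad_rev_2012 / retrans_fees_2012:.1f}x")  # ~8.9x
```

Retransmission fees amount to well under two percent of MVPD video revenue, while the advertising pool behind door number 2 is nearly nine times the fees being complained about.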


What do compulsory copyright licenses, retransmission consent, and broadcast exclusivity agreements have to do with video advertising revenue?



The compulsory copyright licenses prohibit MVPDs from substituting their own advertisements for TV station ads: Retransmission of a broadcast television signal by an MVPD is “actionable as an act of infringement” if the content of the signal, including “any commercial advertising,” is “in any way willfully altered by the cable system through changes, deletions, or additions” (see 17 U.S.C. §§ 111(c)(3), 119(a)(5), and 122(e));
The retransmission consent provision prohibits MVPDs from negotiating directly with television broadcast networks for access to their programming or a share of their available advertising time: An MVPD cannot retransmit a local commercial broadcast television signal without the “express authority of the originating station” (see 47 U.S.C. § 325(b)(1)(A)); and
Broadcast exclusivity agreements (also known as non-duplication and syndicated exclusivity agreements) prevent MVPDs from circumventing the retransmission consent provision by negotiating for nationwide retransmission consent with one network-affiliated, owned-and-operated TV station. (If an MVPD were able to retransmit the TV signals from only one television market nationwide, MVPDs could, in effect, negotiate with broadcast networks directly, because broadcast programming networks own and operate their own TV stations in some markets.)

The effect of the compulsory copyright licenses, retransmission consent provision, and broadcast exclusivity agreements is to prevent MVPDs from realizing any of the approximately $20 billion in advertising revenue generated by broadcast television programming every year.


Why did Congress want to prevent MVPDs from realizing any advertising revenue from broadcast television programming?


Congress protected the advertising revenue of local TV stations because TV stations are legally prohibited from realizing any subscription revenue for their primary programming signal. (See 47 U.S.C. § 336(b)) Congress chose to balance the burden of the broadcast business model mandate with the benefits of protecting their advertising revenue. The law forces TV stations to rely primarily on advertising revenue to generate profits, but the law also protects their ability to generate advertising revenue. Conversely, the law allows MVPDs to generate both subscription revenue and advertising revenue for their own programming, but prohibits them from poaching advertising revenue from broadcast programming.


MVPDs want to upset the balance by repealing the regulations that make free over-the-air television possible without repealing the regulations that require TV stations to provide free over-the-air programming. Eliminating only the regulations that benefit broadcasters while retaining their regulatory burdens is not a free market approach — it is a video marketplace firing squad aimed squarely at the heart of TV stations.


Adopting the MVPD version of video regulation reform would not kill broadcast programming networks. They always have the option of becoming cable networks and selling their programming and advertising time directly to MVPDs, or of distributing their content themselves directly over the Internet.


The casualty of this so-called “reform” effort would be local TV stations, who are required by law to rely on advertising and retransmission consent fees for their survival. Policymakers should recognize that killing local TV stations for their advertising revenue is the ultimate goal of current video reform efforts before adopting piecemeal changes to the law. If policymakers intend to kill TV stations, they should not attribute the resulting execution to the “friendly fire” of unintended consequences. They should recognize the legitimate consumer and investment-backed expectations created by the current statutory framework and consider appropriate transition mechanisms after a comprehensive review.


Published on May 08, 2014 06:22

May 7, 2014

Crovitz on The End of the Permissionless Web

Few people have been more tireless in their defense of the notion of “permissionless innovation” than Wall Street Journal columnist L. Gordon Crovitz. In his weekly “Information Age” column for the Journal (which appears each Monday), Crovitz has consistently sounded the alarm regarding new threats to Internet freedom, technological freedom, and individual liberties. It was, therefore, a great honor for me to wake up Monday morning and read his latest post, “The End of the Permissionless Web,” which discussed my new book “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.”


“The first generation of the Internet did not go well for regulators,” Crovitz begins his column. “Despite early proposals to register websites and require government approval for business practices, the Internet in the U.S. developed largely without bureaucratic control and became an unstoppable engine of innovation and economic growth.” Unfortunately, he correctly notes:


Regulators don’t plan to make the same mistake with the next generation of innovations. Bureaucrats and prosecutors are moving in to undermine services that use the Internet in new ways to offer everything from getting a taxi to using self-driving cars to finding a place to stay.


This is exactly why I penned my little manifesto. As Crovitz notes in his essay, new regulatory threats to both existing and emerging technologies are popping up almost daily. He highlights current battles over Uber, Airbnb, 23andMe, commercial drones, and more. And his previous columns have discussed many other efforts to “permission” innovation and force heavy-handed, top-down regulatory schemes on fast-paced and rapidly evolving sectors and technologies. As he argues:


The hardest thing for government regulators to do is to regulate less, which is why the development of the open-innovation Internet was a rare achievement. The regulation the digital economy needs most now is for permissionless innovation to become the default law of the land, not the exception.


Amen, brother! What we need to do is find more constructive ways to deal with some of the fears that motivate calls for regulation. But, as I noted in my little book, how we address these concerns matters greatly. If and when problems develop, there are many less burdensome ways to address them than through preemptive technological controls. The best solutions to complex social problems are almost always organic and “bottom-up” in nature. Luckily, there exists a wide variety of constructive approaches that can be tapped to address or alleviate concerns associated with new innovations. I get very specific about those approaches in Chapter 5 of my book, which is entitled, “Preserving Permissionless Innovation: Principles of Progress.”


So, I hope you’ll download a free copy of the book and take a look. And my sincerest thanks to Gordon Crovitz for featuring it in his excellent new column.


____________________________________


Additional Reading:



I summarized the major themes and conclusions of my book in this Medium essay, “Why Permissionless Innovation Matters” (4/24/14).
A short essay I penned for the R Street Blog, “Bucking the ‘Mother, May I?’ Mentality.”
I discussed my book on the “New Books in Technology” podcast (4/4/14).
Konstantinos Komaitis discusses my book in his essay, “Permissionless Innovation: Why It Matters” (4/24/14).

Published on May 07, 2014 20:00

May 5, 2014

Skorup and Thierer paper on TV Regulation

Adam and I recently published a Mercatus research paper titled Video Marketplace Regulation: A Primer on the History of Television Regulation And Current Legislative Proposals, now available on SSRN. I presented the paper at a Silicon Flatirons academic conference last week.


We wrote the paper for a policy audience and students who want succinct information and history about the complex world of television regulation. Television programming is delivered to consumers in several ways, including via cable, satellite, broadcast, IPTV (like Verizon FiOS), and, increasingly, over-the-top broadband services (like Netflix and Amazon Instant Video). Despite their obvious similarities–transmitting movies and shows to a screen–each distribution platform is regulated differently.


The television industry is in the news frequently because of problems exacerbated by the disparate regulatory treatment. The Time Warner Cable-CBS dispute last fall (and TWC’s ensuing loss of customers), the Aereo lawsuit, and the Comcast-TWC proposed merger were each caused at least indirectly by some of the ill-conceived and antiquated TV regulations we describe. Further, TV regulation is a “thicket of regulations,” as the Copyright Office has said, which benefits industry insiders at the expense of most everyone else.


We contend that overregulation of television resulted primarily because past FCCs, and Congress to a lesser extent, wanted to promote several social objectives through a nationwide system of local broadcasters:


1) Localism

2) Universal Service

3) Free (that is, ad-based) television; and

4) Competition


These objectives can’t be accomplished simultaneously without substantial regulatory mandates. Further, these social goals may even contradict each other in some respects.


For decades, public policies constrained TV competitors to accomplish those goals. We recommend instead a reliance on markets and consumer choice through comprehensive reform of television laws, including repeal of compulsory copyright laws, must-carry, retransmission consent, and media concentration rules.


At the very least, our historical review of TV regulations provides an illustrative case study of how regulations accumulate haphazardly over time, demand additional “correction,” and damage dynamic industries. Congress and the FCC focused on attaining particular competitive outcomes through industrial policy, unfortunately. Our paper provides support for market-based competition and regulations that put consumer choice at the forefront.


Published on May 05, 2014 10:24

Book event on Wednesday: A libertarian vision of copyright

Last week, the Mercatus Center at George Mason University published the new book by Tom W. Bell, Intellectual Privilege: Copyright, Common Law, and the Common Good, which Eugene Volokh calls “A fascinating, highly readable, and original look at copyright[.]” Richard Epstein says that Bell’s book “makes a distinctive contribution to a field in which fundamental political theory too often takes a back seat to more overt utilitarian calculations.” Some key takeaways from the book:



If copyright were really property, like a house or cell phone, most Americans would belong in jail. That nobody seriously thinks infringement should be fully enforced demonstrates that copyright is not property and that copyright policy is broken.
Under the Founders’ Copyright, as set forth in the 1790 Copyright Act, works could be protected for a maximum of 28 years. Under present law, they can be extended to 120 years. The massive growth of intellectual privilege serves big corporate publishers to the detriment of individual authors and artists.
By discriminating against unoriginal speech, copyright sharply limits our freedoms of expression.

We should return to the wisdom of the Founders and regard copyrights as special privileges narrowly crafted to serve the common good.

This week, on Wednesday, May 7, at noon, the Cato Institute will hold a book forum featuring Bell, and comments by Christopher Newman, Assistant Professor, George Mason University School of Law. It’s going to be a terrific event and you should come. Please make sure to RSVP.


Published on May 05, 2014 08:07

FCC Incentive Auction Plan Won’t Benefit Rural America

The FCC is set to vote later this month on rules for the incentive auction of spectrum licenses in the broadcast television band. These licenses would ordinarily be won by the highest bidders, but not in this auction. The FCC plans to ensure that Sprint and T-Mobile win licenses in the incentive auction even if they aren’t willing to pay the highest price, because it believes that Sprint and T-Mobile will expand their networks to cover rural areas if it sells them licenses at a substantial discount.


This theory is fundamentally flawed. Sprint and T-Mobile won’t substantially expand their footprints into rural areas even if the FCC were to give them spectrum licenses for free. There simply isn’t enough additional revenue potential in rural areas to justify covering them with four or more networks no matter what spectrum is used or how much it costs. It is far more likely that Sprint and T-Mobile will focus their efforts on more profitable urban areas while continuing to rely on FCC roaming rights to use networks built by other carriers in rural areas.


The television band spectrum the FCC plans to auction is at relatively low frequencies that are capable of covering larger areas at lower costs than higher frequency mobile spectrum, which makes the spectrum particularly useful in rural areas. The FCC theorizes that, if Sprint and T-Mobile could obtain additional low frequency spectrum with a substantial government discount, they will pass that discount on to consumers by expanding their wireless coverage in rural areas.


The flaw in this theory is that it considers costs without considering revenue. Sprint and T-Mobile won’t expand coverage in rural areas unless the potential for additional revenue exceeds the costs of providing rural coverage.


A study authored by Anna-Maria Kovacs, a scholar at Georgetown University, demonstrates that the potential revenue in rural areas is insufficient to justify substantial rural deployment by Sprint and T-Mobile even at lower frequencies. The study concludes that the revenue potential per square mile in areas that are currently covered by 4 wireless carriers is $41,832. The potential revenue drops to $13,632 per square mile in areas covered by 3 carriers and to $6,219 in areas covered by 2 carriers. The potential revenue in areas covered by 4 carriers is thus roughly three times greater than in areas covered by 3 carriers and nearly seven times greater than in areas covered by 2 carriers. It is unlikely that propagation differences between even the lowest and the highest frequency mobile spectrum could reduce costs by a factor greater than three due to path loss and barriers to optimal antenna placement.
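As a quick sanity check on those ratios, here is the arithmetic using only the per-square-mile figures quoted above from the Kovacs study:

```python
# Revenue potential per square mile, as quoted above from the Kovacs study
rev_4_carriers = 41_832
rev_3_carriers = 13_632
rev_2_carriers = 6_219

print(f"4-carrier vs. 3-carrier areas: {rev_4_carriers / rev_3_carriers:.1f}x")  # ~3.1x
print(f"4-carrier vs. 2-carrier areas: {rev_4_carriers / rev_2_carriers:.1f}x")  # ~6.7x
```

Either way the gap is large: revenue potential falls off a cliff as the number of carriers already serving an area drops.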


Even assuming the low frequency spectrum could lower costs by a factor greater than three, the revenue data in the Kovacs report indicates that additional low frequency spectrum would, at best, support only 1 additional carrier in areas currently covered by 3 carriers. Low frequency spectrum wouldn’t support even one additional carrier in areas that are already covered by 1 or 2 carriers: It would be uneconomic for additional carriers to deploy in those areas at any frequency.


The challenging economics of rural wireless coverage are the primary reason the FCC gave Sprint and T-Mobile a roaming right to use the wireless networks built by Verizon and AT&T even in areas where Sprint and T-Mobile already hold low frequency spectrum.


When the FCC created the automatic roaming right, it exempted carriers from the duty to provide roaming in markets where the requesting carrier already has spectrum rights. (2007 Roaming Order at ¶ 48) The FCC found that, “if a carrier is allowed to ‘piggy-back’ on the network coverage of a competing carrier in the same market, then both carriers lose the incentive to buildout into high cost areas in order to achieve superior network coverage.” (Id. at ¶ 49). The FCC subsequently repealed this spectrum exemption at the urging of Sprint and T-Mobile, because “building another network may be economically infeasible or unrealistic in some geographic portions of [their] licensed service areas.” (2010 Roaming Order at ¶ 23)


As a result, Sprint and T-Mobile have chosen to rely primarily on roaming agreements to provide service in rural areas, because roaming is cheaper than building their own networks. The most notorious example is Sprint, which actually reduced its rural coverage to cut costs after the FCC eliminated the spectrum exemption to the automatic roaming right. This decision was not driven by Sprint’s lack of access to low frequency spectrum — Sprint has held low frequency spectrum on a nationwide basis for years.


The limited revenue potential offered by rural areas and the superior economic alternative to rural deployment provided by FCC’s automatic roaming right indicate that Sprint and T-Mobile won’t expand their rural footprints at any frequency. Ensuring that Sprint and T-Mobile win low frequency spectrum at a substantial government discount would benefit their bottom lines, but it won’t benefit rural Americans.


Published on May 05, 2014 07:31

May 2, 2014

What Vox Doesn’t Get About the “Battle for the Future of the Internet”

My friend Tim Lee has an article at Vox that argues that interconnection is the new frontier on which the battle for the future of the Internet is being waged. I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless.


How the Internet used to work

The Internet is a network of networks. Your ISP is a network. It connects to other ISPs and exchanges traffic with them. Because these connections are roughly equally valuable to both parties, they often take the form of “settlement-free peering,” in which networks exchange traffic on an unpriced basis.


Not every ISP connects directly to every other ISP. For example, a local ISP in California probably doesn’t connect directly to a local ISP in New York. If you’re an ISP that wants to be sure your customer can reach every other network on the Internet, you have to purchase “transit” services from a bigger or more specialized ISP. This would allow ISPs to transmit data along what used to be called “the backbone” of the Internet. Transit providers that exchange roughly equally valued traffic with other networks themselves have settlement-free peering arrangements with those networks.
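To make the “network of networks” idea concrete, here is a toy sketch (the network names are made up) of how two local ISPs that don’t peer with each other directly can still reach each other through a transit provider:

```python
# Toy interconnection graph (hypothetical network names).
# Neither local ISP peers with the other directly; both buy transit
# from a backbone provider, which gives their customers a path.
peering = {
    "CalLocalISP": {"BackboneCo"},
    "NYLocalISP": {"BackboneCo"},
    "BackboneCo": {"CalLocalISP", "NYLocalISP"},
}

def reachable(src, dst):
    """Simple breadth-first search over the interconnection graph."""
    seen, queue = {src}, [src]
    while queue:
        node = queue.pop(0)
        if node == dst:
            return True
        for neighbor in peering.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

print(reachable("CalLocalISP", "NYLocalISP"))  # True, via the transit provider
```

The real Internet’s routing is vastly more complicated, of course, but the basic topology question — “can my customers reach every other network?” — is exactly what transit purchases answer.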


How the Internet works now

A few things have changed in the last several years. One major change is that most major ISPs have very large, geographically-dispersed networks. For example, Comcast serves customers in 40 states, and other networks can peer with them in 18 different locations across the US. These 18 locations are connected to each other through very fast cables that Comcast owns. In other words, Comcast is not just a residential ISP anymore. They are part of what used to be called “the backbone,” although it no longer makes sense to call it that since there are so many big pipes that cross the country and so much traffic is transmitted directly through ISP interconnection.


Another thing that has changed is that content providers are increasingly delivering a lot of a) traffic-intensive and b) time-sensitive content across the Internet. This has created the incentive to use what are known as content-delivery networks (CDNs). CDNs are specialized ISPs that locate servers right on the edge of all terminating ISPs’ networks. There are a lot of CDNs—here is one list.


By locating on the edge of each consumer ISP, CDNs are able to deliver content to end users with very low latency and at very fast speeds. For this service, they charge money to their customers. However, they also have to pay consumer ISPs for access to their networks, because the traffic flow is all going in one direction and otherwise CDNs would be making money by using up resources on the consumer ISP’s network.


CDNs’ payments to consumer ISPs are also a matter of equity between the ISP’s customers. Let’s suppose that Vox hires Amazon CloudFront to serve traffic to Comcast customers (they do). If the 50 percent of Comcast customers who wanted to read Vox suddenly started using up so many network resources that Comcast and CloudFront needed to upgrade their connection, who should pay for the upgrade? The naïve answer is to say that Comcast should, because that is what customers are paying them for. But the efficient answer is that the 50 percent who want to access Vox should pay for it, and the 50 percent who don’t want to access it shouldn’t. By Comcast charging CloudFront to access the Comcast network, and CloudFront passing along those costs to Vox, and Vox passing along those costs to customers in the form of advertising, the resource costs of using the network are being paid by those who are using them and not by those who aren’t.
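A stylized example may help here (the numbers are hypothetical, not from Comcast or CloudFront). Suppose a link upgrade costs $1 million and half of 10 million subscribers read Vox:

```python
# Hypothetical numbers: a $1M link upgrade, 10M subscribers,
# half of whom read Vox.
upgrade_cost = 1_000_000
subscribers = 10_000_000
vox_readers = subscribers // 2

# If the ISP simply eats the cost, every subscriber bears it equally:
print(upgrade_cost / subscribers)   # $0.10 each, readers and non-readers alike

# If the cost flows ISP -> CDN -> Vox -> advertisers -> readers,
# only the subscribers who actually use the link bear it:
print(upgrade_cost / vox_readers)   # $0.20 each; non-readers pay nothing
```

The totals are identical; what changes is who pays, and the second arrangement charges the people actually consuming the resource.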


What happened with the Netflix/Comcast dust-up?

Netflix used multiple CDNs to serve its content to subscribers. For example, it used a CDN provided by Cogent to serve content to Comcast customers. Cogent ran out of capacity and refused to upgrade its link to Comcast. As a result, some of Comcast’s customers experienced a decline in quality of Netflix streaming. However, Comcast customers who accessed Netflix with an Apple TV, which is served by CDNs from Level 3 and Limelight, never had any problems. Cogent has had peering disputes in the past with many other networks.


To solve the congestion problem, Netflix and Comcast negotiated a direct interconnection. Instead of Netflix paying Cogent and Cogent paying Comcast, Netflix is now paying Comcast directly. They signed a multi-year deal that is reported to reduce Netflix’s costs relative to what they would have paid through Cogent. Essentially, Netflix is vertically integrating into the CDN business. This makes sense. High-quality CDN service is essential to Netflix’s business; they can’t afford to experience the kind of incident that Cogent caused with Comcast. When a service is strategically important to your business, it’s often a good idea to vertically integrate.


It should be noted that what Comcast and Netflix negotiated was not a “fast lane”—Comcast is prohibited from offering prioritized traffic as a condition of its merger with NBC/Universal.


What about Comcast’s market power?

I think that one of Tim’s hangups is that Comcast has a lot of local market power. There are lots of barriers to creating a competing local ISP in Comcast’s territories. Doesn’t this mean that Comcast will abuse its market power and try to gouge CDNs?


Let’s suppose that Comcast is a pure monopolist in a two-sided market. It’s already extracting the maximum amount of rent that it can on the consumer side. Now it turns to the upstream market and tries to extract rent. The problem with this is that it can only extract rents from upstream content producers insofar as it lowers the value of the rent it can collect from consumers. If customers have to pay higher Netflix bills, then they will be less willing to pay Comcast. The fact that the market is two-sided does not significantly increase the amount of monopoly rent that Comcast can collect.
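This argument can be illustrated with a stylized model (all numbers are hypothetical): if subscribers have a fixed total willingness to pay for broadband-plus-streaming, every dollar the monopolist extracts upstream is a dollar it can no longer extract from subscribers, because the upstream fee gets passed through to the same consumers.

```python
# Stylized two-sided market (hypothetical numbers, for illustration only).
# A subscriber values broadband-plus-streaming at $100/month in total.
WILLINGNESS_TO_PAY = 100
STREAMING_COST = 10  # the streaming service's own cost per subscriber

def comcast_total_rent(interconnection_fee):
    """Total revenue the ISP can extract per subscriber, given a fee it
    charges the streaming service (which passes it through to the user)."""
    streaming_price = STREAMING_COST + interconnection_fee  # pass-through
    max_broadband_price = WILLINGNESS_TO_PAY - streaming_price
    return max_broadband_price + interconnection_fee

# Whatever fee the ISP charges upstream, its total take is unchanged:
print([comcast_total_rent(fee) for fee in (0, 5, 20)])  # [90, 90, 90]
```

In this toy setup the fee just moves money between the two sides of the market; it does not enlarge the monopoly rent, which is the point of the paragraph above.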


Interconnection fees that are being paid to Comcast (and virtually all other major ISPs) have virtually nothing to do with Comcast’s market power and everything to do with the fact that the Internet has changed, both in structure and content. This is simply how the Internet works. I use CloudFront, the same CDN that Vox uses, to serve even a small site like my Bitcoin Volatility Index. CloudFront negotiates payments to Comcast and other ISPs on my and Vox’s behalf. There is nothing unseemly about Netflix making similar payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).


For more reading material on the Netflix/Comcast arrangement, I recommend Dan Rayburn’s posts here, here, and here. Interconnection is a very technical subject, and someone with very specialized expertise like Dan is invaluable in understanding this issue.


Published on May 02, 2014 11:56

April 29, 2014

Defining “Technology”

I spend a lot of time reading books and essays about technology; more specifically, books and essays about technology history and criticism. Yet, I am often struck by how few of the authors of these works even bother defining what they mean by “technology.” I find that frustrating because, if you are going to make an attempt to either study or critique a particular technology or technological practice or development, then you probably should take the time to tell us how broadly or narrowly you are defining the term “technology” or “technological process.”


Photo: David Hartstein

Of course, it’s not easy. “In fact, technology is a word we use all of the time, and ordinarily it seems to work well enough as a shorthand, catch-all sort of word,” notes the always-insightful Michael Sacasas in his essay “Traditions of Technological Criticism.” “That same sometimes useful quality, however, makes it inadequate and counter-productive in situations that call for more precise terminology,” he says.


Quite right, and for a more detailed and critical discussion of how earlier scholars, historians, and intellectuals have defined or thought about the term “technology,” you’ll want to check out Michael’s other recent essay, “What Are We Talking About When We Talk About Technology?” which preceded the one cited above. We don’t always agree on things — in fact, I am quite certain that most of my comparatively amateurish work must make his blood boil at times! — but you won’t find a more thoughtful technology scholar alive today than Michael Sacasas. If you’re serious about studying technology history and criticism, you should follow his blog and check out his book, The Tourist and The Pilgrim: Essays on Life and Technology in the Digital Age, which is a collection of some of his finest essays.


Anyway, for what it’s worth, I figured I would create this post to list some of the more interesting definitions of “technology” that I have uncovered in my own research. I suspect I will add to it in coming months and years, so please feel free to suggest other additions since I would like this to be a useful resource to others.


I figure the easiest thing to do is to just list the definitions by author. There’s no particular order here, although that might change in the future since I could arrange this chronologically and push the inquiry all the way back to how the Greeks thought about the term (the root term “techne,” that is). But for now this collection is a bit random and incorporates mostly modern conceptions of “technology” since the term didn’t really gain traction until relatively recent times.


Also, I’ve not bothered critiquing any particular definition or conception of the term, although that may change in the future, too. (I did, however, go after a few modern tech critics briefly in my recent booklet, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” So, you might want to check that out for more on how I feel, as well as my old essays, “What Does It Mean to ‘Have a Conversation’ about a New Technology?” and, “On the Line between Technology Ethics vs. Technology Policy.”)


So, I’ll begin with two straightforward definitions from the Merriam-Webster and Oxford dictionaries and then bring in the definitions from various historians and critics.



Merriam-Webster Dictionary

Technology (noun):


1) (a): the practical application of knowledge especially in a particular area; (b): a capability given by the practical application of knowledge


2) a manner of accomplishing a task especially using technical processes, methods, or knowledge.


3) the specialized aspects of a particular field of endeavor.



Oxford Dictionary

Technology (noun):


1) The application of scientific knowledge for practical purposes, especially in industry.


2) Machinery and devices developed from scientific knowledge.


3) The branch of knowledge dealing with engineering or applied sciences.


 


Thomas P. Hughes

I have always loved the opening passage from Thomas Hughes’s 2004 book, Human-Built World: How to Think about Technology and Culture:


“Technology is messy and complex. It is difficult to define and to understand. In its variety, it is full of contradictions, laden with human folly, saved by occasional benign deeds, and rich with unintended consequences.” (p. 1) “Defining technology in its complexity,” he continued, “is as difficult as grasping the essence of politics.” (p. 2)


So true! Nonetheless, Hughes went on to offer his own definition of technology as:


“a creativity process involving human ingenuity.” (p. 3)


Interestingly, in another book, American Genesis: A Century of Invention and Technological Enthusiasm, 1870-1970, he offered a somewhat different definition:


“Technology is the effort to organize the world for problem solving so that goods and services can be invented, developed, produced, and used.” (p. 6, 2004 ed., emphasis in original.)


 


W. Brian Arthur

In his 2009 book, The Nature of Technology: What It Is and How It Evolves, W. Brian Arthur sketched out three conceptions of technology.


1) “The first and most basic one is that a technology is a means to fulfill a human purpose. … As a means, a technology may be a method or process or device… Or it may be complicated… Or it may be material… Or it may be nonmaterial. Whichever it is, it is always a means to carry out a human purpose.”


2) “The second definition is a plural one: technology as an assemblage of practices and components.”


3) “I will also allow a third meaning. This is technology as the entire collection of devices and engineering practices available to a culture.” (p. 28, emphasis in original.)


 


Alfred P. Sloan Foundation / Richard Rhodes

In his 1999 book, Visions of Technology: A Century of Vital Debate About Machines, Systems, and the Human World, Pulitzer Prize-winning historian Richard Rhodes assembled a wonderful collection of essays about technology that spanned the entire 20th century. It’s a terrific volume to have on your bookshelf if you want a quick overview of how over a hundred leading scholars, critics, historians, scientists, and authors thought about technology and technological advances.


The collection kicked off with a brief preface from the Alfred P. Sloan Foundation (no specific Foundation author was listed) that included one of the most succinct definitions of the term you’ll ever read:


“Technology is the application of science, engineering and industrial organization to create a human-built world.” (p. 19)


Just a few pages later, however, Rhodes notes that this definition is probably too simplistic:


“Ask a friend today to define technology and you might hear words like ‘machines,’ ‘engineering,’ ‘science.’ Most of us aren’t even sure where science leaves off and technology begins. Neither are the experts.”


Again, so true!


 


Joel Mokyr

Lever of Riches: Technological Creativity and Economic Progress (1990) by Joel Mokyr is one of the most readable and enjoyable histories of technology you’ll ever come across. I highly recommend it. [My thanks to my friend William Rinehart for bringing the book to my attention.] In Lever of Riches, Mokyr defines “technological progress” as follows:


“By technological progress I mean any change in the application of information to the production process in such a way as to increase efficiency, resulting either in the production of a given output with fewer resources (i.e., lower costs), or the production of better or new products.” (p. 6)


 


Edwin Mansfield

You’ll find definitions of both “technology” and “technological change” in Edwin Mansfield’s Technological Change: An Introduction to a Vital Area of Modern Economics (1968, 1971):


“Technology is society’s pool of knowledge regarding the industrial arts. It consists of knowledge used by industry regarding the principles of physical and social phenomena… knowledge regarding the application of these principles to production… and knowledge regarding the day-to-day operations of production…”


“Technological change is the advance of technology, such advance often taking the form of new methods of producing existing products, new designs which enable the production of products with important new characteristics, and new techniques of organization, marketing, and management.” (p. 9-10)


 


Read Bain

In his December 1937 essay in Vol. 2, Issue No. 6 of the American Sociological Review, “Technology and State Government,” Read Bain said:


 “technology includes all tools, machines, utensils, weapons, instruments, housing, clothing, communicating and transporting devices and the skills by which we produce and use them.” (p. 860)


[My thanks to Jasmine McNealy for bringing this one to my attention.]


 


David M. Kaplan

Found this one thanks to Sacasas. It’s from David M. Kaplan, Ricoeur’s Critical Theory (2003), which I have not yet had the chance to read:


“Technologies are best seen as systems that combine technique and activities with implements and artifacts, within a social context of organization in which the technologies are developed, employed, and administered. They alter patterns of human activity and institutions by making worlds that shape our culture and our environment. If technology consists of not only tools, implements, and artifacts, but also whole networks of social relations that structure, limit, and enable social life, then we can say that a circle exists between humanity and technology, each shaping and affecting the other. Technologies are fashioned to reflect and extend human interests, activities, and social arrangements, which are, in turn, conditioned, structured, and transformed by technological systems.”


I liked Michael’s comment on this beefy definition: “This definitional bloat is a symptom of the technological complexity of modern societies. It is also a consequence of our growing awareness of the significance of what we make.”


 


Jacques Ellul

Jacques Ellul, a French theologian and sociologist, penned a massive, 440-plus page work of technological criticism, La Technique ou L’enjeu du Siècle (1954), which was later translated into English as The Technological Society (New York: Vintage Books, 1964). In setting forth his critique of modern technological society, he used the term “technique” repeatedly and contrasted it with “technology.” He defined technique as follows:


“The term technique, as I use it, does not mean machines, technology, or this or that procedure for attaining an end. In our technological society, technique is the totality of methods rationally arrived at and having absolute efficiency (for a given state of development) in every field of human activity. […]


Technique is not an isolated fact in society (as the term technology would lead us to believe) but is related to every factor in the life of modern man; it affects social facts as well as all others. Thus technique itself is a sociological phenomenon…” (p. xxvi, emphasis in original.)


 


Bernard Stiegler

In La technique et le temps, 1: La faute d’Épiméthée, translated as Technics and Time, 1: The Fault of Epimetheus (1998), French philosopher Bernard Stiegler defines technology as:


“the pursuit of life by means other than life”


[I found that one here.]


 



Again, please feel free to suggest additions to this compendium that future students and scholars might find useful. I hope that this can become a resource to them.


Additional Reading:



Eric Schatzberg – “Technik Comes to America: Changing Meanings of Technology,” Technology and Culture (2006)

 


Published on April 29, 2014 06:53

April 27, 2014

New Essays about Permissionless Innovation & Why It Matters

This past week I posted two new essays related to my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” Just thought I would post quick links here.


First, my old colleague Dan Rothschild was kind enough to ask me to contribute a post to the R Street Blog entitled, “Bucking the ‘Mother, May I?’ Mentality.” In it, I offered this definition and defense of permissionless innovation as a policy norm:


Permissionless innovation is about the creativity of the human mind to run wild in its inherent curiosity and inventiveness, even when it disrupts certain cultural norms or economic business models. It is that unhindered freedom to experiment that ushered in many of the remarkable technological advances of modern times. In particular, all the digital devices, systems and networks that we now take for granted came about because innovators were at liberty to let their minds run wild.


Steve Jobs and Apple didn’t need a permit to produce the first iPhone. Jeff Bezos and Amazon didn’t need to ask anyone for the right to create a massive online marketplace. When Sergey Brin and Larry Page wanted to release Google’s innovative search engine into the wild, they didn’t need to get a license first. And Mark Zuckerberg never had to get anyone’s blessing to launch Facebook or let people freely create their own profile pages.


All of these digital tools and services were creatively disruptive technologies that altered the fortunes of existing companies and challenged various social norms. Luckily, however, nothing preemptively stopped that innovation from happening. Today, the world is better off because of it, with more and better information choices than ever before.


I also posted an essay over on Medium entitled, “Why Permissionless Innovation Matters.” It’s a longer essay that seeks to answer the question: Why does economic growth occur in some societies and not in others? I build on comments that venture capitalist Fred Wilson of Union Square Ventures made during recent testimony: “If you look at the countries around the world where the most innovation happens, you will see a very high, I would argue a direct, correlation between innovation and freedom. They are two sides of the same coin.” I continue on to argue in my essay:


that’s true in both a narrow and broad sense. It’s true in a narrow sense that innovation is tightly correlated with the general freedom to experiment, fail, and learn from it. More broadly, that general freedom to experiment and innovate is highly correlated with human freedom in the aggregate.


Indeed, I argue in my book that we can link an embrace of dynamism and permissionless innovation to the expansion of cultural and economic freedom throughout history. In other words, there is a symbiotic relationship between freedom and progress. In his book, History of the Idea of Progress, Robert Nisbet wrote of those who adhere to “the belief that freedom is necessary to progress, and that the goal of progress, from most distant past to the remote future, is ever-ascending realization of freedom.” That’s generally the ethos that drives the dynamist vision and that also explains why getting the policy incentives right matters so much. Freedom — including the general freedom to engage in technological tinkering, endless experimentation, and acts of social and economic entrepreneurialism — is essential to achieving long-term progress and prosperity.


I also explain how the United States generally got policy right for the Internet and the digital economy in the 1990s by embracing this vision and enshrining it into law in various ways. I conclude by noting that:


If we hope to encourage the continued development of even more “technologies of freedom,” and enjoy the many benefits they provide, we must make sure that, to the maximum extent possible, the default position toward new forms of technological innovation remains “innovation allowed.” Permissionless innovation should, as a general rule, trump precautionary principle thinking. The burden of proof rests on those who favor precautionary policy prescriptions to explain why ongoing experimentation with new ways of doing things should be prevented preemptively.


Again, read the entire thing over at Medium. Also, over at Circle ID this week, Konstantinos Komaitis published a related essay, “Permissionless Innovation: Why It Matters,” in which he argued that “Permissionless innovation is key to the Internet’s continued development. We should preserve it and not question it.” He was kind enough to quote my book in that essay. I encourage you to check out his piece.


Published on April 27, 2014 15:11

April 25, 2014

NETmundial wrap-up

NETmundial is over; here’s how it went down. Previous installments (1, 2, 3).



The final output of the meeting is available here. It is being referred to as the Multistakeholder Statement of São Paulo. I think the name is designed to put the document in contention with the Tunis Agenda. Insofar as it displaces the Tunis Agenda, that is fine with me.
Most of the civil society participants are not happy. Contrary to my prediction, and in a terrible PR move, the US government (among others) weakened the language on surveillance. A statement on net neutrality also did not make it into the final draft. These were the top two issues for most civil society participants.
I of course oppose US surveillance, but I am not too upset about the watered down language since I don’t see this as an Internet governance issue. Also, unlike virtually all of the civil society people, I oppose net neutrality laws, so I’m pleased with that aspect of the document.
What bothers me most in the final output are two statements that seem to have been snuck in at the last moment by the drafters without approval from others. These are real shenanigans. The first is on multistakeholderism. The Tunis language said that stakeholders should participate according to their “respective roles and responsibilities.” The original draft of the NETmundial document used the same language, but participants agreed to remove it, indicating that all stakeholders should participate equally and that no stakeholders were more special than others. Somehow the final document contained the sentence, “The respective roles and responsibilities of stakeholders should be interpreted in a flexible manner with reference to the issue under discussion.” I have no idea how it got in there. I was in the room when the final draft was approved, and that text was not announced.
Similarly, language in the “roadmap” portion of the document now refers to non-state actors in the context of surveillance. “Collection and processing of personal data by state and non-state actors should be conducted in accordance with international human rights law.” The addition of non-state actors was also done without consulting anyone in the final drafting room.
Aside from the surveillance issue, the other big mistake by the US government was their demand to weaken the provision on intermediary liability. As I understand it, their argument was that they didn’t want to consider safe harbor for intermediaries without a concomitant recognition of the role of intermediaries in self-policing, as is done through the notice-and-takedown process in the US. I would have preferred a strong, free-standing statement on intermediary liability, but instead, the text was replaced with OECD language that the US had previously agreed to.
Overall, the meeting was highly imperfect—it was non-transparent, disorganized, inefficient in its use of time, and so on. I don’t think it was a rousing success, but it was nevertheless successful enough that the organizers were able to claim success, which I think was their original goal. Other than the two last-minute additions that I saw (I wonder if there are others), nothing in the document gives me major heartburn, so maybe that is actually a success. It will be interesting to see if the São Paulo Statement is cited in other fora, and if they decide to repeat this process again next year.

Published on April 25, 2014 05:58
