Adam Thierer's Blog
February 16, 2016
Dan Brenner: An Appreciation
I was shocked and saddened to hear tonight that L.A. Superior Court Judge Dan Brenner was struck and killed in Los Angeles yesterday. I am just sick about it. He was a great man and good friend.
Dan was an outstanding legal mind who, before moving back out to California to become a judge in 2012, made a big impact here in DC while serving as a legal advisor to FCC Chairman Mark Fowler in the 1980s. He went on to have a distinguished career as head of legal affairs at the National Cable & Telecommunications Association. He also served as an adjunct law professor at major law schools and wrote important essays and textbooks on media and broadband law.
More than all that, Dan Brenner was a dear friend to a great many people, and he was always the guy with the biggest smile on his face in any room he walked into. Dan had an absolutely infectious spirit; his amazing wit and wisdom inspired everyone around him. I never heard a single person say a bad word about Dan Brenner. Even people on the opposite side of any negotiating table from him respected and admired him. That’s pretty damn rare in a town like Washington, DC.
And Dan was a great friend to me. We met in the early 1990s when I was still just a pup in telecom and media circles. I was wet behind the ears, but Dan always had time for my silly questions. He helped bring me up to speed on a great many issues. But more than that, he always inquired about how I was doing personally. He took an active interest in getting to know people at more than just a professional level.
We became good friends and saw each other regularly after that, so much so that Dan once asked me to read the first draft of an autobiography he began writing! It was filled with hilarious stories about a life in the television policy world. True story: Dan began his book with a story about the amazing (and sometimes scary) power of television. He noted that one of his first memories in life was watching an episode of Adventures of Superman in the 1950s and then also seeing a commercial for Clorox during the show. He said at that point for some reason he thought he might be able to become Superman and fly if he took a swig of Clorox from his mom’s laundry room! He did it and ended up in the emergency room. His punch line to the story: “I should have known the broadcasters had it in for me from a young age!” That was the kind of sense of humor Dan had. He always had a line. He made the sometimes stodgy world of telecom policy damn fun.
I sought out Dan’s advice on a great many things through the years and he was always a comforting voice of patience and wisdom. Much later in our friendship, after he had left NCTA and moved on to be private counsel at Hogan Lovells, I called Dan with a big problem in 2010. The organization I was running at the time (Progress & Freedom Foundation) was failing and the writing was on the wall that our days were numbered. I was in the absolutely miserable position of having to shut down an organization I cared about deeply, but I wanted to do so with dignity. To do that, I needed some legal help to handle the transition and take care of all the organization’s remaining obligations. Dan offered to do it all pro bono. I didn’t even have to ask. He just came right out and said he was going to do it for me and the few remaining staff and board members who stuck around to help me out. He took every call we made to him and helped handle even the smallest details as I steered that ship to the bottom of the ocean.
Dan’s advice to me at that time was indispensable. But the legal advice wasn’t the most important thing he offered me. He knew I was miserable. It was an absolutely horrible moment in my life and Dan could see it. He took an active role in cheering me up and reminding me that brighter days would be ahead. He insisted that, after the whole ordeal was over, I should keep a small amount of the remaining money to pay myself a final check (I didn’t get paid at all for the final months on the job) and take a long vacation. I didn’t do it, but he kept insisting how important it was to take care of myself and keep my spirits up.
After a few awful months of dealing with the building landlord and various bill collectors, my life was in shambles, but Dan stuck with me through to the end and helped keep me sane. I took him out for a huge steak lunch one day after it was all finally over and told him that never in my life had someone been so gracious and generous with their time and attention as he was during that extraordinarily difficult period in my life. He just flashed his classic big smile and said, “Ah, it was really nothin’, Adam.”
But it was. But it was. It meant everything to me at the time and I consider myself blessed to have had a friend like Dan Brenner who was always there for me and others like that.
You will be missed, my friend.
February 8, 2016
20 Years Later: Congress Didn’t See the Internet Coming
This article originally appeared at techfreedom.org.
Twenty years ago today, President Clinton signed the Telecommunications Act of 1996. John Podesta, his chief of staff, immediately saw the problem: “Aside from hooking up schools and libraries, and with the rather major exception of censorship, Congress simply legislated as if the Net were not there.”
Here’s our take on what Congress got right (some key things), what it got wrong (most things), and what an update to the key laws that regulate the Internet should look like. The short version is:
End FCC censorship of “indecency”
Focus on promoting competition
Focus regulation on consumers rather than arbitrary technological silos or political whim
Get the FCC out of the business of helping government surveillance
Trying, and Failing, to Censor the Net
Good: The Act is most famous for Section 230, which made Facebook and Twitter possible. Without 230, such platforms would have been held liable for the speech of their users — just as newspapers are liable for letters to the editor. Trying to screen user content would simply have been impossible. Sharing user-generated content (UGC) on sites like YouTube and social networks would’ve been tightly controlled or simply might never have taken off. Without Section 230, we might all still be locked in to AOL!
Bad: Still, the Act was very much driven by a technopanic over “protecting the children.”
Internet Censorship. 230 was married to a draconian crackdown on Internet indecency. Aimed at keeping pornography away from minors, the rest of the Communications Decency Act — rolled into the Telecom Act — would have required age verification of all users, not just on porn sites, but probably any UGC site, too. Fortunately, the Supreme Court struck this down as a ban on anonymous speech online.
Broadcast Censorship. Unfortunately, the FCC is still in the censorship business for traditional broadcasting. The 1996 Act did nothing to check the agency’s broad powers to decide how long a glimpse of a butt or a nipple is too much for Americans’ sensitive eyes.
Unleashing Competition—Slowly
Good: Congress unleashed over $1.3 trillion in private broadband investment, pitting telephone companies and cable companies against each other in a race to serve consumers — for voice, video and broadband service.
Legalizing Telco Video. In 1984, Congress had (mostly) prohibited telcos from providing video service — largely on the assumption that telephone service was a monopoly. Congress reversed that, which eventually meant telcos had the incentive to invest in networks that could carry video — and super-fast broadband.
Breaking Local Monopolies. Congress also barred localities from blocking new entry by denying a video “franchise.”
Encouraging Cable Investment. The 1992 Cable Act had briefly imposed price regulation on basic cable packages. This proved so disastrous that the Democratic FCC retreated — but only after killing a cycle of investment and upgrades, delaying cable modem service by years. In 1996, Congress finally put a stake through the heart of such rate regulation, removing investment-killing uncertainty.
Bad: While the Act laid the foundations for what became facilities-based network competition, its immediate focus was pathetically short-sighted: trying to engineer artificial competition for telephone service.
Unbundling Mandates. The Act created an elaborate set of requirements that telephone companies “unbundle” parts of their networks so that resellers could use them, at sweetheart prices, to provide “competitive” service. The FCC then spent the next nine years fighting over how to set these rates.
Failure of Vision. Meanwhile, competing networks provided fierce competition: cable providers gained over half the telephony market with a VoIP service, and 47% of customers have simply cut the cord — switching entirely to wireless. Though the FCC refuses to recognize it, broadband is becoming more competitive, too: 2014 saw telcos invest in massive upgrades, bringing 25–75 Mbps speeds to more than half the country by pushing fiber closer to homes. The cable-telco horse race is fiercer than ever — and Google Fiber has expanded its deployment of a third pipe to the home, while cable companies are upgrading to provide gigabit-plus speeds and wireless broadband has become a real alternative for rural America.
Delaying Fiber. The greatest cost of the FCC’s unbundling shenanigans was delaying the major investments telcos needed to keep up with cable. Not until 2003 did the FCC make clear that it would not impose unbundling mandates on fiber — which pushed Verizon to begin planning its FiOS fiber-to-the-home network. The other crucial step came in 2006, when the Commission finally clamped down on localities that demanded lavish ransoms for allowing the deployment of new networks, a practice that had stifled competition.
Regulation
Good: With the notable exception of unbundling mandates, the Act was broadly deregulatory.
General thrust. Congress could hardly have been more clear: “It is the policy of the United States… to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”
Ongoing Review & Deregulation. Congress gave the FCC broad discretion to ratchet down regulation to promote competition.
Bad: The Clinton Administration realized that technological change was rapidly erasing the lines separating different markets, and had proposed a more technology-neutral approach in 1993. But Congress rejected that approach. The Act continued to regulate by dividing technologies into silos: broadcasting (Title III), telephone (Title II) and cable (Title VI). Title I became a catch-all for everything else. Crucially, Congress didn’t draw a clear line between Title I and Title II, setting in motion a high-stakes fight that continues today.
Away from Regulatory Silos. Bill Kennard, Clinton’s FCC Chairman, quickly saw just how obsolete the Act was. His 1999 Strategic Plan remains a roadmap for FCC reform.
Away from Title II. Kennard also indicated that he favored placing all broadband in Title I — mainly because he understood that Title II was designed for a monopoly and would tend to perpetuate it. Vibrant competition between telcos and cable companies could happen only under Title I. But it was the Bush FCC that made this official, classifying cable modem as Title I in 2002 and telco DSL in 2005.
Net Neutrality Confusion. The FCC spent a decade trying to figure out how to regulate net neutrality, losing in court twice, distracting the agency from higher priorities — like promoting broadband deployment and adoption — and making telecom policy, once an area of non-partisan pragmatism, a fiercely partisan ideological cesspool.
Back to Title II. In 2015, the FCC reclassified broadband under Title II — not because it didn’t have other legal options for regulating net neutrality, but because President Obama said it should. He made the issue part of his re-assertion of authority after Democrats lost the 2014 midterm elections. Net neutrality and Title II became synonymous, even though they have little to do with each other. Now, the FCC’s back in court for the third time.
Inventing a New Act. Unless the courts stop it, the FCC will exploit the ambiguities of the ‘96 Act to essentially write a new Act out of thin air: regulating way up with Title II, using its forbearance powers to temporarily suspend politically toxic parts of the Act (like unbundling), and inventing wholly new rules that give the FCC maximum discretion—while claiming the power to do anything that somehow promotes broadband. The FCC calls this all “modernization” but it’s really a staggering power grab that allows the FCC to control the Internet in the murkiest way possible.
Bottom line: The 1996 Act gives the FCC broad authority to regulate in the “public interest,” without effectively requiring the FCC to gauge the competitive effects of what it does. The agency’s stuck in a kind of Groundhog Day of over-regulation, constantly over-doing it without ever learning from its mistakes.
Time for a #CommActUpdate
Censorship. The FCC continues to censor dirty words and even brief glimpses of skin on television because of a 1978 decision that assumes parents are helpless to control their kids’ media consumption. Today, parental control tools make this assumption obsolete: parents can easily block programming marked as inappropriate. Congress should require the FCC to focus on outright obscenity — and let parents choose for themselves.
Competition. If the 1996 Act served to allow two competing networks, a rewrite should focus on driving even fiercer cable-telco competition, encouraging Google Fiber and others to build a third pipe to the home, and making wireless an even stronger competitor.
Title II. If you wanted to protect cable companies from competition, you couldn’t find a better way to do it than Title II. Closing that Pandora’s Box forever will encourage companies like Google Fiber to enter the market. But Congress needs to finish what the 1996 Act started: it’s not enough to stop localities from denying franchises for video service (and thus broadband, too).
Local Barriers. Congress should crack down on the moronic local practices that have made deployment of new networks prohibitive — learning from the success of Google Fiber cities, which have cut red tape, lowered fees and generally gotten out of the way. Pending bipartisan legislation would make these changes for federal assets, and require federal highway projects to include Dig Once conduits to make fiber deployment easier. That’s particularly helpful for rural areas, which the FCC has ignored, but making deployment easier inside cities will require making municipal rights of way easier to use. Instead of rushing to build their own broadband networks, localities should first at least try to stimulate private deployment.
Regulation. Technological silos made little sense in 1993. Today, they’re completely obsolete.
Unchecked Discretion. The FCC’s right about one thing: rigid rules don’t make sense either, given how fast technology is changing. But giving the FCC sweeping discretion is even more dangerous: it makes regulating the Internet inherently political, subject to presidential whim and highly sensitive to elections.
The Fix. There’s a simple solution: write clear standards that let the FCC work across all communications technologies, but that require the FCC to prove that its tinkering actually makes consumers better off. As long as the FCC can do whatever it claims is in the “public interest,” the Internet will never be safe.
Rethinking the FCC. Indeed, Congress should seriously consider breaking up the FCC, transferring its consumer protection functions to the Federal Trade Commission and its spectrum functions to the Commerce Department.
Encryption. Since 1994, the FCC has had the power to require “telecommunications services” to be wiretap-ready — and the discretion to decide how to interpret that term. Today, the FBI is pushing for a ban on end-to-end encryption — so law enforcement can get backdoor access into services like Snapchat. Unfortunately, foreign governments and malicious hackers could use those backdoors, too. Congress is stalling, but the FCC could give law enforcement exactly what it wants — using the same legal arguments it used to reclassify mobile broadband under Title II. Law enforcement is probably already using this possibility to pressure Internet companies against adopting secure encryption. Congress should stop the FCC from requiring back doors.
February 3, 2016
Congress Must Protect Online Speech from Frivolous Lawsuits
This article originally appeared at techfreedom.org.
Today, TechFreedom and a coalition of free-market groups urged Congress to protect Americans against malicious or frivolous litigation that threatens to stifle free speech and undermine the digital economy. In a letter to the House Judiciary Committee, the coalition called for passage of H.R. 2304, the SPEAK FREE Act, which would give defendants across the nation access to a special motion to dismiss SLAPPs (strategic lawsuits against public participation). The bill would also empower courts to shift fees, so that defendants who prevail on an anti-SLAPP motion would not have to face legal costs.
The coalition letter reads:
Each year, a multitude of Americans fall victim to lawsuits called SLAPPs (strategic lawsuits against public participation) that are aimed at unfairly intimidating and silencing them. These kinds of lawsuits are highly effective, despite being without merit, since the legal costs, invasion of privacy, and hassle associated with fighting them is rarely considered a worthwhile use of individuals’ time.
“SLAPPs threaten online free speech and the business models that thrive on consumer reviews,” said Tom Struble, Policy Counsel at TechFreedom. “Without an easy judicial mechanism to dismiss groundless lawsuits and shift fees, consumers and small businesses often have no choice but to relent to the demands of companies with deeper pockets. 28 states have already adopted anti-SLAPP standards — it’s time for Congress to do the same.”
###
We can be reached for comment at media@techfreedom.org.
February 2, 2016
The FCC Targets Cable Set-Top Boxes—Why Now?
With great fanfare, FCC Chairman Thomas Wheeler is calling for sweeping changes to the way cable TV set-top boxes work.
In an essay published Jan. 27 by Re/Code, Wheeler began by citing the high prices consumers pay for set-top box rentals and bemoaning the fact that alternatives are not easily available. Yet for all the talk and tweets about pricing and consumer lock-in, Wheeler did not propose an inquiry into set-top box profit margins, nor into whether the supply chain is unduly controlled by the cable companies. Neither did Wheeler propose an investigation into the complaints consumers have made about the hassles around CableCards, which under FCC mandate cable companies must provide to customers who buy their own set-top boxes.
In fact, he dropped the pricing issue halfway through and began discussing access to streaming content:
To receive streaming Internet video, it is necessary to have a smart TV, or to watch it on a tablet or laptop computer that, similarly, do not have access to the channels and content that pay-TV subscribers pay for. The result is multiple devices and controllers, constrained program choice and higher costs.
This statement seems intentionally misleading. Roku, Apple TV and Amazon Fire sell boxes that connect to TVs and allow a huge amount of streaming content to play. True, the devices are still independent of the set-top cable box but there is no evidence that this lack of integration is a competitive barrier.
A new generation of devices, called media home gateways (MHGs), is poised to provide this integration, as well as manage other media-based cloud services on behalf of consumers. This is where Wheeler’s proposal should be worrisome. He writes:
The new rules would create a framework for providing device manufacturers, software developers and others the information they need to introduce innovative new technologies, while at the same time maintaining strong security, copyright and consumer protections.
This sounds much more like a plan to dictate operating systems, user interfaces and other hardware and software standards for equipment that until now has been unregulated. Wheeler gives no explanation as to how his proposal will lead to lower prices or development of a direct-to-consumer sales channel.
[M]y proposal will pave the way for a competitive marketplace for alternate navigation devices, and could even end the need for multiple remote controls, allowing you to use one for all of the video sources you use.
What Wheeler really wants is FCC management of the transition from today’s set-top boxes to the media home gateways (MHGs) just beginning to appear on the market—a foray into customer premises equipment regulation unseen since the 1960s.
For good reason, the words “media home gateway” never appear in Wheeler’s Re/Code article. By avoiding mention of MHGs, he can play his “lack of competition” card, as he did in Thursday’s press briefing on his proposal.
There’s more than a whiff of misdirection here. Set-top boxes are a maturing market. An October 2015 TechNavio report forecasts the shipment volume of the global set-top box market to decline at a compound annual rate of 1.34% over the 2014-2019 period. By revenue, the market is expected to decline at a compound annual rate of 1.36% during the forecast period. When consumers “cut the cable cord,” as some 21 million have, it’s set-top boxes that get unplugged.
At the same time, TechNavio forecasts the global MHG market to grow at a compound annual rate of 7.82% over the same period. Elsewhere, SNL Kagan’s Multimedia Research Group forecasts MHG shipments will exceed 24 million in 2017, up from 7.7 million in 2012. The long list of MHG manufacturers includes ActionTec, Arris, Ceva, Huawei, Humax, Samsung and Technicolor.
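As a rough aside, compounding those quoted rates over the five-year forecast window shows what they imply in cumulative terms. The snippet below is back-of-the-envelope arithmetic only, using the TechNavio figures cited above as inputs.

```python
# Back-of-the-envelope arithmetic only: compound the forecast CAGRs quoted
# above over the 2014-2019 window (five annual periods) to see the implied
# cumulative change. The inputs are the TechNavio figures cited in the text.

def cumulative_change_pct(cagr: float, years: int = 5) -> float:
    """Total percentage change implied by a compound annual growth rate."""
    return ((1 + cagr) ** years - 1) * 100

print(f"Set-top box shipments: {cumulative_change_pct(-0.0134):+.1f}%")  # about -6.5%
print(f"Set-top box revenue:   {cumulative_change_pct(-0.0136):+.1f}%")  # about -6.6%
print(f"Media home gateways:   {cumulative_change_pct(0.0782):+.1f}%")   # about +45.7%
```

In other words, the forecasts describe a set-top box market shrinking by roughly 6 to 7 percent in total while the gateway market grows by nearly half.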
MHGs are the “alternative navigation devices” Wheeler coyly refers to in his Re/Code essay. These devices will replace the set-top boxes in use today, but because of their ability to handle Internet streaming, they are likely to be available through more than one channel. That’s why the only way to view Wheeler’s call to “unlock the set-top box” is as a pre-emptive move to extend the FCC’s regulation into the delivery of streaming media.
To be sure, if the FCC mandates integration of streaming options into cable-provided MHGs, streaming companies would gain a stronger foothold in consumers’ homes, which would then allow them to share their apps, gather data on users, and, perhaps most lucratively of all, control the interface on which channels are displayed, as noted by The Verge’s Ashley Carman.
Yet the streaming companies that would appear to benefit most from this proposal have thus far been quiet. Perhaps that’s because Wheeler has made no secret that he believes Apple TV, Amazon Fire and Roku are multichannel video programming distributors (MVPDs), FCC-speak for “local cable companies.” Is his “unlock the box” plan precisely the opposite of a favor to them? Is it an effort to fold streaming aggregators into the existing cable TV regulatory platform, with all its myriad rules, regulations, legal obligations and—dare we say it—fees and surcharges? You might roll your eyes, but this is the only analysis in which the proposal, which focuses on “device manufacturers, software developers and others,” makes sense.
But does the FCC have the right to require cable companies to share customer data acquired through the infrastructure and software they built and own? It’s yet another iteration of the old unbundled network elements model that is consistently shot down by the courts yet one that the FCC can’t seem to get past.
Arcane details aside, the FCC should not be involved in directing evolution paths, operating software or other product features. It creates too much opportunity for lobbying and rent-seeking. History shows that when the government gets granularly involved in promoting technology direction, costs go up and innovation suffers as capital is diverted into politically-favored choices where it ends up wasted. The debacles with the Chevy Volt and Solyndra are just two recent examples of the dangers inherent when bureaucrats try to pick winners, or give a subset of companies in one industry an assist at the expense of others.
This post originally appeared Feb. 1, 2016 on the R Street Institute official blog.
January 21, 2016
Congress Must Pass Permanent Ban on Internet Access Taxes
This article originally appeared at techfreedom.org.
Today, TechFreedom joined a coalition of over 40 organizations from across the country in urging Senate leadership to permanently ban taxes on Internet access. In a letter to Majority Leader Mitch McConnell and Minority Leader Harry Reid, the coalition voiced support for a permanent extension of the Internet Tax Freedom Act (ITFA), which bans states and localities from imposing Internet access taxes and discriminatory taxes on electronic commerce. The bill is currently embedded in H.R. 644, the Trade Facilitation and Enforcement Act.
The letter reads:
After decades of progress in connecting more Americans to the Internet, the lack of a permanent ban on Internet access taxes could reverse this progress. Numerous studies continue to show that cost remains an obstacle to Internet access and, if taxes on the Internet go up, even fewer people will be able to afford to go online. This would impede our nation’s long held goal of universal Internet access.
“Americans’ broadband bills shouldn’t be used as bargaining chips by Senators who want to impose online sales taxes,” said Tom Struble, Policy Counsel at TechFreedom. “For 17 years, the Internet access tax ban has helped encourage broadband adoption and investment. If Senators want an online sales tax, then pass it on the merits — but handcuffing a broadband tax with sales tax is irresponsible. Consumers are already facing the prospect of higher bills, as the FCC is likely to soon impose universal service fees on broadband as part of its Title II regime imposed in the name of ‘net neutrality.’ Let’s not make that problem worse. The Senate should act quickly to end the uncertainty and pass permanent, Internet tax freedom.”
Sling and Cable Cutting in 2016: Is the Technology There Yet?
People are excited about online TV getting big in 2016. Alon Maor of Qwilt predicts in Multichannel News that this will be “the year of the skinny bundle.” Wired echoes that sentiment. The Wall Street Journal’s Geoffrey A. Fowler said, “it’s no longer the technology that holds back cable cutting–it’s the lawyers.”
Well, I’m here to say lawyers can’t take all the blame. In my experience, it’s the technology, too. Part of the problem is that most discussion about the future of online TV and cable cutting fails to distinguish streaming video-on-demand (SVOD) from streaming linear TV (“linear” means continuous pre-programmed and live “channels”, often with commercials, much like traditional cable).
SVOD includes Netflix, HBO Go, and Hulu. Yes, SVOD technology is very good. The major SVOD players spend millions on networks and caching so that their (static) content is as close to consumers as possible.
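To see why that caching investment pays off, consider a toy sketch (an illustration only, not any provider's actual architecture): because a video-on-demand segment never changes, the first viewer's request pulls it into an edge cache and every later request is served locally, while near-live segments keep changing and never build up that hit rate.

```python
# Toy illustration only, not a real CDN. A VOD title's segments never change,
# so after one viewer pulls a segment from the origin, later viewers hit the
# local cache. Live segments are created in real time and replaced every few
# seconds, so the cache rarely gets to reuse them.

from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity: int = 10_000):
        self.store = OrderedDict()      # segment_id -> bytes, in LRU order
        self.capacity = capacity
        self.hits = self.misses = 0

    def get(self, segment_id: str, fetch_from_origin) -> bytes:
        if segment_id in self.store:
            self.store.move_to_end(segment_id)     # refresh LRU position
            self.hits += 1
            return self.store[segment_id]
        self.misses += 1                           # long haul back to the origin
        data = fetch_from_origin(segment_id)
        self.store[segment_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)         # evict least recently used
        return data

cache = EdgeCache()
origin = lambda segment_id: b"segment bytes"       # stand-in for the origin server

# 1,000 viewers request the same (static) VOD segment: one miss, then 999 hits.
for _ in range(1000):
    cache.get("episode-42/segment-0001", origin)

# A live channel keeps minting new segments, so requests rarely repeat before
# a segment is obsolete: essentially all misses.
for tick in range(1000):
    cache.get(f"live-game/segment-{tick:04d}", origin)

print(f"hits={cache.hits}, misses={cache.misses}")  # hits=999, misses=1001
```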
But streaming linear TV–like Sling and live sports–cannot be cached like static content and appears to have kinks to iron out before mass-market adoption. Read online video analysts and you realize that streaming linear and live TV online is an entirely different animal from SVOD. In August, Ben Popper at the Verge had a fascinating longform piece about the MLB’s streaming TV operation, Major League Baseball Advanced Media (BAM), the industry leader in live streaming video. (HBO hired BAM for streaming the latest Game of Thrones season after HBO Go’s in-house streaming offering suffered from outages.)
Linear online TV is hard! As one BAM executive explains vividly,
What people forget is that the internet, as a technology was never designed to do something like this–deliver flawless video simultaneously to millions of people. I liken it to trying to live on Mercury. The planet is completely inhospitable. Every day all you’re doing is [fighting] a battle for survival in a place that really does not want you.
So I say this understanding the significant technical difficulties they face: in my experience, streaming linear TV is not ready for prime-time yet. (If I could find information about how firms do it, I would also distinguish pre-recorded linear TV–like Sling’s HGTV channel–from live online TV, because firms likely use different network topologies. Another time, perhaps.) Perhaps I’m an outlier, but media reports about Sling outages suggest that I’m not. I used Sling TV for the past few months and had high hopes but I just dropped my subscription. My test for acceptable streaming quality is: “Would I invite friends over to watch something using this streaming service?” Most SVOD services pass that test. Through caching and streaming protocols, SVOD operators can assure pretty consistent streaming even during times of moderate congestion.
Since linear TV typically has programming that is transmitted (nearly) live, operators can’t really do the distributed caching of content that makes SVOD function well. As a result, streaming linear TV services like Sling TV and WatchESPN (a Sling subscription gives you access to ESPN’s online sports portal) currently do not pass my test. Frankly, the streaming quality varied from excellent to unwatchable. Punching into the app and casting it to my TV, I was never sure which Sling would show up that day: Good Sling or Bad Sling. On good days, the Monday Night Football game looked better than cable TV. Other days, the stream would, for instance, work well and then fail at every commercial break (perhaps the commercials were stored on another, overwhelmed server?).
Now, it’s impossible to know for sure where the network bottleneck is and why a video stream is stuttering. To be more certain that the problem was with Sling TV or WatchESPN and not, say, my Chromecast, my WiFi, or my ISP, I ran an unscientific test every time Sling or WatchESPN had several severe streaming problems in a short period of time. I would close down the Sling app, open up Netflix, and start streaming Netflix via my Chromecast. Without exception for the past three months, Netflix loaded quickly and streamed well. Keeping everything the same except the source of content suggests (but doesn’t prove) that the problem was not a local network or device problem.
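For what it's worth, that informal comparison can be written down as a crude script: fetch a test object from two different content hosts over the same home connection and compare throughput. The URLs below are placeholders invented for illustration, not real Sling or Netflix endpoints; the point is the shape of the test, not the specific servers.

```python
# Crude A/B throughput check over the same home network. The URLs are
# placeholders, not real streaming endpoints; swap in whatever test objects
# you can legitimately fetch from the services you want to compare.

import time
import urllib.request

def measure_throughput_mbps(url: str, max_bytes: int = 5_000_000) -> float:
    """Download up to max_bytes from url and return throughput in Mbit/s."""
    start = time.monotonic()
    received = 0
    with urllib.request.urlopen(url, timeout=10) as resp:
        while received < max_bytes:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            received += len(chunk)
    elapsed = time.monotonic() - start
    return (received * 8 / 1_000_000) / elapsed

sources = {
    "linear-tv-source": "https://linear-cdn.example/test-segment.ts",   # placeholder
    "svod-source":      "https://svod-cdn.example/test-segment.mp4",    # placeholder
}

for name, url in sources.items():
    try:
        print(f"{name}: {measure_throughput_mbps(url):.1f} Mbit/s")
    except OSError as err:
        print(f"{name}: failed ({err})")

# If the SVOD source consistently measures fine while the linear source
# struggles, the bottleneck is probably upstream of the home network, which
# is the same (non-scientific) inference described above.
```

Holding the device, WiFi and ISP constant while swapping only the content source is what gives the comparison its limited diagnostic value.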
There are ways of using network architecture and protocols to improve linear TV on broadband, typically with dedicated servers and last-mile bandwidth reservation. Comcast Stream TV is an example and LTE Broadcast might someday soon provide linear and live TV for mobile customers. But given so-called net neutrality rules, these services are controversial and regulated. ISPs can have their own proprietary TV service like Stream TV but, given net neutrality hysteria, probably won’t offer dedicated bandwidth to distributors like Sling. In this narrow area, the FCC’s rules are a pretty good deal for larger, vertically integrated firms that can put programming bundles together. It’s not so great for the small ISPs and WISPs who want to respond to cable cutter demands and offer a quality TV product from another company via broadband.
So I expect linear online TV to remain niche until the quality improves. A big draw of Sling is the cancel-at-any-time policy which lowers the risk if you’re dissatisfied with programming and allows single-season sports fans like me to subscribe for a season or two. I subscribed in the fall and winter to watch NCAA and NFL football. Sling recognizes that a lot of its subscribers are like me. But if Sling and other linear TV programmers want to expand beyond niche, they’ll need higher-quality streams. (And Sling might wish to remain niche so they don’t upset their programmers by cannibalizing traditional subscription TV.) Would I sign up for Sling again? Sure. Maybe the prognosticators are right, and the technology will develop rapidly in 2016. I have my doubts.
January 13, 2016
Global Leaders Must Support Strong Encryption
This article was originally posted on techfreedom.org
On January 11, TechFreedom joined nearly 200 organizations, companies, and experts from more than 40 countries in urging world leaders to support strong encryption and to reject any law, policy, or mandate that would undermine digital security. In France, India, the U.K., China, the U.S., and beyond, governments are considering legislation and other proposals that would undermine strong encryption. The letter is now open to public support and is hosted at https://www.SecureTheInternet.org.
The letter concludes:
Strong encryption and the secure tools and systems that rely on it are critical to improving cybersecurity, fostering the digital economy, and protecting users. Our continued ability to leverage the internet for global growth and prosperity and as a tool for organizers and activists requires the ability and the right to communicate privately and securely through trustworthy networks.
“There’s no middle ground on encryption,” said Tom Struble, Policy Counsel at TechFreedom. “You either have encryption or you don’t. Any vulnerability imposed for government use can be exploited by those who seek to do harm. Privacy in communications means governments must not ban or restrict access to encryption, or mandate or otherwise pressure companies to implement backdoors or other security vulnerabilities into their products.”
January 12, 2016
Realities of Zero Rating and Internet Streaming Will Confront the FCC in 2016
For tech policy progressives, 2015 was a great year. After a decade of campaigning, network neutrality advocates finally got the Federal Communications Commission to codify regulations that require Internet service providers to treat all traffic the same as it crosses the network and is delivered to customers.
Yet broadband business models, always tenuous to begin with, are being rapidly overhauled, and that may throw some damp linens on their party. More powerful smartphones, the huge uptick in Internet streaming and improved WiFi technology are just three factors driving this shift.
As regulatory mechanisms lag market trends in general, they can’t help but be upended along with the industry they aim to govern. Looking ahead to the coming year, the consequences of 2015’s regulatory activism will create some difficult situations for the FCC.
Zero rating will clash with net neutrality
The FCC’s biggest question will be whether “zero rating,” also known as “toll-free data,” is permissible under its new Open Internet rules. Network neutrality prohibits an ISP from favoring one provider’s content over another’s. Yet by definition, that’s what zero rating does: an ISP agrees not to count data generated by a specific content provider against a customer’s overall bandwidth cap. Looked at from another angle, instead of charging more for enhanced quality (the Internet “toll road” network neutrality is designed to prevent), zero rating offers a discount for downgraded transmission. As ISPs, particularly bandwidth-constrained wireless companies, replace “all-you-can-eat” data with tiered pricing plans that place a monthly limit on total data used—and assess additional charges on consumers who go beyond the cap—zero rating agreements become critical in allowing companies like Alphabet (formerly Google), Facebook and Netflix, companies that were among the most vocal supporters of network neutrality, to keep users regularly engaged.
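To make those mechanics concrete, here is a minimal sketch of metered billing with zero rating; the cap, overage price and partner list below are invented for illustration and are not any carrier's actual plan.

```python
# Hypothetical numbers throughout: a 5 GB cap, $10/GB overage, and two
# made-up zero-rated partners. Usage from zero-rated sources simply never
# counts against the monthly cap.

DATA_CAP_GB = 5.0
OVERAGE_PER_GB = 10.00
ZERO_RATED = {"video-partner-a", "music-partner-b"}   # hypothetical partners

def monthly_overage(usage_by_source: dict[str, float]) -> float:
    """Overage charge in dollars, given GB used per content source."""
    metered_gb = sum(gb for source, gb in usage_by_source.items()
                     if source not in ZERO_RATED)
    return max(0.0, metered_gb - DATA_CAP_GB) * OVERAGE_PER_GB

usage = {"video-partner-a": 12.0, "web-browsing": 3.0, "other-video": 4.0}
print(monthly_overage(usage))   # 7 GB metered, 2 GB over the cap -> $20.0; the
                                # 12 GB of zero-rated video never hits the meter
```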
T-Mobile has been aggressive with zero rating, having reached agreements with Netflix, Hulu, HBO Now, and SlingTV for its Binge On feature. Facebook, another network neutrality advocate, has begun lobbying for zero rating exceptions outside the U.S. Facebook founder and CEO Mark Zuckerberg told a tech audience in India, where net neutrality has been a long-running and contentious debate, that zero rating is not a violation, a contention that some tech bloggers immediately challenged.
When it came to net neutrality rulings, the FCC may have hoped it would only have to deal with disputes over the technical sausage-making covered by the “reasonable network management” clause in the Title II order (to be fair, zero rating involves some data optimization). But any ruling that permits zero rating would collapse its entire case for network neutrality. The Electronic Frontier Foundation, another vocal net neutrality supporter, understands this explicitly, and wants the FCC to nip zero rating in the bud.
The problem is that zero-rating is not anti-consumer, but a healthy, market-based response to bandwidth limitations. Even though ISPs are treating data differently, customers get access to more entertainment and content without higher costs. Bottom line: consumers get more for their money. For providers like Alphabet and Facebook, which rely on advertising, there stands to be a substantial return on investment. Unlike blanket regulation, it’s voluntary, sensitive to market shifts and not coercive.
How long before these companies who lobbied for network neutrality begin their semantic gymnastics to demand exemptions for zero rating? The Court of Appeals may make it moot by overturning Title II reclassification outright. But failing that, expect some of the big Silicon Valley tech companies to start their rhetorical games soon.
Internet streaming will confound the FCC
The zero rating controversy is just one more outgrowth of the rise in Internet streaming.
For the past seven years, the FCC’s regulatory policy has been based on the questionable assertion that cable and phone companies are monopoly bottlenecks.
Title II reclassification is aimed at preventing ISPs from using these perceived bottlenecks to extract higher costs from content providers. Yet at the same time, the FCC, in keeping with its cable/telco/ISPs-are-monopolies mindset, depends on them to fund its universal service and e-rate funds and fulfill its public interest mandate by carrying broadcast feeds from local television stations.
The simple fact is that the local telephone, cable and ISP bundlers are not monopolies. The 463,000 subscribers the top 8 cable companies lost in the second quarter of 2015 are getting their TV entertainment from somewhere. Those who are not cutting the cord completely are reducing their service: Another study estimated that 45 percent of U.S. households reduced the level of cable or satellite service in 2014.
Consumers are replacing their cable bundle with streaming options such as Roku, Amazon Fire, Apple TV and Google Play. These companies aggregate and optimize Internet video for big-screen TVs and home entertainment centers. Broadcast and basic cable programs are usually free (but carry ads); other programming can be purchased by subscription (Netflix, HBO Now) or on demand (iTunes, Amazon). In many cases consumers retain their broadband connection, but that remains their only purchase from the cable or telephone company. And even that might become optional: Millennial consumers are comfortable using free WiFi services or zero-rated wireless plans like T-Mobile’s Binge On.
But as consumers cut the cord, cable revenues go down. When cable revenues drop, so does the funding for all those FCC pet causes. The question is how hard will the FCC push to require streaming services to pay universal service fees, or include local TV feeds among their channel offerings? Under the current law, the FCC has no regulatory jurisdiction over streaming applications, unless, as with Title II, it tries to play fast and loose with legal definitions. The FCC has never been shy about overreaching, and as early as October 2014 Chairman Tom Wheeler suggested that IP video aggregators could be considered multichannel video programming distributors, a term that to date has been applied only to cable television companies.
Ironically, streaming stands to meet two long-held progressive policy goals—a la carte programming selection and structural separation of the companies that build and manage physical broadband networks and the companies that provide the applications that ride on them. Cable and Internet bundles are so 2012! Yet 2016 finds the FCC woefully unprepared for this shift. In fact, last we looked, it was encouraging small towns to borrow millions of dollars to get into the cable TV business.
Over the past seven years, the FCC has pursued Internet regulation from an ideological perspective—treating regulation itself as a necessary component of the overall business ecosystem. In truth, regulation is supposed to serve consumer interests, and should be applied to address extant problems, not as precautionary measures. Unfortunately, the FCC has chosen to ignore market realities and apply rules that fit its own deliberate misperceptions. The Commission’s looming inability to find consistency in enforcing its own edicts is a problem solely of its own making.
August 10, 2015
The Right to Try, 3D Printing, the Costs of Technological Control & the Future of the FDA
I’ve been thinking about the “right to try” movement a lot lately. It refers to the growing movement (especially at the state level here in the U.S.) to allow individuals to experiment with alternative medical treatments, therapies, and devices that are restricted or prohibited in some fashion (typically by the Food and Drug Administration). I think there are compelling ethical reasons for allowing citizens to determine their own course of treatment in terms of what they ingest into their bodies or what medical devices they use, especially when they are facing the possibility of death and have exhausted all other options.
But I also favor a more general “right to try” that allows citizens to make their own health decisions in other circumstances. Such a general freedom entails some risks, of course, but the better way to deal with those potential downsides is to educate citizens about the trade-offs associated with various treatments and devices, not to forbid them from seeking them out at all.
The Costs of Control
But this debate isn’t just about ethics. There’s also the question of the costs associated with regulatory control. Practically speaking, with each passing day it becomes harder and harder for governments to control unapproved medical devices, drugs, therapies, etc. Correspondingly, that significantly raises the costs of enforcement and makes one wonder exactly how far the FDA or other regulators will go to stop or slow the advent of new technologies.
I have written about this “cost of control” problem in various law review articles as well as my little Permissionless Innovation book and pointed out that, when enforcement challenges and costs reach a certain threshold, the case for preemptive control grows far weaker simply because of (1) the massive resources that regulators would have to pour into the task of crafting a workable enforcement regime; and/or (2) the massive loss of liberty it would entail for society more generally to devise such solutions. With the rise of the Internet of Things, wearable devices, mobile medical apps, and other networked health and fitness technologies, these issues are going to become increasingly ripe for academic and policy consideration.
A Hypothetical Regulatory Scenario
Here’s an interesting case study to consider in this regard: Can 3D printing of prosthetics be controlled? Clearly prosthetics are medical devices in the traditional regulatory sense, but few people are going to the FDA and asking for permission or a “right to try” new 3D-printed limbs. They’re just doing it. And the results have been incredibly exciting, as my Mercatus Center colleague Robert Graboyes has noted.
But let’s imagine what the regulators might do if they really wanted to impose their will and limit the right to try in this context:
Could government officials ban 3D printers outright? I don’t see how. The technology is already too diffuse and is utilized for so many alternative (and uncontroversial) uses that it doesn’t seem likely such a control regime would work or be acceptable. And if any government did take this extreme step, “global innovation arbitrage” would kick in. That is, innovators would just move offshore.
Could government officials ban the inputs used by 3D printers? Again, I don’t see how. After all, we are primarily talking about plastics and glue here!
Could government officials ban 3D printer blueprints? Two problems with that. First, such blueprints are a form of free speech and government efforts to censor them would represent a form of prior restraint that would violate the First Amendment of the U.S. Constitution. Second, even ignoring the First Amendment issues, information control is just damned hard and I don’t see how you could suppress such blueprints effectively when they are freely available across the Internet. Or, people would just “torrent” them, as they do (illegally) with copyrighted files today.
Could government officials ban and/or fine specific companies (especially those with deep pockets)? Perhaps, but that is likely a losing strategy since 3D printing is already so highly decentralized and is done by average citizens in the comfort of their own home (and often for no monetary gain). So, attempting to go after a handful of corporate players and “make an example out of them” to deter others from experimenting isn’t likely to work. And, again, it’ll just lead to more offshoring and undergrounding of these devices and innovative activities.
Could government officials ban the sale of certain 3D printing applications? They could try, but enterprising minds would likely start using alternative payment methods (like Bitcoin) to conduct their deals. But the question of payments is largely irrelevant in many fields because much of this activity is non-commercial and open-source in character. People are freely distributing blueprints for 3D-printed prosthetics, for example, and they are even giving away the actual 3D-printed prosthetic devices to those who need them.
Could government officials just create a licensing / approval regime for narrowly-targeted 3D printed medical devices? Of course, but for all the reasons outlined above, it would likely be pretty easy to evade such a regime. Moreover, the very effort to enforce such a licensing regime would likely deter many beneficial innovations in the process, while also leading to the old cronyist problems associated with firms engaging in rent-seeking and courting favor with regulators in order to survive or prosper.
Anyway, you get the point: The practicality of control makes a difference, and at some point the enormous costs associated with enforcement become an ethical matter in their own right. Stated differently, it’s not just that citizens should generally be at liberty to determine their own treatments and decide what drugs they ingest and what medical devices they use; it’s also the case that regulatory efforts aimed at limiting that right carry enforcement costs that spill over onto society more generally. And that’s an ethical matter of a different sort when you get right down to it. But, at a minimum, it’s an increasingly costly strategy and the costs associated with such technological control regimes should be considered closely and quantified where possible.
The Need for a Shift toward Risk Education
Let’s return to the question I raised above regarding the educational role that the FDA, or governments more generally, could play in the future. As I noted, a world in which citizens are granted the liberty to make more of their own health decisions is a world in which they could, at times, be rolling the dice with their health and lives. The highly paternalistic approach of modern food and drug regulation is rooted in the belief that citizens simply cannot be trusted to make such decisions on their own because they will never be able to appreciate the relative risks. You might be surprised to hear that I am somewhat sympathetic to that argument. People can and do make rash and unwise decisions about their health based on misinformation or a general lack of quality information presented in an easy-to-understand fashion. As a result, policymakers have taken the right to make these decisions away from us in many circumstances.
Although motivated by the best of intentions, paternalistic controls are not the optimal way to address these concerns. The better approach is rooted in risk education. To reiterate, the wise way to deal with the potential downsides associated with freedom of choice is to educate citizens about the relative risks associated with various medical treatments and devices, not to forbid them from seeking them out at all.
What does that mean for the future of the FDA? If the agency were smart, it would recognize that traditional command-and-control regulation is no longer a sensible strategy; it’s increasingly unworkable and imposes too many other costs on innovators and personal liberty. Thus, the agency needs to reorient its focus toward becoming a risk educator. Its goal should be to help create a more fully-informed citizenry that is empowered with more and better information about relative risk trade-offs.
Overcoming the Opposition & Getting Consent Mechanisms Right
Such an approach (i.e., shifting the FDA’s mission from being primarily a risk regulator to becoming a risk educator) will encounter opposition from strident defenders and opponents of the FDA alike.
The defenders of the FDA and its traditional approach will continue to insist that people can never be trusted to make such decisions on their own, regardless of how much information they have at their disposal or how many warnings we might give them. The problem with that position is that it treats citizens like ignorant sheep and denies them the most basic of all human rights: The right to live a life of your own choosing and to make the ultimate determinations about your own health and welfare. And, again, blindly defending the old system isn’t wise because traditional command-and-control regulatory methods are increasingly impractical and incredibly costly to enforce.
Opponents of the FDA, by contrast, will insist that the agency can’t even be trusted to provide good information for us to make these decisions on our own. Additionally, critics will likely argue that the agency might give us the wrong information or try to “nudge” us in certain directions. I share some of those concerns, but I am willing to live with that possibility so long as we are moving toward a world in which that is the only real power that the FDA possesses over me and my fellow citizens. Because if all the agency is doing is providing us with information about risk trade-offs, then at least we still remain free to seek out alternative information from other experts and then choose our own courses of action.
The tricky issue here is getting consent mechanisms right. In fact, it’s the lynchpin of the new regime I am suggesting. In other words, even if we could agree that a more fully-informed citizenry should be left free to make these decisions on their own, we need to make sure that those individuals have provided clear and informed consent to the parties they might need to contract with when seeking alternative treatments. That’s particularly essential in a litigious society like America, where the threat of liability always looms large over doctors, nurses, hospitals, insurers, and medical innovators. Those parties will only be willing to go along with an expanded “right to try” regime if they can be assured they won’t be held to blame when citizens make controversial choices that those parties advised against, or made after they clearly laid out all the potential risks and other alternatives. This will require not only an evolution of statutory law and regulatory standards, but also of the common law and insurance norms.
Once we get all that figured out—and it will, no doubt, take some time—we’ll be on our way to a better world where the idea of having a “right to try” is the norm instead of the exception.
——
(My thanks to Adam Marcus for commenting on a draft of this essay. For more general background on 3D printing, see his excellent 2011 primer here, “3D Printing: The Future is Here.”)

July 31, 2015
What market failure? The weak transaction cost argument for TV compulsory licenses.
At the same time FilmOn, an Aereo look-alike, is seeking a compulsory license to broadcast TV content, free market advocates in Congress and officials at the Copyright Office are trying to remove this compulsory license. A compulsory license to copyrighted content gives parties like FilmOn the use of copyrighted material at a regulated rate without the consent of the copyright holder. There may be sensible objections to repealing the TV compulsory license, but transaction costs–the ostensible inability to acquire the numerous permissions to retransmit TV content–should not be one of them.
Economists can devise situations where transaction costs are immense and compulsory licenses are needed for a well-functioning market. Today, as when the compulsory license was created, the conventional wisdom is that TV compulsory licenses are still needed to prevent market failure.
In the 1970s, cable companies were capturing broadcast channels and retransmitting them to their subscribers for free because, per the Supreme Court, cable was a passive transmitter and didn’t need copyright permission. In 1976, to correct this perceived unfairness, Congress amended the Copyright Act and said this cable retransmission did necessitate copyright authorization. To make it easier on cable systems (most of which were small, local operations), the law created a compulsory license to broadcast TV content like NBC, ABC, and CBS programming.
The compulsory license primarily does two things: it provides cable operators local TV content royalty-free and provides non-local (“distant”) content (imagine a DC cable company importing a WGN broadcast from Chicago) at regulated rates.
As the House report says:
The Committee recognizes…that it would be impractical and unduly burdensome to require every cable system to negotiate with every copyright owner whose work was retransmitted by a cable system.
The Copyright Office, early on, opposed the compulsory license and has called for the repeal of the compulsory license to broadcast TV content since 1981. As the Register of Copyrights said at a 2000 congressional hearing,
A compulsory license is not only a derogation of a copyright owner’s exclusive rights, but it also prevents the marketplace from deciding the fair value of copyrighted works through government-set price controls.
But when the issue of repeal comes up, many parties cite “significant transaction costs” as a problem with conventional, direct licensing. GAO echoed these objections in an April 2015 report,
we have previously found that obtaining the copyright holders’ permission for all this content would be challenging. Each television program may have multiple copyright holders, and rebroadcasting an entire day of content may require obtaining permission from hundreds of copyright holders. The transaction costs of doing so make this impractical for cable operators.
That sounds sensible, but we have powerful contradictory evidence: for decades, hundreds of TV channels requiring the bundling of thousands of copyright licenses have been distributed seamlessly, and completely outside of the compulsory license regime.
So it’s a mystery to me why analysts still talk about the difficulty in acquiring copyright permission from hundreds or thousands of rights holders. TV distributors outside of the compulsory license scheme do these complex content acquisition deals routinely. Hundreds of non-broadcast channels–like ESPN, CNN, Bravo, HGTV, MTV, and Fox News–are distributed to tens of millions of households via private contractual agreements and without regulated compulsory licenses. TBS, uniquely, in the late 1990s went from a broadcast channel, subject to a compulsory license, to a cable channel distributed via direct licensing with no apparent ill effects. Analysts raising the transaction cost argument for keeping compulsory licenses, to my knowledge, never explain why the market failure they predict is absent for these hundreds of cable and satellite channels.
Further, while cable and satellite companies don’t need to negotiate broadcast TV copyrights because of the compulsory license, the FCC’s retransmission consent process, part of the 1992 Cable Act, requires these companies to negotiate payment to retransmit broadcast signals–signals that contain the underlying copyrighted content. This process, though bizarre and artificial, is essentially the same negotiation cable and satellite companies would need to enter into in a world without compulsory license.
Finally, online distributors like Hulu, Netflix, and (potentially) Apple TV operate entirely outside of the retrans-compulsory copyright system, and they undermine the transaction costs objection. Netflix, for instance, doesn’t negotiate with every individual rights holder, as GAO and Congress imply is necessary in a non-compulsory license regime. Content aggregators and intermediaries, not regulation, streamline the rights acquisition process without the need for a compulsory license. The ostensibly burdensome transaction costs don’t stop Netflix from licensing over 10,000 titles worth around $9 billion.
Certainly, converting from compulsory licensing to direct licensing has issues. Changing legal regimes can be costly and there is a need to prevent anticompetitive withholding of content. Understandably, many cable and satellite distributors oppose repeal of compulsory licenses if the complex FCC system of retransmission consent and must-carry is maintained. I tend to agree. Nevertheless, it’s time to strike the transaction cost argument from the policy discussion. The predicted market failure is overcome by market forces.
For more background on TV regulation, see Adam Thierer and Brent Skorup, Video Marketplace Regulation: A Primer on the History of Television Regulation and Current Legislative Proposals (Mercatus working paper).
