Adam Thierer's Blog

April 20, 2018

SESTA’s First Amendment Problems: 3 ideas of what a legal challenge might look like

The recently enacted Stop Enabling Sex Traffickers Act (SESTA) has many problems, including that it fails to achieve its stated purpose of stopping sex trafficking. It contains a retroactivity clause that appears facially unconstitutional, but courts would likely sever this provision if it were used as the sole basis of a legal challenge. Perhaps more concerning are the potential First Amendment violations of the law.


These concerns extend beyond the rights of websites as speakers to the speech of individual users. Promoting sex trafficking is already a crime, and restricting such speech is a lawful restraint. Websites, however, have acted broadly and quickly out of concern over their new liability under the law, and as a result lawful speech has also been stifled.


Given the controversial nature of the law, it seems likely that a legal challenge is forthcoming. Here are three ideas about what a First Amendment challenge to the law might look like.



SESTA and Users’ Free Speech Rights


SESTA impacts individual users’ speech rights. As Elizabeth Nolan Brown writes, the law will create a chilling effect that could result in harming the very victims it claims to protect and could lead to further marginalizing minority viewpoints.


Despite their increasing presence and role in our everyday lives, Internet intermediaries, such as social media platforms, are not public forums but rather private actors. The recent Prager case in California against YouTube reinforced this point. As a result, they may choose to limit speech or actions in accordance with their terms of service or other policies. Some would argue that moderation decisions made by these private actors in consideration of liability do not constitute a violation of speech rights, but rather merely a modification of existing terms of service. However, this ignores both the chilling effects of such regulations and the fact that speech that would not violate those terms is likely to be removed as a result of broad interpretations of SESTA.


In the landmark case Reno v. ACLU, the Supreme Court recognized the problem of censoring online speech. In striking down the parts of the Communications Decency Act (CDA) other than Section 230's liability protection, the Court stated, "[T]he CDA effectively suppresses a large amount of speech that adults have a constitutional right to receive and to address to one another. That burden on adult speech is unacceptable if less restrictive alternatives would be at least as effective in achieving the legitimate purpose that the statute was enacted to serve." The result of SESTA has been swift suppression of online speech that extends well beyond sex trafficking.


For example, Craigslist removed its entire personals section in response to the passage of SESTA. Ads that could in no way be considered a violation of either the terms of service or federal sex trafficking laws were removed along with any potentially violative ads. Similarly, sex workers have expressed concerns that sharing client information as a way to keep one another safe would be impossible under the statute as passed. Removing all this information also makes it harder for investigators and advocates trying to identify trafficking victims and facilitate their escape. All of this is lawful speech that will either be treated as illegal or be effectively eliminated by the unnecessary burdens intermediaries must take on to protect themselves from both criminal and civil liability.


The courts have generally favored allowing speech over disallowing it. While minimal time, place, and manner limits have been upheld in some cases and courts have found the state may regulate obscenity, speech restrictions are generally subject to strict scrutiny and must be narrowly tailored. SESTA uses broad definitions to classify what is considered sex trafficking and is likely to sweep in both voluntary and involuntary interactions. Similarly, the "participation in a venture" standard appears to set a low bar for intermediary liability, encouraging the act-first, ask-questions-later behavior that has failed for the DMCA. To avoid liability under the statute, intermediaries must either increase moderation or cease moderating altogether. It is almost certain that lawful speech will regularly be caught up in such extreme moderation.


Finally, there are concerns that chipping away at Section 230 liability protection opens the door to broader Internet censorship. The Internet has been a stronghold of free speech where nearly any idea can be expressed; well-intentioned laws like SESTA risk encouraging the idea that controversial or disliked speech can be censored.


Defining Intermediaries’ Editorial Control


Prior to Section 230, in Cubby v. CompuServe, the federal district court for the Southern District of New York found that Internet intermediaries act more like a distributor, such as a bookstore or library, than a traditional publisher. As a result, they have less control over the content created and distributed through their services than an editor or publisher would. Therefore, at common law, intermediaries were found to bear less liability for defamation or obscenity than a traditional publisher. This liability increases or decreases depending on the intermediary's involvement with user-generated content. Intermediaries that create or modify content are no longer acting as mere intermediaries and could be held liable if such content is illegal, including sex-trafficking-related content, even prior to SESTA.


The First Amendment Rights of Intermediaries


Intermediaries have free speech rights too. They may choose which content to restrict or not restrict. Several U.S. courts have found that curation of content is protected as a form of speech for intermediaries such as search engines. In the pre-Internet case Smith v. California, the Supreme Court struck down a strict-liability obscenity ordinance as applied to a bookseller. The Court found that the lack of a knowledge requirement for criminal liability to attach was unconstitutional. SESTA requires knowledge but is vague about what knowledge an intermediary must have to be considered a participant in such a venture. Additionally, it gives state attorneys general broad power to conduct investigations or take action on mere reasonable suspicion of a violation. One potential challenge would be whether the lack of a Good Samaritan clause and the statute's vagueness regarding what constitutes knowledge violate the standards set in Smith. Combined with the apparent protection of intermediaries' speech rights in decisions to curate content, it may be possible for the intermediaries themselves to mount a First Amendment challenge.


Conclusion


SESTA has now become law, but it is almost certain to face a constitutional challenge on First Amendment grounds, whether from users whose content was blocked or from the intermediaries themselves. In the past the courts have recognized the importance of maintaining free expression and a wide range of discourse online even when such content may be objectionable to many; one can only hope they would continue that line of thought if SESTA faces a First Amendment challenge.

Published on April 20, 2018 08:56

April 19, 2018

Video from TPI Event on Regulating Facebook

On Monday, April 16th, the Technology Policy Institute hosted an event on “Facebook & Cambridge Analytica: Regulatory & Policy Implications.” I was invited to deliver some remarks on a panel that included Howard Beales of George Washington University, Stuart Ingis of Venable LLP, Josephine Wolff of the Rochester Institute of Technology, and Thomas Lenard of TPI, who moderated. I offered some thoughts about the potential trade-offs associated with treating Facebook like a regulated public utility. I wrote an essay here last week on that topic. My remarks at the event begin at the 13:45 mark of the video.



 

Published on April 19, 2018 06:19

April 13, 2018

Some thoughts on how to fix rural broadband programs

Expanding rural broadband has generated significant interest in recent years. However, the current subsidy programs are often mismanaged and impose little accountability. It's not clear what effect rural broadband subsidies have had, despite the amount of money spent on them. As economist Scott Wallsten has pointed out, the US government has spent around $100 billion on rural telecommunications and broadband since 1995 "without evidence that it has improved adoption."


So I was pleased to hear a few months ago that the Montana Public Service Commission was making an inquiry into how to improve rural broadband subsidy programs. Montana looms large in rural broadband discussions because Montana telecommunications providers face some of the most challenging terrain in the US: mountainous, vast, and lightly populated. (In fact, "no bars on your phone" in rural Montana is a major plot element in the popular videogame Far Cry 5. HT Rob Jackson.)


I submitted comments in the Montana PSC proceeding and received an invitation to testify at a hearing on the subject. So last week I flew to Helena to discuss rural broadband programs with the PSC and panelists. I emphasized three points.



Federal broadband subsidy programs are facing higher costs and fewer beneficiaries.

Using FCC data, I calculated that since 1998, USF high-cost subsidies to Montana telecom companies have risen by about 40% while the number of rural customers served by those companies has decreased by over 50%. I suspect these trends are common nationally, and that USF subsidies are increasing while fewer people are benefiting.
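
To see how those two trends compound, here is a quick back-of-the-envelope sketch using only the 40% and 50% figures above; the baseline values are illustrative index numbers, not actual FCC dollar or subscriber counts.

```python
# Back-of-the-envelope: how per-customer subsidies change when total
# subsidies rise ~40% while the customer base shrinks by ~50%.
# The baseline figures below are illustrative placeholders, not FCC data.

subsidy_1998 = 100.0       # index: total high-cost support in 1998 (set to 100)
customers_1998 = 100.0     # index: rural customers served in 1998 (set to 100)

subsidy_now = subsidy_1998 * 1.40       # subsidies up about 40%
customers_now = customers_1998 * 0.50   # customers down over 50%

per_customer_1998 = subsidy_1998 / customers_1998
per_customer_now = subsidy_now / customers_now

print(f"Per-customer subsidy index, 1998: {per_customer_1998:.2f}")
print(f"Per-customer subsidy index, now:  {per_customer_now:.2f}")
print(f"Increase: {(per_customer_now / per_customer_1998 - 1) * 100:.0f}%")
# Result: the per-customer subsidy roughly triples (about a 180% increase).
```

In other words, even if total spending had stayed flat, a halving of the subscriber base would imply sharply higher costs per remaining customer; rising totals make the divergence worse.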



Wireless broadband is the future, especially in rural areas.

"Fiber everywhere" is not a wise use of taxpayer funds, and exurban and rural households are increasingly relying on wireless: satellite, WISPs, and mobile. In 2016, the CDC reported that more households had wireless phone service than landline phone service. You're starting to see "cord cutting" pick up for broadband as well. Census surveys indicate that in 2013, 10% of Internet-using households were mobile Internet only (no landline Internet). By 2015, that percentage had doubled, and about 20% of households were mobile-only. The percentage is likely even higher today now that unlimited data plans are common. Someday soon the FCC will have to conclude that mobile broadband is a substitute for fixed broadband, and subsidy programs should reflect that.



Consumer-focused “tech vouchers” would be a huge improvement over current broadband programs.

Current programs subsidize the construction of networks even where there's no demand. The main reason the vast majority of non-Internet users don't subscribe to broadband is that they are uninterested in subscribing, according to surveys from the NTIA (55% are uninterested), Pew (70% are uninterested), and FCC and Connected Nation experts (63% are uninterested). With rising costs and diminishing returns to rural fiber construction, the FCC needs to reevaluate USF and make subsidies more consumer-focused. For the past couple of years, the UK has pursued another model for rural broadband: consumer broadband vouchers. Since most people who don't subscribe to broadband don't want it, vouchers protect taxpayers from unnecessary expense and from paying for gold-plated services.


For years, economists and the GAO have criticized the structure, complexity, and inefficiency of the USF programs, and particularly the rural program. The FCC is constantly changing the programs because of real and perceived deficiencies, but this has made the USF unwieldy. Montana providers participate in at least seven different rural USF programs alone (that doesn’t include the other USF programs and subprograms or other federal help, like RUS grants).


Unfortunately, most analysis and reporting on US broadband programs can be summed up as “don’t touch the existing programs–just send more money.” (There are some exceptions and scrutiny of the programs, like Tony Romm’s 2015 Politico investigation into the mismanagement of stimulus-funded Ag Department broadband projects.)


“Journalism as advocacy” is unfortunately the norm when it comes to broadband policy. Take, for instance, this article about the digital divide that omits mention of the $100 billion spent in rural areas alone, only to conclude that “small [broadband] companies and cooperatives are going it more or less alone, without much help yet from the federal government.”


(That story and another digital divide story had other problems, namely a reliance on an academic study using faulty data purchased from a partisan campaign firm. FiveThirtyEight deserves credit for acknowledging the data's flaws, but that should have alerted the editors to the need for still more fact-checking.)


States can't rewrite federal statutes and regulations, but it's to the Montana PSC's great credit that it sensed that all is not well. Current trends will only put more stress on the programs. Hopefully other state PUCs will see that the current programs do a disservice to universal service objectives and consumers.

Published on April 13, 2018 08:39

April 11, 2018

The Backpage Takedown and the Risks of Over-regulating Technology

Last Friday, law enforcement agencies shut down Backpage.com. The website had become infamous for its role in sex trafficking, particularly related to underage victims, and its shutdown is rightly being applauded by many as a significant win for preventing sex trafficking online. This shutdown shows, however, that prosecutors had the tools necessary to go after bad actors prior to the passage of the Stop Enabling Sex Traffickers Act (SESTA) last month. Unfortunately, this is not the first time the government has pushed for regulation of technology knowing it already had the tools and information needed to build a case against bad actors.


The version of SESTA passed by Congress last month included a number of poorly thought-out provisions, including an ex post facto application and poorly articulated definitions, but it passed both houses of Congress with little opposition. In fact, because the law was seen as must-pass and linked to sex trafficking, the Senate even overwhelmingly rejected an amendment to provide additional funding for prosecuting such crimes. Even without being signed into law, SESTA has already resulted in Reddit and Craigslist removing communities from their platforms within days of its passage. What this most recent event shows is that the government already had the tools to go after bad actors like Backpage, but failed to use them as Congress debated and passed a law that chipped away at the protection for the rest of the Internet and gave the government even broader powers.


This is not the first time that the government has encouraged, through either its action or inaction, damaging regulation of disruptive technology while knowing it had tools at its disposal that could achieve the desired results without an additional regulatory burden. In 2016, following the San Bernardino shootings, the government argued that it needed more access to encrypted devices like the iPhone when Apple refused to comply with a writ compelling it to unlock the shooters' phones. The Senate responded to the controversy by proposing a bill that would require businesses like Apple to assist authorities in gaining access to encrypted devices. Thankfully, because the FBI was able to obtain the information it needed without Apple through a third-party vendor, such calls largely diminished and the legislation never went anywhere. Now, a recent Office of the Inspector General report has revealed the FBI "testified inaccurately or made false statements" regarding its ability to gain data from the encrypted iPhone.



It is highly concerning that when the government has the tools needed to stop bad actors but desires more regulatory power over tech, it chooses to pursue regulation for everyone instead of using the proper tools it already has against the bad actors. Rather than gaining new tools that risk ruining innovation, the government should first exhaust the tools it has to prosecute those bad actors. When it has used these tools against the likes of MyRedbook, Rentboy, and now Backpage, the prosecutions have been by and large successful. This continued pattern of behavior should raise heightened concerns about calls for greater regulation of technology and whether the trade-offs such regulation would require are needed.


Neither the government's desire for more regulation nor its negative impact is limited to technology. Research from the Mercatus Center has shown that the cumulative effect of regulation has slowed GDP growth by 0.8% per year since 1980. Particularly for new startups, these regulatory burdens increase the cost of even entering the increasingly global marketplace due to both increased compliance costs and fears of company-ending litigation.


With the awareness that these additional regulations are often unnecessary and harmful to both technology and the economy more generally, there should be heightened concern about calls to give regulators additional tools in light of specific events. These calls for regulation have once again arisen with the recent fatal Uber autonomous vehicle accident and the Facebook scandals. Such regulations may actually make problems worse, not better, by creating a regulated monopoly that prevents new entrants from improving quality and increasing competition. As Mark Zuckerberg noted while answering a question during Congressional testimony yesterday, when there are more rules it is easier for larger companies than for smaller ones to comply with them. Additional regulations make it more difficult for us to get the next Facebook, the next Google, or the next Uber.


The overall framework of permissionless innovation put in effect during the Clinton administration has allowed the Internet to flourish and the US to become a global leader in Internet innovation. We must not let the failure to use the tools available deceive us into believing that such an environment does not work. Regulation is sometimes necessary, but overregulation, particularly of technology, poses significant risks that must be considered in more than just a reactive fashion.

Published on April 11, 2018 07:04

April 10, 2018

The Week Facebook Became a Regulated Monopoly (and Achieved Its Greatest Victory in the Process)

With Facebook CEO Mark Zuckerberg in town this week for a political flogging, you might think that this is the darkest hour for the social networking giant. Facebook stands at a regulatory crossroads, to be sure. But allow me to offer a cynical take, and one based on history: Facebook is potentially poised to score its greatest victory ever as it begins the transition to regulated monopoly status, solidifying its market power and limiting threats from new rivals.


By slowly capitulating to critics (both here and abroad) who are thirsty for massive regulation of the data-driven economy, Facebook is setting itself up as a servant of the state. In the name of satisfying some amorphous political “public interest” standard and fulfilling a variety of corporate responsibility objectives, Facebook will gradually allow itself to be converted into a sort of digital public utility or electronic essential facility.


That sounds like trouble for the firm until you realize that Facebook is one of the few companies that will be able to sacrifice a pound of flesh like that and remain alive. As layers of new regulatory obligations are applied, they will become formidable barriers to the very competitors that the public so desperately needs right now to offer us better alternatives. Gradually, Facebook will recognize this and go along with the regulatory schemes. And then eventually it will become the biggest defender of all of it.


Welcome to Facebook’s broadcast industry moment. The firm is essentially in the same position the broadcast sector was about a century ago when it started cozying up to federal lawmakers. Over time, broadcasters would warmly embrace an expansive licensing regime that would allow all parties—regulatory advocates, academics, lawmakers, bureaucrats, and even the broadcasters themselves—to play out the fairy tale that broadcasters would be good “public stewards” of the “public airwaves” to serve the “public interest.”


Alas, the actual listening and viewing public got royally shafted in this deal. Broadcasters got billions of dollars’ worth of completely free beachfront spectrum along with protected geographic monopolies. Congressional lawmakers and the unelected bureaucrats at the FCC got power to tinker with broadcast content and received other special favors (like free airtime) from their cronies in the industry. People, money, and influence floated freely between the political and business realms until at some point there really wasn’t much distinction between them. Meanwhile, the public got stuck with bland fare and limited competition for their ears and eyes. The “public interest” ended up meaning many things during this time, but it rarely had much to do with what the public actually desired—namely, more and better options for a diverse citizenry.


Of course, much the same story played out in the U.S. telecommunications market a few decades prior to the broadcast industry making their deal with the devil. The early history of telecommunications in America was characterized by competition among a variety of local and regional rivals. But it was derailed by political shenanigans. Here are a few choice paragraphs about the cronyist origins of the Bell System monopoly from a law review article that Brent Skorup and I wrote back in 2013 [footnotes omitted]. As you read it, imagine how similar well-intentioned regulations might play out for Facebook:


… this intensely competitive, pro-consumer free-for-all would be derailed by AT&T’s brilliant strategy to use the government to accomplish what it could not in the free market: eliminate its rivals. In 1907, Theodore Newton Vail became AT&T’s president. He had a clear vision: achieving “universal service” (in the form of interconnected and fully integrated systems) by eliminating rivals and consolidating networks. Befriending lawmakers and regulators was a crucial component of this strategy. While many policymakers nominally supported the idea of competition, they were more preoccupied with achieving widespread, interconnected network coverage. Vail capitalized on that impulse.


On December 19, 1913, the government and AT&T reached the “Kingsbury Commitment.” Named after AT&T vice president Nathan C. Kingsbury, who helped negotiate the terms, the agreement outlined a plan whereby AT&T agreed not to acquire any other independent companies while also allowing other competitors to interconnect with the Bell System. The Kingsbury Commitment was thought to be pro-competitive, yet it was hardly an altruistic agreement on AT&T’s part. Regulators did not interpret the agreement so as to restrict AT&T from acquiring any new telephone systems, but only to require that an equal number be sold to an independent buyer for each system AT&T purchased. Hence, the Kingsbury Commitment contained a built-in incentive for network swapping (trading systems and solidifying territorial monopolies) rather than continued competition.  “The government solution, in short, was not the steamy, unsettling cohabitation that marks competition but rather a sort of competitive apartheid, characterized by segregation and quarantine,” observe telecom legal experts Michael Kellogg, John Thorne, and Peter Huber.  Thus, the move toward interconnection, while appearing to assist independent operators, actually allowed AT&T to gain greater control over the industry.


“Vail chose at this time to put AT&T squarely behind government regulation, as the quid pro quo for avoiding competition,” explains [Richard] Vietor.  “This was the only politically acceptable way for AT&T to monopolize telephony,” he notes.  AT&T’s 1917 annual report confirms this fact, stating, “[with a] combination of like activities under proper control and regulation, the service to the public would be better, more progressive, efficient, and economical than competitive systems.”


So much for “the public interest”! If the last century’s worth of communications and media regulation teaches us anything, it’s that good intentions only get you so far in this world. Many of the lawmakers and regulators who allowed themselves to be duped by big corporations asking for protection from competition probably thought they were doing the right thing. Those policymakers may even have believed that they were actually encouraging innovation and competition through some of their regulatory actions. Alas, things did not turn out that way. We the public were denied real, meaningful choices and innovations because of these misguided policies.


And so now it’s Facebook’s turn to become part of this sordid tale. Zuckerberg has already made it clear that he is open to regulation and that his firm would also start enforcing new European data rules globally. And after this week’s political circus in Congress, the floodgates will be wide open and everyone’s regulatory pet peeve will be up for political consideration, which is exactly what happened for broadcasters and communications in past decades.


Every crackpot idea under the sun will be on the table but the most extreme versions of those proposals will be beaten back just enough to ensure that Facebook can offer up its pound of sacrificial flesh each time without running the risk of killing the patient entirely. Again, this was always part of the broadcast and communications regulatory playbook as well. So long as they were guaranteed a fairly stable market return and protection from pesky new innovators, the firms were willing to go along with the deal.


The “deal” in this case between Facebook and regulators won’t be so explicitly cronyist as it was for broadcasters and communications companies, however. The days of price controls, rate-of-return regulation, and formal line of business restrictions are likely over. Everyone now recognizes that regulations creating formal barriers to innovation and entry are a bad idea and, as a result, they are usually rejected.


But laws and regulations can sometimes create informal or hidden barriers to innovation and entry, even when they are well-intentioned. And that's what could happen here as this latest Facebook fiasco leads to calls for seemingly innocuous things like transparency and disclosure requirements, restrictions on "bad speech," advertising and data collection regulations, "fiduciary" responsibilities, "algorithmic accountability" efforts, and so on. Facebook hasn't wanted to adopt some of these things in the past, but now it will be pushed aggressively to do so by policymakers and regulatory activists. As Zuckerberg and Facebook cozy up with policymakers and regulatory activists and begin talking about a "broader view of responsibility," the transition to the firm's next phase as a quasi-public utility will get underway.


The rich irony of all this is that the same regulatory advocates who are cheering on this week's developments, as well as the coming regulatory avalanche, will be the ones howling the loudest if and when only Facebook is left standing in the social media universe. In fact, that's already happened in Europe, where policymakers and their burdensome top-down data protection regulations have driven most digital innovators and investors to other continents, leaving only Facebook, Google, and a handful of other (mostly U.S.-based) companies to regulate. And then European policymakers have the audacity to cry foul about the market power of these firms! It boggles the mind how European policymakers and regulatory advocates see zero connection between their heavy-handed approach to the Digital Economy and the corresponding lack of competitors in those sectors.


But none of that will make any difference to the regulatory advocates. They want that pound of flesh, and they are going to get it. And then in Facebook they will have a regulatory plaything to toy with for years to come.


What about the public? Will we really be any better off because of any of this? How many people will want to stick with Facebook if it becomes a digital public utility or a social media version of the Post Office? That sure doesn’t sound like much fun for us. But if the new regulations imposed on Facebook do end up hurting smaller rivals more and create barriers to new entry and innovation going forward, then it’s unclear whether it makes any difference what we want because the options just won’t be there for us.


With time, Facebook will not only become more comfortable with its new regulatory status for that reason, but, in the name of ensuring a "level playing field," the firm will also advocate that each and every new rule be applied to all its rivals. Again, this is how well-intentioned regulation ends up indirectly discouraging the very innovation and competitive options that we need. Broadcasters and communications companies played the "level playing field" card at every juncture to beat down new technologies and rivals.


Finally, at some point, don't be surprised if all roads lead back to prices for digital services. Right now, social networking services like Facebook are free of charge to consumers, and digital companies use advertising to support their services. Many regulatory advocates have suggested that this sort of business model is fundamentally incompatible with privacy and have wanted it strictly curtailed if not ended altogether. Of course, if you ask the public how many of them would be willing to pay $19.95 a month for Facebook, you won't get many takers.


I wrote a couple of law review articles talking about the "privacy paradox" and consumer "willingness to pay" for privacy more generally. All the evidence suggests that consumer willingness to pay for privacy is significantly lower than privacy advocates would prefer. But if, in the name of protecting privacy, prices get pushed or imposed as a matter of public policy, then we will have entered a truly surreal moment in the history of regulatory policy, because we will have inverted the presumption that consumer welfare is better served by lower prices. Over the past century, the purpose of most public utility regulation was lower prices, higher quality, and more choice. The modern Digital Economy has largely achieved those goals without heavy-handed regulation. But now, with the regulatory regime looming for Facebook and social media more generally, we might end up with a sort of bizarro policy world in which we make people pay more in the name of making them better off!


I hope I’m wrong about everything I’ve said here. It would be troubling if we enter an era of less competition, less innovation, and lower quality information services. But to borrow a quote from my favorite sci-fi show, “all of this has happened before, and all of this will happen again.” And regulatory history tends to repeat. We shouldn’t be surprised, therefore, when some forget the ugly history of public utility-style regulation or broadcast era “public interest” mandates and we find ourselves stuck right back in the hole that we’ve been trying to dig ourselves out of for so many decades.

Published on April 10, 2018 13:30

March 27, 2018

4 Possibilities for the Future in a Post-SESTA World

SESTA passed the Senate last week after having previously passed the House. President Trump is expected to sign it into law despite the opposition to this version of the bill from the Department of Justice. As I have previously written, there is a great deal of concern about how the bill may actually make it harder to address online sex trafficking and, more generally, how it may impact innovation on the Internet.


The reality is that we are looking at a post-SESTA world without the full protection of Section 230, and that reality will likely end up far from the best-case scenario, though hopefully not fully at the worst. Intermediaries, however, do not have the luxury of waiting around to see how the law actually plays out, especially given its retroactive provision. As a result, Reddit has already deleted a variety of subreddits and Craigslist has closed its entire personals section. One can only imagine the difficult decisions facing the creators of dating apps or messaging services.


So what can we expect to happen now…


1.    Questions remain about how often the law will be used, and whether its civil provisions will mostly serve as leverage for settlements.


Just a few years ago, in the SAVE Act, prosecutors were given additional resources to prosecute sex trafficking. Yet these tools have rarely been used due to the difficulty of prosecuting such crimes. Similarly, most civil litigation settles out of court. Especially given the potential PR nightmare of being seen as not believing victims or favoring bad actors if a civil case does go to trial, there will be a great deal of pressure on intermediaries to settle out of court whether they engaged in unlawful actions or not. The push for settlement will likely be even stronger for smaller companies that lack the resources to hire legal teams, fund litigation, and risk greater damage to the business.


2.   However, at some point SESTA will likely end up in court and face a constitutional challenge.


The response on the part of websites to the law's new requirements seems to have been swift and far-reaching. Given that SESTA presents First Amendment problems and has a most likely unconstitutional retroactive provision, the question seems to be who will challenge the law in court, and when.


The retroactive nature of the law appears facially unconstitutional. It is, however, likely that the courts would be able to sever this provision from the rest of the law. This would fix some of the narrower issues with establishing liability for moderation decisions made before the law's passage, but would not fix the law's broader innovation- and speech-quashing concerns.


The First Amendment challenges could come either from sex workers whose lawful speech is being silenced or from those not at all related to sex work whose innocent actions were censored as a result of an intermediary’s low risk tolerance due to increased liability under SESTA.


3.    Big intermediaries like Facebook and Google will adjust, but new intermediaries may struggle to get off the ground.


Facebook deletes over 1 million accounts a day. Various tech and app companies are estimated to employ over 100,000 moderators to evaluate user-generated content. This work is deeply disturbing and takes a high human toll on those engaged in it, as other technology has not been able to replace the ability of human moderators to make certain distinctions. Large companies might be able to adapt by hiring more moderators or deleting user communities in areas that raise potential liability, but smaller companies will be even less able to compete and adapt.


SESTA may prevent us from getting the next Google, Facebook, or PayPal for three key reasons. First, it raises the initial cost of launching a product that hosts user-generated content by requiring additional moderators just to get off the ground. Second, it is likely to make funders warier of investing in new intermediaries like messaging and dating services if they are concerned that the company is likely to get sued. Third, it may prevent existing small and mid-size tech or app companies in areas like social media or messaging from expanding or innovating in areas likely to involve interactions between users, due to concerns about liability.


For all the concerns that tech is getting too centralized in a few companies, there seems to be little attention paid to the fact that raising the liability risks through laws such as SESTA may result in a scenario where only those few big companies can comply.


4.    It sets an uneasy precedent for further eroding Section 230.


This is perhaps the greatest concern. Sex trafficking is evil, but prosecutors had the tools to go after it, and Section 230 already had a carve-out for federal crimes. SESTA signals that a legislative reaction to the actions of one or a few bad actors online can result in chipping away at the protection that has allowed the Internet to flourish. It shows that such reactions are often not narrowly tailored. Especially as there are growing concerns about various individual actors, we must remember that broad legislation risks making it difficult for good actors and new challengers to try to take their place. A post-SESTA world signals that while Section 230 may still exist, it is far too easily eroded for all when concerns about the bad actions of a few arise.


 


What happens over the next few months and years as both new and existing intermediaries try to adapt will greatly influence the future of the Internet and its ability to be a tool for global connectedness. As Senator Wyden said following the rejection of an amendment to SESTA to fund sex trafficking prosecutions, “I anticipate having to turn back to this topic in short order after the effects of this bill become clear.” How swiftly those effects are felt by everyone and whether the reality of their damage to innovation is clear to policymakers remains unknown, but that such effects will occur in one form or another cannot be disputed.

Published on March 27, 2018 10:11

March 20, 2018

Thoughts on the FCC’s recent wireless deployment efforts

Years ago it looked like the Obama FCC would make broadband deployment, especially wireless service and spectrum reform, a top priority. They accomplished plenty–including two of the largest spectrum auctions to date–but, under tremendous political and special interest pressure, FCC leadership diverted significant agency resources into regulatory battles that had very little upside, like regulating TV apps and unprecedented regulation of Internet services.


Fortunately, the Trump FCC so far has made broadband deployment the agency's top priority, which Chairman Pai signaled last year with the creation of the Broadband Deployment Advisory Committee. As part of those deployment efforts, Commissioner Carr has led an effort to streamline some legacy regulatory obstacles, like historic preservation and environmental reviews, and the FCC will vote this week on an order to expedite wireless infrastructure construction.


According to the FCC, somewhere around 96% of the US population has LTE coverage from three or more wireless operators, like Verizon, AT&T, T-Mobile, and Sprint. The operators’ job isn’t done in rural areas, but much of the future investment into broadband networks will be to “densify” their existing coverage maps with “small cells” in order to provide wireless customers more bandwidth.


Since telecom companies build infrastructure, many current projects require review under the federal National Historic Preservation Act and the National Environmental Policy Act. However, unlike for the 100-foot cellphone towers of the past, the environmental checklists currently required for small cells are largely perfunctory since small cells typically use existing infrastructure, like utility poles. For Sprint's tens of thousands of small cell site applications, for instance, the proposed order says "every single review resulted in a finding of no significant impact."


The order under consideration will bring some structure to regulatory timelines and procedures. This should save carriers on unnecessary regulatory overhead and, more importantly, save time.


The order comes at a crucial time, which is why the prior FCC’s net neutrality distractions are so regrettable. Mobile broadband has huge demands and inadequate infrastructure and spectrum. According to studies, millions of Americans are going “mobile only,” and bypassing landline Internet service. Census Bureau surveys estimated that in 2015, about 20% of Internet-using households were mobile-only. (HT to Michael Horney.) That number is likely even higher today.


The construction of higher-capacity and 5G wireless, combined with repeal of the 2015 Internet regulations, will give consumers more options and better prices for Internet services, and will support new mobile applications like remote-control of driverless cars and AR “smart glasses” for blind people. Hopefully, after this order, the agency will continue with spectrum liberalization and other reforms that will expedite broadband projects.

Published on March 20, 2018 11:55

We Need More Driverless Cars on Public Roads, Not Fewer

By Adam Thierer and Jennifer Huddleston Skees


There was horrible news from Tempe, Arizona this week as a pedestrian was struck and killed by a driverless car owned by Uber. This is the first fatality of its type and is drawing widespread media attention as a result. According to both police statements and Uber itself, the investigation into the accident is ongoing and Uber is assisting in the investigation. While this certainly is a tragic event, we cannot let it cost us the life-saving potential of autonomous vehicles.


While any fatal traffic accident involving a driverless car is certainly sad, we can’t ignore the fact that each and every day in the United States letting human beings drive on public roads is proving far more dangerous. This single event has led some critics to wonder why we were allowing driverless cars to be tested on public roads at all before they have been proven to be 100% safe. Driverless cars can help reverse a public health disaster decades in the making, but only if policymakers allow real-world experimentation to continue.


Let's be more concrete about this: Each day, Americans take 1.1 billion trips driving 11 billion miles in vehicles that weigh on average between 1.5 and 2 tons. Sadly, about 100 people die and over 6,000 are injured each day in car accidents. 94% of these accidents have been shown to be attributable to human error, and this deadly trend has been increasing as we become more distracted while driving. Moreover, according to the Centers for Disease Control and Prevention, almost 6,000 pedestrians were killed in traffic accidents in 2016, which means there was roughly one crash-related pedestrian death every 1.6 hours. In Arizona, the issue is even more pronounced, with the state ranked the 6th worst for pedestrians and the Phoenix area ranked the 16th worst metro for such accidents nationally.


No matter how concerned the public is about the idea of autonomous vehicles on our roadways, one thing should be abundantly clear: Automated technologies can be part of the solution to the harms of our almost 100-year experiment with human drivers behind the wheel. The algorithms behind self-driving cars don't get drunk, drowsy, or distracted. Unfortunately, humans do those things with great regularity, and the only way for autonomous vehicles to truly understand how to deal with the idiosyncrasies and irrationalities of human drivers is to interact with them in the "real world." Every time a human driver gets behind the wheel, therefore, an "experiment" of sorts is underway, and we've seen that the results of our human-driven "experiments" on public roads are far too often catastrophic.


Because these human-caused accidents are so common, they don't make headlines. While as many as 83% of people admit they are concerned about safety when driving, the aggregate death toll is so large that the numbers aren't easy to "humanize" when crashes occur unless they involve people or places we know. As a result, we don't heed the warnings and continue to engage in risky behavior by choosing to drive every day. But precisely because this week's driverless car-related death in Arizona is so unique and rare, it is making major news. If we turn a blind eye to all the lives lost due to human error while focusing on the rare occurrence of this one driverless car fatality, we risk many more lives in the long run.


But what should be done when accidents or deaths occur and autonomous cars are involved?


First, we can dispense with the notion that driverless cars are completely unregulated. Anytime these vehicles are operating on public roadways, they still must comply with traffic and safety laws. Driverless cars are programmed to operate in compliance with those laws and will be far more likely to do so than human operators. In fact, the concern is not that the cars won’t follow the traffic laws, but how they will interact with humans’ lawlessness and our misguided reactions to them.


Second, when accidents like the one in Arizona this week do occur, courts are equipped to handle legal claims. This is how we have handled human-caused accidents for decades, and there is no reason to believe that the common law and courts can't evolve to handle new technology-created problems, too. The courts have an existing toolkit for handling both defective products and the individual liability of bad actors. Some manufacturers have even publicly stated they will accept liability if it is shown that the technology behind the autonomous vehicle caused the accident. Courts have been able to apportion fault and deal with the specifics of particular cases without completely overhauling the common law for a variety of new technologies throughout history. It would be misguided to assume the courts could not determine the true cause of an accident involving an autonomous vehicle when they have been dealing with increasingly sophisticated products in a variety of fields for years.


Third, driverless car innovators are currently working together, and with government officials, to address the safety and security of these technologies. In both the Obama and current Trump administrations, an open, collaborative effort has been underway to sketch out sensible safety and security policies while making sure to keep innovation moving forward in this field. These conversations have resulted in guidance from the Department of Transportation that is flexible enough to adapt to the emerging technology while still promoting safe development and deployment. This flexible approach is the smart path forward, ensuring that we don't let overly precautionary concerns prevent technology that could save many, many more lives.


The most effective way to achieve significant auto safety gains is to make sure experimentation with new and better automotive technologies continues. That cannot all happen in a closed lab setting that is stifled by heavy-handed regulation at every juncture. We need driverless cars on the roadways now more than ever precisely because those machines will need to learn to anticipate and correct for the many real-world scenarios that human drivers struggle with every day.


Any loss of human life is a tragedy. But we cannot let a rare incident cost us the long-term life-saving potential of autonomous vehicles. We also must not rush to conclusions that the technology was at fault before knowing all the facts of any particular situation. While Uber has temporarily halted its technology trials, this tragic accident should be looked at as a rarity we can learn from rather than a reason to stop moving forward.

Published on March 20, 2018 09:13

March 15, 2018

The government’s “talking cars” plans failed. What’s next for the spectrum?

In the waning days of the Obama administration, the US Department of Transportation (USDOT) proposed to mandate a government-designed “talking cars” technology–so-called DSRC devices–on all new cars. Fortunately, in part because of opposition from free-market advocates, the Trump administration paused the proposed mandate. The FCC had set aside spectrum in the 5.9 GHz band for DSRC technologies in 1999 but, since it’s been largely unused since then, these new developments raise the question: What to do with that 75 MHz of fairly “clean” spectrum? Hopefully the FCC will take the opportunity to liberalize the use of the DSRC band so it can be put to better uses.


Background


Since the mid-1990s, the USDOT and auto device suppliers have needed the FCC's assistance–via free spectrum–to jumpstart the USDOT's vehicle-to-vehicle technology plans. The DSRC disappointment provides an illustration of what the FCC (and other agencies) should not do. DSRC was one of the FCC's last major "beauty contests," in which the agency dispenses valuable spectrum for free on the condition it be used for certain narrow uses–in this case, only USDOT-approved wireless systems for transportation. The grand plans for DSRC haven't lived up to expectations (USDOT officials in 2004 were predicting commercialization as early as 2005), and the device mandate in 2016–now paused–was a Hail Mary attempt to compel widespread adoption of the technology.


Last year, I submitted public interest comments to the USDOT opposing the proposed DSRC mandate as premature, anticompetitive, and unsafe (researchers found, for instance, that “the system will be able to reliably predict collisions only about 35% of the time”). I noted that, after nearly 20 years of work on DSRC, the USDOT and their hand-selected vendors had made little progress and were being leapfrogged by competing systems, like automatic emergency brakes, to say nothing of self-driving cars. The FCC has noticed the fallow DSRC spectrum and Commissioners O’Rielly and Rosenworcel proposed in 2015 to allow other, non-DSRC wireless technologies, like WiFi, into the band.


The FCC’s Role


These DSRC devices use spectrum in the 5.9 GHz band. The FCC set aside radio spectrum in the band for DSRC applications in 1999 based on a scant 19 comments and reply comments from outside parties. 


Despite the typical flowery language in the 1999 Order, FCC commissioners and Wireless Bureau staff must have had an inkling this was not a good idea. After decades of beauty contests, it was clear the spectrum set-asides were inefficient and anticonsumer, and in 1993 Congress gave the FCC authority to auction spectrum to the highest bidder. The FCC also moved towards “flexible-use” licenses in the 1990s, thus replacing top-down technology choices with market-driven ones. The DSRC set-aside broke from those practices, likely because DSRC in 1999 had powerful backers that the FCC simply couldn’t ignore: the USDOT, device vendors, automakers, and some members of Congress.


The FCC then codified the first DSRC standards in 2003. However, innovation at the speed of government, it turns out, isn't very speedy at all. The fast-moving connected car industry simply moved ahead without waiting for DSRC technology to catch up. (Government-selected vendors making devices according to 15-year-old government-prescribed technical standards on spectrum allocated by the government in 1999. Gee, what could go wrong?)


A Second Chance


So if the DSRC plans didn’t pan out, what should be done with that spectrum? Hopefully the FCC will liberalize the band and, possibly, combine it with the adjacent bands.


The gold standard for maximizing the use of spectrum is flexible-use, licensed spectrum, so the best option is probably liberalizing the DSRC spectrum, combining it with the adjacent higher band (5.925 GHz to 6.425 GHz) and auctioning it. In November 2017, the FCC asked about freeing this latter band for flexible, licensed use.  
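
For a rough sense of the bandwidth at stake in that combination, here is a quick sketch. The 5.850 GHz lower edge of the DSRC band is my assumption; the post itself only states that the DSRC allocation is 75 MHz at 5.9 GHz and that the adjacent band runs from 5.925 GHz to 6.425 GHz.

```python
# Rough bandwidth math for the liberalization option discussed above.
# Assumption: the DSRC allocation spans 5850-5925 MHz (the post only
# says it is 75 MHz); the adjacent higher band is 5925-6425 MHz.

dsrc_mhz = 5925 - 5850        # 75 MHz of DSRC spectrum
adjacent_mhz = 6425 - 5925    # 500 MHz in the adjacent higher band

combined_mhz = dsrc_mhz + adjacent_mhz
print(f"DSRC band:      {dsrc_mhz} MHz")
print(f"Adjacent band:  {adjacent_mhz} MHz")
print(f"Combined block: {combined_mhz} MHz of contiguous spectrum")
# Combining the two would yield a 575 MHz contiguous block available
# for flexible-use licensing (or, alternatively, unlicensed technologies).
```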


The other (probably more popular) option is liberalizing the DSRC band and making it available for free, that is, unlicensed use. Giving away spectrum for free often leads to misallocation, but this option is better than keeping it dedicated to DSRC technology. Unlicensed spectrum is flexible-use by design and supports many consumer technologies like WiFi, Bluetooth, and unlicensed LTE devices.


Further, because of global technical standards, unlicensed devices in the DSRC band make far more sense, it seems to me, in 5.9 GHz than in the CBRS band* (3.6 GHz), which many countries are using for licensed services like LTE. The FCC is currently trying to simplify the rules in the CBRS band to encourage investment in licensed services, and perhaps that’s a compromise the FCC will reach with those who want more unlicensed spectrum: make 3.6 GHz more accommodating for licensed, flexible uses but in return open the DSRC band to unlicensed devices.


Either way, the FCC has an opportunity to liberalize the use of the DSRC band. Grand plans for DSRC didn’t work out and hopefully the FCC can repurpose that spectrum for flexible uses, either licensed or unlicensed.


 


 


*Technically, the GAA devices in the CBRS band are non-exclusive licenses, but the rules intentionally resemble an unlicensed framework.

Published on March 15, 2018 08:15

March 8, 2018

Good Intentions Risk Changing the Internet (and Not Just for the Better)

While the Net Neutrality debate has been in the foreground, Congress has been quietly moving forward legislation that risks fundamentally modifying the liability protection for Internet intermediaries like Facebook, Google, and PayPal, and forever changing the Internet. The proposed legislation has the good intention of stopping sex trafficking, but in an effort to stop a few bad actors, the current overly broad version of the bill risks not only stopping the next Internet innovation but also failing to achieve even this laudable goal.



Where Are We Now: A Legislative Update


As I have written earlier, the House and the Senate each introduced bills nobly aimed at preventing and fighting sex trafficking. The House bill, FOSTA, was amended during the committee process, and those significant changes minimized many of the most concerning elements of the original version of the legislation. The bill still had many flaws, including standards that remained vague and did not account for a website's size, but it was generally applauded as a significant step toward achieving its goal while minimizing the damage to free expression on the Internet. The Senate bill, SESTA, retained many of the problems of the initial FOSTA bill. Before the House voted, FOSTA was amended to include all elements of SESTA, both good and bad. The bill with SESTA attached passed the House and now proceeds to the Senate, where a vote is expected next week.


The Continuing Problems of FOSTA/SESTA


According to Internet law professor Eric Goldman, the House-passed FOSTA unfortunately now represents the worst of both worlds and could have far-reaching implications not just for those engaged in detestable practices but also for advocates, social media, and free speech online more generally. The current version of the bill has also been criticized by many, including not only the tech community but also prosecutors at the Department of Justice.


There are at least three primary issues remaining in the FOSTA/SESTA legislation as proposed.


First, it could make the problem of identifying and rescuing victims more difficult for advocates, for a few reasons. As law professor Ariel Levy has pointed out, even if the bill succeeds in removing sex trafficking online, it will only push the true perpetrators of these acts further underground, making it harder for those seeking to monitor and prosecute such crimes to find victims. It also risks silencing the spread of information meant to help victims, due to broad language in the law and the difficulty companies would have in distinguishing such messages. Finally, the law does not distinguish forced from voluntary transactions. Advocates for sex workers have expressed concerns that the law would prevent the sharing of information that has increased safety.


Second, it could actually make it more difficult for prosecutors to go after the perpetrators of these crimes. The Department of Justice letter points out that vague language such as "participation in a venture" will make it harder to prosecute wrongdoers. As I have previously discussed, prosecutors have the tools and should be encouraged to use them. Mike Masnick recently pointed out that while the bill creates a new crime, it is already illegal to engage in and advertise sex trafficking. The current vagueness and the imposition of new liability on third parties not actively engaged in trafficking could make it harder for prosecutors to use the tools they have to go after the actual traffickers.


Finally, as Rep. Justin Amash questioned in the immediate aftermath of its passage, the bill as currently written could easily be interpreted as allowing ex post facto liability and prosecutions. The version passed by the House expressly allows the prosecution of actions that would have been illegal under the law even if those actions occurred years before its passage. If such provisions were enforced, it's plausible the courts could find the statute facially unconstitutional.


Potential Solutions


Section 230 immunity has allowed the Internet to flourish for over 20 years. Without such protections, it is unlikely that many communities built on user-generated content, like social media sites or messaging services, would have developed. Since the Senate has not yet voted on the bill, there is still time to leave Section 230 as it currently functions or to adopt amendments that could minimize the risks described above.


First, as suggested by the Department of Justice letter, attention should be given to the vague definition of "participation" to limit the application of the law to only those who actively engage in such acts. The current language means that a search engine, payment processor, or social media site could be found liable for even a single transaction by a user. Clear definitions are particularly important given that they affect not only civil liability but also the creation of a new crime.


Second, the intent requirements could be raised to limit the law only to those with truly bad intentions and to protect Good Samaritan actors who accidentally make a mistake. The current version has a relatively low threshold for liability. A recent Wall Street Journal editorial pointed out that to bring a lawsuit, an attorney would only need to show that the website "should have known," not that it actually knew, that this behavior was going on. As a result, intermediaries are most likely to engage in aggressive censorship. This could result in wrongfully silencing advocates, as discussed above. Of course, others could choose not to moderate at all out of a fear that they will be found to have knowledge. Ideally, a provision to protect moderator actions and a heightened mens rea requirement would minimize these risks.


Third, remove any ex post facto applications of the statute. A website could not have taken steps to comply with a law before that law existed, so it should only reasonably be held liable for actions that occur after the law's passage. Even for seemingly innocuous social media websites like Facebook or search engines like Google, the new standard would require devoting significantly more resources to monitoring than they already do. Given that the law would undo two decades of status quo for moderation, providing intermediaries a few months to ensure they have the necessary resources seems a reasonable change.


Section 230 has worked to allow the Internet to flourish in ways that could not have been predicted 20 years ago. Any changes to Section 230 liability protection are likely to have far-reaching implications for the Internet and innovation. While these changes may be brought with good intentions, they risk fundamentally changing the nature of new communications tools and doing quite a bit more than just targeting bad actors.

Published on March 08, 2018 07:07
