Adam Thierer's Blog
December 15, 2017
Revised FOSTA is a big improvement over SESTA—but still not perfect
The House version of the Stop Enabling Sex Trafficking Act (SESTA), called the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), has undergone significant changes that appear to enable it both to truly address the scourge of online sex trafficking and to maintain the important internet liability protections that encourage a free and open internet. On Tuesday, this amended version passed the House Judiciary Committee. Like most legislation, this latest draft isn’t perfect, but it takes significant steps toward maintaining freedom online while addressing the misdeeds of a few.
The Good
First, the new version creates a new crime targeting online sex traffickers and those wrongdoers who intentionally promote or facilitate their actions. Earlier versions of the House and Senate sex trafficking bills created mens rea, or state-of-mind, problems: a website was compelled to engage in strict moderation for fear of something “falling through the cracks,” rather than being encouraged to moderate in good faith. The new FOSTA proposal substitutes a higher standard, which largely obviates these concerns.
The revised bill also clearly focuses on sex trafficking and online prostitution rather than attacking potential “bad actions” online more generally. Even so, some are concerned about the impact this revised focus may have on consensual transactions or protected (even if objectionable) speech. However, combined with the creation of a new crime under the Mann Act, it appears to remove most of the early concerns that the new law could be applied too broadly and chip away at Section 230. Indeed, the language of the new bill makes it clear that Section 230 was “never intended to provide legal protection to websites that unlawfully promote and facilitate prostitution and contribute to sex trafficking.”
The revised bill creates civil liability only when a violation of the new criminal law has already occurred. This prevents someone from going after an intermediary merely because it has “deeper pockets” than the actual perpetrators. By requiring that an intermediary also be guilty of a criminal violation, the bill limits the likelihood that such suits would succeed except in cases where the website knowingly facilitated or actively encouraged violations of the law.
Finally, the revised FOSTA relies on a national standard instead of a patchwork of state law claims. Given the truly global nature of the Internet, this gives intermediaries greater certainty about the standard under which they will be held liable.
The Remaining Questions/Concerns
The current version of the bill uses a threshold of five or more victims for the new criminal enhancement. But there is a problem with using a raw number of victims, as Eric Goldman points out. He lays out a thought experiment: suppose a large website like Google or Facebook has 0.01% of its usage dedicated to prostitution. For a platform with roughly a billion users, that’s about 100,000 people. Goldman points out that even if these companies were 99.99% compliant in taking down this activity—a worthy feat, to be sure—some would surely still fall through the cracks. The 5+ standard could make the social platforms look like “hotbeds of prostitution activity” despite their best intentions. A simple solution would be to switch from a raw number of “victims” to a percentage of users or revenues before attaching criminal or civil liability.
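The arithmetic behind Goldman’s thought experiment is worth making explicit. Here is a minimal sketch in Python; the platform size and takedown rate are illustrative assumptions for the sketch, not figures specified in the bill:

```python
# Back-of-the-envelope version of Goldman's thought experiment.
# The platform size and takedown rate are illustrative assumptions.

users = 1_000_000_000        # assume a platform with ~1 billion accounts
share_prostitution = 0.0001  # 0.01% of usage tied to prostitution
takedown_rate = 0.9999       # platform removes 99.99% of that activity

offending_users = users * share_prostitution               # -> 100,000
slipping_through = offending_users * (1 - takedown_rate)   # -> ~10

print(f"Offending users:      {offending_users:,.0f}")
print(f"Slipping through:     {slipping_through:,.0f}")
print(f"Exceeds 5+ threshold? {slipping_through >= 5}")
```

Even at 99.99% compliance, roughly ten instances remain, double the five-victim threshold.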
Additionally, there are some concerns about whether the new law could still make things worse for victims. As one advocate wrote, putting victims on the street rather than online may make them much more likely to be subject to violence and may make it more difficult to identify and assist trafficking victims. Unfortunately, the dangers and harms associated with trafficking and sex work cannot be resolved by a single bill.
Like most legislation, FOSTA is not perfect, but the current version does avoid the most damaging elements of earlier iterations. The changes also show that legislators are becoming aware of the possible unintended consequences that broader legislation could lead to.
December 13, 2017
Commissioner Brendan Carr on Title II and stifling Internet innovation
In 2015, after White House pressure, the FCC took the radical step of classifying “broadband Internet access service” as a heavily regulated Title II service. Title II was created for the AT&T long-distance monopoly and the telegraph network; “promoting innovation and competition” was never its purpose. It is ill-suited for the modern Internet, where hundreds of ISPs and tech companies are experimenting with new technologies and topologies.
Commissioner Brendan Carr was gracious enough to speak with Chris Koopman and me in a Mercatus podcast last week about his decision to vote to reverse the Title II classification. The podcast can be found at the Mercatus website. One highlight from Commissioner Carr:
Congress had a fork in the road. …In 1996, Congress made a decision that we’re going to head down the Title I route [for the Internet]. That decision has been one of the greatest public policy decisions that we’ve ever seen. That’s what led to the massive investment in the Internet. Over a trillion dollars invested. Consumers were protected. Innovators were free to innovate. Unfortunately, two years ago the Commission departed from that framework and moved into a very different heavy-handed regulatory world, the Title II approach.
Along those lines, in my recent ex parte meeting with Chairman Pai’s office, I pointed to an interesting 2002 study in the Review of Economics and Statistics from MIT Press about the stifling effects of Title II regulation:
[E]xisting economics scholarship suggests that a permissioned approach to new services, like that proposed in the [2015] Open Internet Order, inhibits innovation and new services in telecommunications. As a result of an FCC decision and a subsequent court decision in the late 1990s, for 18 to 30 months, depending on the firm, [Title II] carriers were deregulated and did not have to submit new offerings to the FCC for review. After the court decision, the FCC required carriers to file retroactive plans for services introduced after deregulation.
This turn of events allowed economist James Prieger to compare the rate of new service deployment in the regulated period and the brief deregulated period. Prieger found that “some otherwise profitable services are not financially viable under” the permissioned regime. Critically, the number of services carriers deployed “during the [deregulated] interim is 60%-99% larger than the model predicts they would have created” had preapproval been required. Finally, Prieger found that firms would have introduced 62% more services during the entire study period had there been no permissioned regime. This is suggestive evidence that the Order’s “Mother, May I?” approach will significantly harm the Internet services market.
Thankfully, this FCC has incorporated economic scholarship into its Restoring Internet Freedom Order and will undo the costly Title II classification for Internet services.
November 30, 2017
3 Reforms to Translate Permissionless Innovation into Public Policy
Over at Plain Text, I have posted a new essay entitled, “Converting Permissionless Innovation into Public Policy: 3 Reforms.” It’s a preliminary sketch of some reform ideas that I have been working on as part of my next book project. The goal is to find some creative ways to move the ball forward on the innovation policy front, regardless of what level of government we are talking about.
To maximize the potential for ongoing, positive change and create a policy environment conducive to permissionless innovation, I argue that policymakers should pursue policy reforms based on these three ideas:
The Innovator’s Presumption: Any person or party (including a regulatory authority) who opposes a new technology or service shall have the burden to demonstrate that such proposal is inconsistent with the public interest.
The Sunsetting Imperative: Any existing or newly imposed technology regulation should include a provision sunsetting the law or regulation within two years.
The Parity Provision: Any operator offering a similarly situated product or service should be regulated no more stringently than its least regulated competitor.
These provisions are crafted in a somewhat generic fashion in the hope that these reform proposals could be modified and adopted by various legislative or regulatory bodies. If you are interested in reading more details about each proposal, jump over to Plain Text to read the entire essay.
November 8, 2017
Amended SESTA Clears Committee: What’s Changed So Far and How It Impacts Section 230
As I have previously written, a bill currently up for debate in Congress runs the risk of gutting critical liability protections for internet intermediaries. Earlier today the Stop Enabling Sex Traffickers Act passed out of committee with an amendment that attempts to remedy some of the most damaging changes to Section 230 in the original act. While this amendment has gained support from some industry groups and shows increased awareness of the act’s far-reaching consequences, it does not fully address the concerns about intermediary liability under Section 230, leaving issues that could chill speech on the internet and risk stifling future internet innovation.
Good Samaritan Provision
As Eric Goldman points out, the amended version expressly retains part of the Good Samaritan provisions of Section 230 for removals, but it still enables new liability for user publications. As a result, the new amendment only partially preserves the Good Samaritan mechanism and does not fully address concerns about good faith attempts to avoid the new liability.
Knowledge Standard
The amended version that cleared committee clarifies the knowledge standard of the earlier bill by stating that for liability to attach, the intermediary must have participated by “knowingly assisting, supporting, or facilitating a violation.” This improves but does not fully mitigate the damage that the new liability could do to free speech online. As EFF writes, “facilitate” legally means “to make easier or less difficult,” a definition that would sweep in a huge swath of innocuous products, websites, and activities.
This standard is particularly dangerous for online dating and messaging services. For example, if a trafficker used a messaging service to communicate, this could be seen as facilitation because the service made it easier to communicate. Dating services that set up meetings could likewise be seen as facilitators if a bad actor used their service to conduct human trafficking. As Mike Masnick at TechDirt argues, there is little certainty in the amended version about what “knowingly” means, and it may be as low a standard as general knowledge or media reports that your website was at some point used (or allegedly used) by sex traffickers.
The Retroactivity Provision
The new version does not clear up concerns about retroactivity. In both the amended and original versions, the bill states that it applies “regardless of whether the conduct alleged occurred, or is alleged to have occurred, before, on or after such date of enactment.” As a result, companies are exposed to civil and criminal liability for conduct that carried no such liability when it occurred.
While the amended SESTA signals a recognition that the bill needs to be more narrowly tailored, it leaves internet intermediaries with the same two choices if enacted.
The first option for intermediaries would be to engage in an aggressive takedown process like the one they use for copyright claims under the DMCA: take down questioned content first, and ask later whether it should have been taken down. As Masnick notes, however, the DMCA has much clearer provisions for when content must be taken down, and yet false claims remain rampant. The situation would only be worse under SESTA, especially for social media, search engines, and dating websites. Some websites might choose to quit operating rather than engage in the high level of moderation that would be necessary. Because the bill applies to companies of all sizes, without any limitation based on the number of staff or users, it is more likely to have a negative impact on smaller or more innovative services that might one day become the next Facebook or Google. These companies do not have the manpower to aggressively monitor their user base and, facing greater compliance burdens, may have more difficulty entering the market. Those that did continue would be only those that could afford to devote large numbers of staff and legal resources to monitoring and determining the accuracy of claims.
The second option is to avoid the cooperation and self-monitoring that websites engage in now. In a recent interview regarding Russian election ads, Senate Majority Leader Mitch McConnell stated that tech should be “more interested in cooperating with law enforcement.” SESTA, however, provides the opposite incentive: cooperation in investigations would demonstrate knowledge and open the intermediary up to further civil liability. As a result, intermediaries might be discouraged from future cooperation.
The amended version will now head to the Senate floor for debate. The bill has a noble goal of making sex trafficking more difficult and this revised version shows progress towards protecting intermediary liability. Still, it addresses the issue more broadly than needed and risks fundamentally changing the internet.
October 18, 2017
Are autonomous vehicles already starting to disrupt the auto insurance market?
Tesla, Volvo, and Cadillac have all released vehicles with features that push beyond standard Level 2 automation toward a Level 3 “self-driving” system, where the driver still needs to be present but the car can do most of the work. While there have been some notable accidents, most were tied to driver error or behavior, not the technology. Still, autonomous vehicles hold the promise of reducing traffic accidents by more than 90% if widely adopted. Fewer accidents and a reduced role for human error, however, could change the function and formulas of the auto insurance market.
Tesla’s semi-autonomous Autopilot has been on the market for over a year, and insurance companies have responded to the new technology in various ways. Some insurers, like the innovative company Root, have recognized the safety benefits of even semi-autonomous technology and offered drivers of such vehicles a discount on their premiums for its use. Others are likely charging higher rates, pointing to a 2017 AAA recommendation noting that even with enhanced Autopilot, Tesla owners have filed more claims, and those claims tend to be more expensive. More generally, insurers do not yet seem to have factored the potential benefits of semi-autonomy into their rates, focusing instead on the costs of the vehicles and repairs. But as more cars with semi-autonomous features hit the road, consumers will likely demand new products. As cars become safer, the insurance market is likely to shrink, with at least one report estimating that accidents would fall by 40% if even the current level of Autopilot were widely adopted.
Since most states require drivers to carry insurance on their vehicles, the insurance market must provide suitable products before widespread adoption of driverless cars can occur. At the same time, insurance is typically regulated at the state level, allowing widespread experimentation before national norms emerge. This universal insurance requirement distinguishes driverless cars from other disruptive technologies, such as artificial intelligence or 3-D printing; for widespread adoption to be possible, the insurance market or its related government policies must adapt sooner rather than later.
It is becoming increasingly clear that autonomous vehicles will disrupt the insurance industry even before they become the majority of vehicles on the road. As a June report from KPMG noted, autonomous technology has progressed more rapidly than anticipated, and as a result the auto insurance industry may arrive at a “chaotic middle” sooner than expected. If the industry fails to adapt, the report notes, the auto insurance sector could shrink by almost $137 billion. Some of the earliest adaptations will still rely on individual car owners, but as the technology is likely to change car ownership entirely, new insurance policies and coverage will need to evolve. If auto insurance companies choose, as many do today, not to adapt to these changes, they may find themselves displaced by a new industry that does.
Innovators recognize that autonomous vehicles may disrupt the existing auto insurance market and in some cases are seeking partners to determine how to develop policies to embrace and encourage acceptance of the new product. Working directly with innovators is likely to allow insurers to offer the products needed to allow both widespread adoption of the new technology and create or maintain competitive policies as the auto insurance sector changes. Such collaboration is probably most useful in the ride-sharing space where the lines between corporate and individual owners can be blurry.
Another potential disruption to the insurance market would be for the manufacturer itself to offer or hold the insurance on the vehicle. Tesla is already trying such an all-in-one price in some Asian markets. Tying insurance to the cost of the vehicle would cut out the middleman, but it may also offer consumers less choice. Still, such options are likely to become more appealing as the technology, rather than the human driver, becomes the risk being insured.
These early attempts to respond to semi-autonomous vehicles show that some stakeholders are already trying to avoid a chaotic middle period. Just as with technology itself, the accompanying insurance market will likely have to go through experimental design and trial and error before arriving at a new market equilibrium. If the auto insurance industry does not respond to increasingly autonomous vehicles, it runs the risk of negative consequences including:
As previously discussed, the industry itself would shrink significantly;
A lack of flexibility and adaptability deepens the chaotic middle, in which courts, consumers, insurers, and regulators are uncertain about who and what is covered and where responsibility for damages should lie;
Missing the opportunity to expand their services into new technologies or create new products that respond to consumer preferences in light of these technological changes.
It will be interesting to watch whether the current players in the auto insurance market can adapt to these challenges or whether new entrepreneurial disruptors emerge along with the new technology.
October 16, 2017
Why the FCC silence on NBC license challenges? Other priorities, and it’s not up to them.
Broadcast license renewal challenges have troubled libertarians and free speech advocates for decades. Despite our efforts (and our law journal articles on the abuse of the licensing process), license challenges are legal. In fact, political parties, prior FCCs, and activist groups have encouraged license challenges based on TV content to ensure broadcasters are operating in “the public interest.” Further, courts have compelled, and will compel, a reluctant FCC to investigate “news distortion” and other violations of FCC broadcast rules. It’s a troubling state of affairs that has been pushed back into relevance now that FCC license challenges are in the news.
In recent years the FCC, whether led by Democrats or Republicans, has preferred to avoid tricky questions surrounding license renewals. Chairman Pai, like most recent FCC chairs, has been an outspoken defender of First Amendment protections and norms. He opposed, for instance, the Obama FCC’s attempt to survey broadcast newsrooms about their coverage. He also penned an op-ed bringing attention to the fact that federal NSF funding was being used by left-leaning researchers to monitor and combat “misinformation and propaganda” on social media.
The silence of the Republican commissioners today about license renewals is likely primarily because they have higher priorities (like broadband deployment and freeing up spectrum) than intervening in the competitive media marketplace. But a second, less understood reason is that whether to investigate a news station isn’t really up to them. Courts can overrule them and compel an investigation.
Political actors have used FCC licensing procedures for decades to silence political opponents and unfavorable media. For reasons I won’t explore here, TV and radio broadcasters have diminished First Amendment rights and the public is permitted to challenge their licenses at renewal time.
So, progressive “citizens groups” have challenged broadcasters’ license renewals, even in recent years, for “one-sided programming.” Unfortunately, it works. For instance, in 2004 the prospect of multi-year renewal challenges from outside groups, and the risk of payback from a Democratic FCC, forced broadcast stations to trim a documentary critical of John Kerry from 40 minutes to 4 minutes. And, unlike their cable counterparts, broadcasters censor nude scenes in TV and movies because even a Janet Jackson Super Bowl scenario can lead to expensive license challenges.
These troubling licensing procedures and pressure points were largely unknown to most people, but, on October 11, President Trump tweeted:
With all of the Fake News coming out of NBC and the Networks, at what point is it appropriate to challenge their License? Bad for country!
— Donald J. Trump (@realDonaldTrump) October 11, 2017
So why hasn’t the FCC said they won’t investigate NBC and other broadcast station owners? It may be because courts can compel the FCC to investigate “news distortion.”
This is exactly what happened to the Clinton FCC. As Melody Calkins and I wrote in August about the FCC’s news distortion rule:
Though uncodified and not strictly enforced, the rule was reiterated in the FCC’s 2008 broadcast guidelines. The outline of the rule was laid out in the 1998 case Serafyn v. CBS, involving a complaint by a Ukrainian-American who alleged that the “60 Minutes” news program had unfairly edited interviews to portray Ukrainians as backwards and anti-Semitic. The FCC dismissed the complaint, but the D.C. Circuit reversed that dismissal and required FCC intervention. (CBS settled and the complaint was dropped before the FCC could intervene.)
The commissioners might personally wish broadcasters had full First Amendment protections and want to dismiss all challenges, but current law permits and encourages license challenges. The commission can be compelled to act because of the sins of omission of prior FCCs: deciding to retain the news distortion rule and other antiquated “public interest” regulations for broadcasters. The existence of these old media rules means the FCC’s hands are tied.
October 10, 2017
A Guide on Breaking Into Technology Policy
In recent months, I’ve come across a growing pool of young professionals looking to enter the technology policy field. Although I was lucky enough to find a willing and capable mentor to guide me through a lot of the nitty gritty, a lot of these would-be policy entrepreneurs haven’t been as lucky. Most of them are keen on shifting out of their current policy area, or are newcomers to Washington, D.C. looking to break into a technology policy career track. This is a town where there’s no shortage of sage wisdom, and while much of it still remains relevant to new up-and-comers, I figured I would pen these thoughts based on my own experiences as a relative newcomer to the D.C. tech policy community.
I came to D.C. in 2013, originally spurred by the then-recent revelations of mass government surveillance revealed by Edward Snowden’s NSA leaks. That event led me to the realization that the Internet was fragile, and that engaging in the battle of ideas in D.C. might be a career calling. So I packed up and moved to the nation’s capital, intent on joining the technology policy fray. When I arrived, however, I was immediately struck by the almost complete lack of jobs in, and focus on, technology issues in libertarian circles.
Through a series of serendipitous and fortuitous circumstances, I managed to break into what was still a small and relatively under-appreciated field. What we lacked in numbers and support we had to make up for in quality and determined effort. Although the tech policy community has grown rapidly in recent years, it remains a niche vocation relative to other policy tracks. That means there’s a lot of potential for rapid professional growth—if you can manage to get your foot in the door.
So if you’re interested in breaking into technology policy, here are some thoughts that might be of help.
Adapting to the Shifting Sands
My own mentor, Mercatus Senior Fellow Adam Thierer, wrote what I consider the defining guide to breaking into the technology policy arena. Before jumping into the depths of policy, I used his insights in that article to help wrap my head around the ins-and-outs of this field. The broad takeaway is that you should learn from those who came before you. Intellectual humility is important in any profession, and tech policy is no different. Even in this still-young and growing field, there’s an exceptionally robust body of work that is worth parsing through. That means, first and foremost, reading. A lot.
Many of these pieces are going to touch on a broad range of disciplines. Law review articles, technical analyses, regulatory comments, and economic research play an important role in informing the many and varied debates in the tech policy field. While a degree in law or economics isn’t a prerequisite for working in this space, you’ll definitely need to do your homework. Having an understanding of the interdisciplinary work being done in tech policy can be the difference between a good analyst and a great analyst.
Distinguishing yourself in the field also requires embracing the inherent dynamism of this issue space. Things can change a lot, and quickly. The rate of technological change in the modern era is rapid and unceasing—changes that are reflected in the policy arena. If you’re going to keep up with the pace, you’ll not only have to consistently read (a lot), you’ll have to be passionate about the learning. For some, that may be daunting; for those who live for perpetual motion in policy, it can be exciting and energizing. If you’re uncomfortable with that level of dynamism and prefer something a bit more certain and steady, then this probably isn’t the career track for you.
If you yearn for the constantly shifting sands, however, then you’re going to have to read, read, read, and then read some more.
Once you’ve done the reading, you’ll have to start thinking about how, or whether, you want to specialize. Adam notes this explicitly in his piece: specialization matters. I tend to agree. However, what you decide to specialize in is less straightforward. Because this field is ever-changing, the opportunities for specialization are also changing, with a lot of issues intermingling with one another and blurring the lines of previously distinct areas.
Telecommunications, for example, is technically an area of specialization for tech policy. However, even that category has become quite broad and now very often overlaps with newer emerging technology issues. As an example, working on spectrum issues—previously the purview of analysts looking at the traditional media marketplace (television, radio, etc.)—now involves a host of other non-telecommunications issues, such as autonomous and connected vehicles, small microcube satellite constellations delivering Internet service, low-altitude commercial drone traffic management, and much more. Specialization just isn’t what it used to be, and as the policy landscape continues to change relative to the emergence of new technologies, would-be tech policy analysts will need to be flexible and adaptive in considering what issues merit engagement.
In short, read with an eye towards specializing, but be prepared to adapt when things change; and when they inevitably do, get ready to read some more and specialize anew.
Understanding the Political Landscape
You may already have strongly-held political opinions. Then again, maybe not. Either way, it’s important to understand the who’s who of this space, where they come down on their philosophical approaches to technology governance, and how each ideological tribe thinks about the issues. Because tech policy doesn’t elicit the same type of partisanship more commonly associated with traditional issues like health policy and labor policy, you may be surprised to discover who your common bedfellows are.
There are some issue-specific exceptions to this. The debate over Net Neutrality comes to mind as a particularly controversial flashpoint, largely divided down partisan lines. In general, however, there’s relatively little hyper-partisanship in technology policy debates. Technological progress and innovation are generally viewed positively across the political spectrum. As a result, the discussions surrounding issues like AI, autonomous vehicles, and other emerging technologies seldom involve disagreement over whether such advances should be permitted—though again, there are exceptions—and instead boil down to issues related to the specific regulations that will govern their deployment. Ultimately, the discourse tends to gravitate towards the political center and disagreements are largely confined to issues over regulatory governance: the variety (what types of rules), source (who governs), and magnitude (how restrictive or permissive) of regulations. To figure out where your sympathies lie, you’ll first need to make sense of the political terrain by identifying the major players in technology policy circles.
To that end, I definitely suggest you take a look at this great landscape analysis from Rob Atkinson, the president of the Information Technology and Innovation Foundation. Rob classifies the tech policy crowd into 8 camps:
Cyber-Libertarians believe the Internet can get along just fine without the nations, institutions, and other “weary giants of flesh and steel” of the pre-Internet world;
Social Engineers are proponents of the Internet’s promise as an educational and communications tool, but tend to downplay its economic benefits;
Free Marketers believe in the Internet’s power as a liberating force for markets and individuals, and are generally skeptical of government involvement;
Moderates are “staunchly and unabashedly” in favor of technological developments, but are supportive of government involvement in promoting and accelerating these developments;
Moral Conservatives tend to view the Internet and emerging technologies as nefarious dens of vice that are accelerating the decline of traditional cultural norms and etiquette, and are supportive of government efforts to reverse that decline; and
Old Economy Regulators don’t believe there is anything unique about these new technological tools, and believe restrictive pre-Internet regulatory frameworks can work just as well when applied to these new digital technologies.
Rob also ropes in the “Tech Companies and Trade Associations” and “Bricks and Mortars” groups, but I leave these aside, as they tend to fall slightly outside the traditional policy analysis space associated with nonprofits, academic institutions, and advocacy groups. Going by Rob’s classification, I used to oscillate between the “Cyber-Libertarian” and “Free Marketer” tribes. In recent years, however, I’ve moved quite solidly into the “Moderate” camp.
Wherever you think you fall, be sure not to ignore the work of “non-aligned” organizations and individuals—the best tech policy analysts are those who know both sides of a debate inside and out. Getting to know the major dividing lines between these groups is key to understanding the nuances involved in tech policy debates, and Rob’s piece is an excellent starting point for newcomers to get a sense of where these disagreements rest.
Framing the Issues
As discussed previously, one of the defining characteristics of this policy field is its dynamic nature. An issue you thought you had nailed down on Monday could be completely flipped on its head by Friday. That’s why it’s so important to consider how you think about these issues. A general framework or taxonomy will help, and different analysts think about these issues differently.
For example, some people look at technology issues through the lens of privacy; others, through the lens of cybersecurity. Personally, I think that single-issue lenses tend to miss the fundamentally multi-faceted nature of this issue space. That’s why I look at tech policy through not a lens, but a kaleidoscope, with each emerging technology presenting unique privacy, cybersecurity, safety, regulatory, and economic challenges and benefits.
All emerging technologies present balancing concerns between these equities. Autonomous vehicles will undoubtedly save lives, but may present greater concerns for privacy and cybersecurity. Commercial drones could likely decrease the costs for delivering goods or open up a renaissance in air transportation, but regulatory barriers and safety concerns present formidable obstacles to adoption. In short, I don’t think there’s any one “lens” through which it’s best to see these technologies. How you decide to approach an issue should ultimately be governed by how you balance the many tradeoffs associated with a new technology, and whether you prefer to use a “lens” or a “kaleidoscope.”
At the Niskanen Center, that “kaleidoscope” approach involves employing a framework that touches on four general issue “buckets”: Regulatory Governance, Emerging Technologies, the Digital Economy, and Cyber Society.
“Regulatory Governance” focuses on an examination of how rules and regulations can manage new emerging technologies. This bucket informs our basic principles and overarching perspective on technology policy (best encapsulated as support for a “soft law” regime), and directly informs our engagement on specific “Emerging Technologies,” such as genomics, AI, autonomous vehicles, and other emerging technologies.
The other two buckets—”The Digital Economy” and “Cyber Society”—involve areas in which there is a much greater degree of overlap and intermingling (copyright, “Future of Work” issues, online free speech, digital due process, government surveillance, etc.). These are areas where the lines between tech policy and other, more traditional policy work are much “fuzzier.” This leads us to an important point worth addressing if you’re thinking about jumping into this field: what is, and is not, tech policy?
Thinking About What Isn’t Tech Policy
Different analysts and scholars will disagree about the contours here, so I’ll caveat my thoughts on the “not-tech policy” space by noting that these are purely my own biases. What I consider “tech policy” will probably differ from what other individuals and organizations would group under that header. A lot can be said here, so I’ll just focus on one particular area that is often grouped under the tech policy banner, but which I would not consider tech policy proper: the gig economy.
Take Uber. Uber is a smartphone app. In that sense, it’s technology. However, the issues affected by its use are more relevant to labor, tax, welfare, and traditional regulatory policy analysis—the role of contract work in society, tax classification for part-time laborers, portability of benefits, and barriers to market entry, for example. Although the regulatory component is definitely an issue related to tech policy, it’s not clear that the regulatory issues are technology-specific. This makes for reasonable disagreement about whether gig economy issues, which would also include services like Airbnb and TaskRabbit, are appropriately classified as primarily technology policy.
Ultimately, I see the gig economy as an area that is fundamentally about connecting unused or under-utilized capital to higher-value uses (in the case of Uber, connecting vehicles that would otherwise remain idle with passengers looking for transportation services). While the underlying technology that makes much of the gig economy possible (smartphone apps and digital communications technology) gives the appearance that these issues are actually about technology, the real policy implications are less technology-specific than other areas of tech policy, such as AI, the Internet of Things, autonomous vehicles, and commercial drones.
That having been said, there are plenty of cases to be made for tech policy to include the gig economy. The takeaway here, however, is that technology is eating the modern world, and pretty much all traditional policy spaces are now, in some respect, intertwined with tech policy. As such, we have to draw a dividing line somewhere; otherwise, “technology policy” loses any substantive meaning as a distinct field of study.
So if you’re thinking about a career in tech policy broadly, but have a particular interest in, e.g., gig economy issues, it’s worth asking what precisely draws you to the issue. If you’re primarily interested in its impact on labor markets, taxes, or regulatory barriers, then tech policy might not be what you had in mind.
Next Steps
So after you’ve read a bit, focused in on an area of interest, developed a sense of the lay of the political landscape, and put some thought into how you think about framing your analytical approach, what next? Eli Dourado, formerly the director of the Technology Policy Program at Mercatus and now the head of global policy and communications at Boom, offered some succinct thoughts on actually getting involved in this field.
“First, get started now.”
Just start doing technology policy.
Write about it every day. Say unexpected things; don’t just take a familiar side in a drawn-out debate. Do something new. What is going to be the big tech policy issue two years from now? Write about that. Let your passion show.
The tech policy world is small enough — and new ideas rare enough — that doing this will get you a following in our community.
“Second, get in touch.”
These are both great pieces of advice. If you’re really interested in jumping into tech policy, then you’re going to want to start writing. Read as much as you can and get up to speed on the issues that interest you. Then start blogging and editorializing your thoughts. These days, the costs of starting your own blog are primarily just your time and effort, and there are plenty of easy-to-use and free services out there that you can take advantage of.
Once you’ve started writing, start connecting with a wider audience via Twitter, Facebook, and other social media platforms. But don’t limit yourself to cyberspace forums. Reach out to established analysts by email and get their thoughts and feedback. Networking is key, and if you’re not doing it, you’re only doing half the work. You might have the greatest tech policy thoughts since Marc Andreessen wrote “Software Is Eating the World” (which, incidentally, you should add to your reading list), but if no one is reading your work, it doesn’t really matter. Just as you need to read, read, read, so too should you network, network, network, and then network some more.
Reach out, and get in touch with people in the field—especially those of us in D.C. If you’re serious about your craft and you’re putting in the time and effort to position yourself as a young tech policy professional, there are plenty of us who are more than happy to have a conversation with you. Indeed, like a lot of people in this field, I couldn’t have made it to where I am if not for the willingness of more established professionals like Adam taking the time to chat with me.
So reach out, network, and engage with those scholars and analysts whose work you follow. A casual conversation could very easily be the beginning of a new career in tech policy.
Concluding Thoughts
So if after reading all that you’re still considering a career in tech policy, here are some final thoughts for consideration.
First, be open to the possibility that you may be wrong.
Tech policy debates involve a lot of nuance, but there’s also a lot of surprising agreement. Given the constant evolution of technology, at some point you’ll undoubtedly be confronted with a scenario in which you need to reassess your priors. (I’ve had to learn this lesson the hard way on the issue of surveillance. Just take a look at some of my writings earlier in my career and compare them with more recent pieces.) You shouldn’t constantly sway with the winds of compromise, but nor should you see every policy battle as a hill worth dying on.
Second, there’s no such thing as too much reading or networking.
This is worth reiterating, over and over, because it’s important, and there’s no shortcut here. There’s always more to read to get up to speed on tech issues, and chances are you’ll never know it all. So read, read, read, and when you’ve had enough of reading, try switching it up with some outreach and networking. There’s a fair number of people working in tech policy, but it’s still a relatively small, close-knit community. Once you meet a handful of people, it’s easy enough to catapult yourself to introductions to the rest of us. Jobs in tech policy, especially in D.C., are still tough to come by, but it’s a growing field, and the more people you know, the more likely you’ll be well-positioned to take advantage of opportunities.
Finally, have something to say.
This point is worth an anthology all its own, and cannot be over-emphasized: don’t be a policy parrot. Have something to say—not just something to say, but something new and unique. That counts doubly for having actual policy solutions. There are plenty of people who default to the “let’s have a conversation” school of thought—don’t be one of them. Your job as an analyst is to parse the details of a contentious issue and apply your expertise to provide real, actionable recommendations on the appropriate course of action. Offer real recommendations and actual solutions and you’ll set yourself apart from the run-of-the-mill tech policy analyst. Always remember: the difference between doing something right and doing nothing at all is doing something half-assed. Don’t be the half-assed tech policy parrot.
Don’t get discouraged; establishing your brand takes time. But if you’re serious about giving tech policy a go and you put in the effort, there will be opportunities to make a name for yourself. So read, write, reach out, and offer something unique to the discussion. If you can do that, the sky’s the limit.
September 25, 2017
new Mercatus paper on “Public Policy for Virtual and Augmented Reality”
The Mercatus Center at George Mason University has just released a new paper, “Permissionless Innovation and Immersive Technology: Public Policy for Virtual and Augmented Reality,” which I co-authored with Jonathan Camp. This 53-page paper can be downloaded via the Mercatus website, SSRN, or ResearchGate.
Here is the abstract for the paper:
Immersive technologies such as augmented reality, virtual reality, and mixed reality are finally taking off. As these technologies become more widespread, concerns will likely develop about their disruptive social and economic effects. This paper addresses such policy concerns and contrasts two different visions for governing immersive tech going forward. The paper makes the case for permissionless innovation, or the general freedom to innovate without prior constraint, as the optimal policy default to maximize the benefits associated with immersive technologies.
The alternative vision — the so-called precautionary principle — would be an inappropriate policy default because it would greatly limit the potential for beneficial applications and uses of these new technologies to emerge rapidly. Public policy for immersive technology should not be based on hypothetical worst-case scenarios. Rather, policymakers should wait to see which concerns or harms emerge and then devise ex post solutions as needed.
To better explain why precautionary controls on these emerging technologies would be such a mistake, Camp and I provide an inventory of the many VR, AR, and mixed-reality applications that are already on the market, or soon could be, and that could provide society with profound benefits. A few examples include:
Education and museums. Immersing users in virtual environments allows Google’s Expedition Pioneer Program to provide 360-degree video tours of famous landmarks and ruins, and museums are already using AR technology to provide interactive content.
Worker training and systems monitoring. VR industrial simulators such as ForgeFX are being used to train workers to master a variety of complex tasks, while AR systems can be leveraged to help farmers with crop management from afar.
Healthcare. CT scans and MRIs are being converted into 3-D models to perform surgery that was once thought impossible, and the world’s first VR medical training facility opened in London in November of 2016.
Engineering. Virtual modeling technology is being combined with VR to allow touring of unbuilt vehicles and buildings, lowering the costs of construction and design.
Military. The military has used VR for combat simulations, medic training, flight simulators, vehicle simulators, and even the treatment of PTSD.
And that just scratches the surface of some of the many exciting applications out there. The virtual sky is the limit with immersive tech — so long, that is, as we don’t derail these life-enriching technologies with misguided, fear-based public policy restrictions. Please read the paper for more details.
September 20, 2017
What is “broadband” speed and why does it matter?
Internet regulation advocates are trying to turn a recent FCC Notice of Inquiry about the state of US telecommunications services into a controversy. Twelve US Senators have accused the FCC of wanting to “redefin[e] broadband” in order to “abandon further efforts to connect Americans.”
Considering Chairman Pai and the Commission are already considering actions to accelerate the deployment of broadband, with new proceedings and the formation of the Broadband Deployment Advisory Committee, the allegation that the current NOI is an excuse for inaction is perplexing.
The true “controversy” is much more mundane: reasonable people disagree about what congressional neologisms like “advanced telecommunications capability” mean. The FCC must interpret and apply the indeterminate language of Section 706 of the Telecommunications Act, which requires the FCC to determine “whether advanced telecommunications capability is being deployed in a reasonable and timely fashion.” If the answer is negative, the agency must “take immediate action to accelerate deployment of such capability by removing barriers to infrastructure investment and by promoting competition in the telecommunications market.” The inquiry is reported in an annual “Broadband Progress Report.” Much of the “scandal” of this proceeding is confusion about what “broadband” means.
What is broadband?
First: what qualifies as “broadband” download speed? It depends.
The OECD says anything above 256 kbps.
ITU standards set it at above 1.5 Mbps (or is it 2.0 Mbps?).
In the US, broadband is generally defined as a higher speed. The USDA’s Rural Utilities Service defines it as 4.0 Mbps.
The FCC’s 2015 Broadband Progress Report found, as Obama FCC officials put it, that “the FCC’s definition of broadband” is now 25 Mbps. This is why advocates insist “broadband access” includes only wireline services above 25 Mbps.
But in the same month, the Obama FCC determined in the Open Internet Order that anything above dialup speed–56 kbps–is “broadband Internet access service.”
So, according to regulation advocates, 1.5 Mbps DSL service isn’t “broadband access” service but it is “broadband Internet access service.” Likewise a 30 Mbps 4G LTE connection isn’t a “broadband access” service but it is “broadband Internet access service.”
In other words, the word games about “broadband” are not coming from the Trump FCC. There is no consistency for what “broadband” means because prior FCCs kept changing the definition, and even use the term differently in different proceedings. As the Obama FCC said in 2009, “In previous reports to Congress, the Commission used the terms ‘broadband,’ ‘advanced telecommunications capability,’ and ‘advanced services’ interchangeably.”
Instead, what is going on is that the Trump FCC is trying to apply Section 706 to the current broadband market. The main questions are, what is advanced telecommunications capability, and is it “being deployed in a reasonable and timely fashion”?
Is mobile broadband an “advanced telecommunications capability”?
Previous FCCs declined to adopt a speed benchmark for when wireless service satisfies the “advanced telecommunications capability” definition. The so-called controversy is because the latest NOI revisits this omission in light of consumer trends. The NOI straightforwardly asks whether mobile broadband above 10 Mbps satisfies the statutory definition of “advanced telecommunications capability.”
For that, the FCC must consult the statute. Such a capability, the statute says, is technology-neutral (i.e. includes wireless and “fixed” connections) and “enables users to originate and receive high-quality voice, data, graphics, and video telecommunications.”
Historically, since the statute doesn’t provide much precision, the FCC has examined subscription rates for various broadband speeds and services. From 2010 to 2015, the Obama FCCs defined advanced telecommunications capability as a fixed connection of 4 Mbps. In 2015, as mentioned, that benchmark was raised to 25 Mbps.
Regulation advocates fear that if the FCC looks at subscription rates, the agency might find that mobile broadband above 10 Mbps is an advanced telecommunications capability. This finding, they feel, would undermine the argument that the US broadband market needs intense regulation. According to recent Pew surveys, 12% of adults–about 28 million people–are “wireless only” and don’t have a wireline subscription. Those numbers certainly raise the possibility that mobile broadband is an advanced telecommunications capability.
Let’s look at the three fixed broadband technologies that “pass” the vast majority of households–cable modem, DSL, and satellite–and narrow the data to connections 10 Mbps or above.*
Home broadband connections (10 Mbps+)
Cable modem – 54.4 million
DSL – 11.8 million
Satellite – 1.4 million
It’s hard to know for sure since Pew measures adult individuals and the FCC measures households, but it’s possible more people have 4G LTE as home broadband (about 28 million adults and their families) than have 10 Mbps+ DSL as home broadband (11.8 million households).
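For readers who want the arithmetic, here is a rough sketch of that comparison. The adult-population figure is an assumption for illustration, and because Pew counts adults while the FCC counts households, the output is suggestive rather than apples-to-apples:

```python
# Rough comparison of the figures cited above. The adult-population
# number is an assumption for illustration; Pew counts adults while
# the FCC counts households, so this is suggestive, not apples-to-apples.

US_ADULTS = 235_000_000             # assumed U.S. adult population, ~2017
WIRELESS_ONLY_SHARE = 0.12          # Pew: 12% of adults are "wireless only"
DSL_10MBPS_HOUSEHOLDS = 11_800_000  # FCC: 10 Mbps+ DSL connections

mobile_only_adults = US_ADULTS * WIRELESS_ONLY_SHARE  # ~28 million

print(f"Mobile-only adults:      {mobile_only_adults / 1e6:.1f} million")
print(f"10 Mbps+ DSL households: {DSL_10MBPS_HOUSEHOLDS / 1e6:.1f} million")
```

On these assumptions, mobile-only adults outnumber 10 Mbps+ DSL households by more than two to one.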
Subscription rates aren’t the end of the inquiry, but the fact that millions of households are going mobile-only rather than DSL or cable modem is suggestive evidence that mobile broadband offers an advanced telecommunications capability. (Considering T-Mobile is now providing 50 GB of data per line per month, mobile-only household growth will likely accelerate.)
Are high-speed services “being deployed in a reasonable and timely fashion”?
The second inquiry is whether these advanced telecommunications capabilities “are being deployed in a reasonable and timely fashion.” Again, the statute doesn’t give much guidance but consumer adoption of high-speed wireline and wireless broadband has been impressive.
So few people had 25 Mbps for so long that the FCC didn’t record it in its Internet Access Services reports until 2011. At the end of 2011, 6.3 million households subscribed to 25 Mbps. Less than five years later, in June 2016, over 56 million households subscribed. In the last year alone, fixed providers extended 25 Mbps or greater speeds to 21 million households.
The FCC is not completely without guidance on this question. As part of the 2008 Broadband Data Services Improvement Act, Congress instructed the FCC to use international comparisons in its Section 706 Report. International comparisons also suggest that the US is deploying advanced telecommunications capability in a timely manner. For instance, according to the OECD the US has 23.4 fiber and cable modem connections per 100 inhabitants, which far exceeds the OECD average, 16.2 per 100 inhabitants.**
Anyway, the sky is not falling because the FCC is asking about mobile broadband subscription rates. More can be done to accelerate broadband—particularly if the government frees up more spectrum and local governments improve their permitting processes—but the Section 706 inquiry offers little that is controversial or new.
*Fiber and fixed wireless connections, 9.6 million and 0.3 million subscribers, respectively, are also noteworthy but these 10 Mbps+ technologies only cover certain areas of the country.
**America’s high rank in the OECD is similar if DSL is included, but the quality of DSL varies widely and often doesn’t provide 10 Mbps or 25 Mbps speeds.
September 14, 2017
4 Ways Technology Helped During Hurricanes Harvey and Irma (and 1 more it could have)
Hurricanes Harvey and Irma mark the first time two Category 4 hurricanes have made U.S. landfall in the same year. Current estimates suggest the two hurricanes caused between $150 billion and $200 billion in damages.
If there is any positive story within these horrific disasters, it is that these events have prompted a renewed sense of community and an outpouring of support from across the nation, from the star-studded Hand in Hand relief concert and J.J. Watt’s Twitter fundraiser to smaller efforts by local marching bands and police departments in faraway states.
What has made these disaster relief efforts different from past hurricanes? These recent efforts have been enabled by technology that was unavailable during past disasters, such as Hurricane Katrina.
Airbnb
Many people chose to evacuate once the paths and intensity of Hurricanes Irma and Harvey became clear. In fact, Hurricane Irma created the largest evacuation in US history. As a result, many hotels quickly filled.
Airbnb has been able to step in to allow local citizens to help in this situation by waiving its fees and encouraging owners to offer space free of charge to those displaced by the disasters. The website also makes it easy for evacuees to search and find available lodging. The service not only helps evacuees, but also volunteers and contractors coming to the area to help with recovery.
Additionally, the website was able to help authorities locate and communicate with U.S. citizens who may have been in rented residences on Caribbean islands when the storm hit.
Licensing or other regulatory requirements could also limit what owners are able to offer in times of emergency, preventing good Samaritans from helping. Rules applying hotel-style lodging regulations to hosts, restrictive interpretations of zoning laws, or outright bans on services like Airbnb could prevent this kind of free service in the future. While Airbnb can waive its own fees, it cannot waive the state or local regulations that determine whether owners may offer their homes. Such regulations and enforcement attempts often target hosts rather than companies, like the zoning interpretation the city of Miami considered. If individuals are uncertain about legality, they may be less likely to fill this void and help their neighbors or strangers through such services in times of crisis.
Drones
The Red Cross called for volunteer drone pilots who had the necessary paperwork and authorization to operate in the impacted areas and, for the first time, in a one-week test, used drones to deliver supplies and survey disaster relief needs in some of the hardest-hit areas.
But delivering supplies is not the only way drones can assist with recovery efforts. Verizon and AT&T used drones to determine whether equipment was damaged and causing outages, and then responded accordingly. Similarly, some insurers have been deploying drones to allow adjusters to view and assess heavily damaged areas sooner.
In the immediate aftermath, the FAA prohibited private drones from flying in areas around Houston, though the agency issued some permits allowing drones to assist in locating those who were trapped and in surveying the damage. There were many legal concerns to consider, both in the initial aftermath and for future drone use, including property issues and concerns about interference. A less restrictive environment might have allowed drones to provide greater assistance sooner, with minimal risk of privacy invasion or interference.
Tesla
Tesla issued an over-the-air update unlocking additional battery capacity (an upgrade that is normally available for a fee) to give owners extra range to evacuate along their preferred route. While some may worry that this power could be used negatively by the corporation, the episode shows that over-the-air updates could be used to improve safety or other features in the future.
Additionally, one of the issues in any evacuation is traffic. The more cars on the road (particularly as weather worsens), the greater the risk of accidents. Assuming there is not too much precautionary interference, in the future self-driving cars could aid in making evacuation traffic safer and less stressful.
Social media and messaging apps help connect neighbors and get help
Want help? There’s an app for that.
The Cajun Navy gained renown for rescuing neighbors in the southern Louisiana floods, and the app Zello made joining its ranks even easier during Hurricane Harvey. The app also allowed victims of the storms to share information as power went out, using less bandwidth than phone calls.
Traditional social media also played a role in search and rescue efforts. When 9-1-1 failed, those in need of help sometimes turned to Twitter and Facebook. Neighbors, friends, or even strangers could use the information to provide help when traditional responders were unavailable. So many people were relying on social media that the Coast Guard had to issue a statement asking people to call, not tweet at, them for rescue.
Social media certainly had problems with misinformation, but in recent disasters it has proven to be an important part of disaster response and preparedness.
The one that might have been….
Could Flytenow have provided a solution to some of the concerns about airline price-gouging in the wake of Hurricane Irma? Flytenow hoped to make flight sharing a reality for the masses, but was shut down due to FAA interpretations regarding common carriers. There are limitations on flight sharing; in a crisis, however, allowing this type of arrangement could have resulted in a greater number of available flights. If demand were high, available pilots planning their own evacuations might post their spare seats for others in exchange for a share of the expense of the flight. The result would likely be more seats available and lower prices overall. Using a platform rather than a traditional bulletin-board arrangement would allow a service to limit availability to pilots who are certified or otherwise shown to be competent to fly in difficult conditions. Perhaps Flytenow would even have offered some sort of good Samaritan program, like Airbnb’s, to help get flights to those most in need of evacuation. Still, because of regulatory precaution, at least for now, we will not know the potential impact flight sharing could have in such natural disasters.
Conclusion
Technology is changing the way we respond to disasters and assist with relief efforts. As Allison Griswold writes at Quartz, this technology-enabled response has redefined how people provide assistance in the wake of disaster. We cannot plan exactly how such technology, or its platforms’ users, will respond to difficult situations, but the recent events in Florida and Texas show it can enable us to help one another even more. The more technology is allowed to participate in a response, the better it enables people to connect with those in need in the wake of disaster.
