Adam Thierer's Blog

June 25, 2018

A Roundup of Commentary on the Supreme Court’s Carpenter v. United States Decision

On Friday, the Supreme Court ruled on Carpenter v. United States, a case involving cell-site location information. In a 5-to-4 decision, the Court declared that “The Government’s acquisition of Carpenter’s cell-site records was a Fourth Amendment search.” What follows below is a roundup of reactions and comments to the decision.


Ashkhen Kazaryan, Legal Fellow at TechFreedom, had this to say about the ruling:


This ruling recognizes the immensely sensitive nature of cell phone location data, and rightly requires a showing of probable cause before law enforcement can obtain location information from mobile carriers. Our country’s Founders would have expected no lesser safeguards to apply to non-stop surveillance. Indeed, the American Revolution was first instigated over surveillance that was far less invasive.


Ryan Radia at Competitive Enterprise Institute commended the decision:


Although the court’s opinion was narrowly crafted to address the particular facts in this case, its decision underscores the court’s willingness to apply rigorous scrutiny to governmental surveillance involving new technologies. In the United States, the Constitution protects people from unreasonable searches and seizures, and Fourth Amendment protection should apply to private information held on or collected through our personal devices.


Curt Levey, president of the Committee for Justice, penned an op-ed for Fox News:


Rapid technological change inevitably outpaces the glacial evolution of the law and the Carpenter case is a perfect example. The location data in question was obtained under the Stored Communications Act (SCA), which did not require prosecutors to meet the “probable cause” standard of a warrant.


So Timothy Carpenter turned to the Constitution. But the Justice Department argued that the Fourth Amendment didn’t apply because of the Supreme Court’s Third-Party Doctrine. That doctrine holds that no search or seizure occurs when the government obtains data that the accused has voluntarily conveyed to a third party – in this case, one’s wireless provider.


The Third-Party Doctrine made some sense when it was invented 40 years ago. However, when applied to today’s modern technology, the doctrine results in a gaping hole in the Fourth Amendment…


The good news is that the Supreme Court took a big step towards repairing that hole Friday. In an opinion by Chief Justice John Roberts, the court acknowledged that Fourth Amendment doctrines must evolve to account for “seismic shifts in digital technology.”


Orin Kerr runs through nine questions you might have on the decision over at the Volokh Conspiracy:


(9) Does This Reasoning Apply Just For Physical Location Tracking, Or Does It Apply More Broadly?


That’s the big question. On one hand, the reasoning of the opinion is largely about tracking a person’s physical location. The opinion takes as a given that you have a reasonable expectation of privacy in the “whole” of your “physical movements.” The Court has never held that, so it’s sort of an unusual thing to just assume! But the Court seems to be getting it mostly from Justice Alito’s Jones concurrence, and the idea, as Alito wrote in Jones, that “society’s expectation has been that law enforcement agents and others would not— and indeed, in the main, simply could not—secretly monitor and catalogue every single movement of an individual’s car for a very long period.” …


On the other hand, there’s lots of language in the opinion that cuts the other way. Although the Court “decides no more than the case before us,” it also recasts a lot of doctrine in ways that could be used to argue for lots of other changes. Its use of equilibrium-adjustment will open the door to lots of new arguments about other records that are also protected. For example, what is the scope of this reasonable expectation of privacy in the “whole” of physical movements? Why is it there? The Jones concurrences were really light on that, and Carpenter doesn’t do much beyond citing them for it: What is this doctrine and where did it come from? (And what other reasonable expectations of privacy in things do people have that we didn’t know about, and what will violate them?)


Cato’s Ilya Shapiro and Julian Sanchez comment on the Supreme Court’s decision in this Cato Daily podcast.


Columbia Law Professor Eben Moglen of the Software Freedom Law Center also opined on the decision:


The decision in Carpenter v. United States is a groundbreaking change in the application of the Fourth Amendment in digital society. By stating that the pervasive geographic location data assembled by cellular providers is not insulated from the warrant requirement even though it is information collected by third parties, the Court has fundamentally changed the principles underlying the application of the Amendment before today. The Court has stated that its present decision is narrow and factual, but a flood of further cases will seek to widen the meaning of today’s opinion.


June 22, 2018

A Roundup of Reactions to the Supreme Court’s Decision for Online Sales Tax

Yesterday, the Supreme Court handed down its decision in South Dakota v. Wayfair, a case about online sales taxes. As always, the holding is key: “Because the physical presence rule of Quill is unsound and incorrect, Quill Corp. v. North Dakota, 504 U. S. 298, and National Bellas Hess, Inc. v. Department of Revenue of Ill., 386 U. S. 753, are overruled.” What follows below is a roundup of reactions and comments to the decision.


Joseph Bishop-Henchman at the Tax Foundation thinks this decision sets up a new political fight in Congress and in the states:


All eyes will now turn to Congress and the states. Congress has been stymied between alternate versions of federal solutions: the Remote Transactions Parity Act (RTPA) or Marketplace Fairness Act (MFA), which lets states collect if they agree to simplify their sales taxes, and a proposal from retiring Rep. Bob Goodlatte (R-VA) that would make the sales tax a business obligation rather than a consumer obligation, and have it collected based on the tax rate where the company is located but send the revenue to the jurisdiction where the customer is located. RTPA and MFA are more workable and more likely to pass, but Goodlatte controls what makes it to the House floor, so nothing has happened. Maybe today’s decision will change that.


Berin Szoka at TechFreedom noted:


For the last twenty-six years, the Internet has flourished because of the legal certainty created by Quill. Now, no retailer can know whether it must collect taxes, and smaller retailers face huge challenges. As Chief Justice Roberts notes, the majority ‘breezily disregards the costs that its decision will impose on retailers.’ The majority insists that software will fix the problem of calculating the correct state and local sales tax for every transaction, but with over 10,000 jurisdictions taxing similar products differently, the problem is nightmarishly complicated.


My colleague Doug Holtz-Eakin explains the tension:


What is the economic upshot of this decision? Certainly, it puts in-state and brick-and-mortar retailers on a level playing field with online sellers. In isolation, that is an improvement in the efficiency of the economy because people will shop based on the product and experience and not the tax consequences. Recall, however, that in many states a resident is liable for the “use tax” on her out-of-state purchases. If the sales tax is now being collected, it will be important for states either to drop the use tax or to make sure that there is no double taxation in some other way. If not, then the result of this decision will be less efficiency.


Another aspect of the decision is the impact on federalism and the notion of representation. The decision means that South Dakota can now dictate some of the business operations of firms that have no representation in the South Dakota legislature. Is that fair? Moreover, firms can no longer shop among states to find the sales tax regime that they like best — they will be subject to the same sales taxes across the country regardless of where they operate.


Grover Norquist at Americans for Tax Reform had this to say:


Today the Supreme Court said ‘yes—you can be taxed by politicians you do not elect and who act knowing you are powerless to object.’ This power can now be used to export sales taxes, personal and corporate income taxes, and opens the door for the European Union to export its tax burden onto American businesses—as they have been demanding…


We fought the American Revolution in large part to oppose the very idea of taxation without representation. Today, the Supreme Court announced, ‘oops’ governments can now tax those outside their borders—those who have no political power, no vote, no voice.


Adam Michel of the Heritage Foundation also focused on federalism at The Daily Signal:


The new status quo under Wayfair is untenable, creating a Wild West for state sales taxes. Some will point to seemingly easy solutions that have been promoted for decades. One example is the Remote Transactions Parity Act, sponsored by Rep. Kristi Noem, R-S.D.


Noem’s bill would maintain the new expanded power of state tax collectors, while imposing nominal limits and simplifications on states’ tax rules.


Such proposals that force sellers to track their sales to the consumer’s destination and comply with laws in other jurisdictions are fundamentally at odds with the principles of local government and American federalism.


Rob Port is concerned about the interstate commerce implications:


The purpose of the interstate commerce clause is to prevent the nightmare of fifty states squabbling with one another over trade wars between their constituent industries, or trying to exert political influence on one another. Congress, and not the states, is to regulate interstate commerce.


I feel like the Supreme Court, by overturning Quill and giving the states new powers to tax beyond their borders, has weakened interstate commerce protections and cracked open the lid to a real can of worms.


June 21, 2018

Mandating AI Fairness May Come At The Expense Of Other Types of Fairness

Two years ago, ProPublica initiated a conversation over the use of risk assessment algorithms when they concluded that a widely used “score proved remarkably unreliable in forecasting violent crime” in Florida. Their examination of the racial disparities in scoring has been cited countless times, often as a proxy for the power of automation and algorithms in daily life. Indeed, as the authors concluded, these scores are “part of a larger examination of the powerful, largely hidden effect of algorithms in American life.”


As this examination continues, two precepts are worth keeping in mind. First, the social significance of algorithms needs to be considered, not just their internal model significance. While the accuracy of algorithms is important, more emphasis should be placed on how they are used within institutional settings. And second, fairness is not a single idea. Mandates for certain kinds of fairness could come at the expense of other forms of fairness. As always, policymakers need to be cognizant of the trade-offs.


Statistical significance versus social significance


The ProPublica study arrived at a critical juncture in the conversation over algorithms. In the tech space, TensorFlow, Google’s artificial intelligence (AI) engine, had been released in 2015, sparking interest in algorithms and the commercial application of AI. At the same time, in the political arena, sentencing reform was gaining steam. Senators Rand Paul and Cory Booker helped bring wider attention to the need for reforms to the criminal justice system through their efforts to pass the REDEEM Act. Indeed, when the Koch Brothers’ political network announced more than $5 million in spending for criminal justice reform, the Washington Post noted that it underscored “prison and sentencing reform’s unique position as one of the nation’s most widely discussed policy proposals as well as one with some of the most broad political backing.”


Model selection is a critical component of any study, so it is no wonder that criticism of risk assessment algorithms has focused on this aspect of the process. Error bars might reflect precision, but they tell us little about a model’s applicability. More importantly, however, implementation isn’t frictionless. People have to use these models to make decisions. Algorithms must be integrated within a set of processes that involve the messiness of human relations. Because of the variety of institutional settings, there is sure to be significant variability in how they come to be used. The impact of real decision-making processes isn’t constrained only by the accuracy of the models, but also by the purposes to which they are applied.


In other words, the social significance of these models, how they come to be used in practice, is just as pertinent a question for policymakers as their statistical significance.


Angèle Christin, a professor at Stanford who studies these topics, made the issue abundantly clear when she noted,


Yet it is unclear whether these risk scores always have the meaningful effect on criminal proceedings that their designers intended. During my observations, I realized that risk scores were often ignored. The scores were printed out and added to the heavy paper files about defendants, but prosecutors, attorneys, and judges never discussed them. The scores were not part of the plea bargaining and negotiation process. In fact, most of judges and prosecutors told me that they did not trust the risk scores at all. Why should they follow the recommendations of a model built by a for-profit company that they knew nothing about, using data they didn’t control? They didn’t see the point. For better or worse, they trusted their own expertise and experience instead. (emphasis added)


Christin’s on-the-ground experience urges scholars to consider how these algorithms have come to be implemented in practice. As she points out, institutions engage in various kinds of rituals to appear modern, chief among them being the acquisition of new technological tools. Changing practices within workplaces is a much more difficult task than reformers would like to imagine. Instead, a typical reaction by those who have long worked within a system is to manipulate the tool to look compliant.


The implementation of pretrial risk assessment instruments highlights the potential variability when algorithms are deployed. These instruments can help guide judges when decisions are made about what happens to a defendant before trial. Will the defendant be released on bail, and at what cost? The most popular of these instruments is known as the Public Safety Assessment, or simply the PSA, which was developed by the Laura and John Arnold Foundation and has been adopted in over 30 jurisdictions in the last five years.


The adoption of the PSA across regions helps to demonstrate just how disparate implementation can be. In New Jersey, the adoption of the PSA seems to have correlated with a dramatic decline in the pretrial detention rate. In Lucas County, Ohio, the pretrial detention rate increased after the PSA was put into place. In Chicago, judges seem to be simply ignoring the PSA. Indeed, there appears to be little agreement on how well the PSA’s high-risk classification corresponds to reality, as re-arrest rates can be as low as 10 percent or as high as 42 percent, depending on how the PSA is integrated in a region.


And in the most comprehensive study of its kind, George Mason University law professor Megan Stevenson looked at Kentucky after it implemented the PSA and found significant changes in bail-setting practices, but only a small increase in pretrial release. Over time these changes eroded as judges returned to their previous habits. If this tendency to revert to the mean is widespread, then why even implement these pretrial risk instruments?


Although it was focused on pretrial risk assessments, Stevenson’s call for a broader understanding of these tools applies to the entirety of algorithm research:


Risk assessment in practice is different from risk assessment in the abstract, and its impacts depend on context and details of implementation. If indeed risk assessment is capable of producing large benefits, it will take research and experimentation to learn how to achieve them. Such a process would be evidence-based criminal justice at its best: not a flocking towards methods that bear the glossy veneer of science, but a careful and iterative evaluation of what works and what does not.


Algorithms are tools. While it is important to understand how well calibrated a tool is, researchers need to focus on how that tool affects real people working with and within institutions that have embedded cultural and historical practices.


Trade-offs in fairness determinations


Julia Angwin and her team at ProPublica helped to spark new interest in algorithmic decision-making when they dove deeper into a commonly used post-trial sentencing tool known as COMPAS. Instead of predicting behavior before a trial takes place, COMPAS purports to predict a defendant’s risk of committing another crime during the sentencing phase, after the defendant has been found guilty. As they discovered, the risk system was biased against African-American defendants, who were more likely to be incorrectly labeled as higher-risk than they actually were. At the same time, white defendants were labeled as lower-risk than was actually the case.


Superficially, that seems like a simple problem to solve. Just add features to the algorithm that account for race and rerun the tool. If only the algorithm paid attention to this bias, the outcome could be corrected. Or so goes the thinking.


But let’s take a step back and consider what these tools really represent. The task of the COMPAS tool is to estimate the degree to which people are likely to pose a future risk. In this sense, the algorithm aims for calibration, one of at least three distinct ways we might understand fairness. Aiming for fairness through calibration means that people identified as having some probability of committing an act do, in fact, commit it at roughly that rate. Indeed, as subsequent research has found, the people who committed crimes were correctly distributed within each group. In other words, the algorithm did correctly identify a set of people as having a given probability of committing a crime.
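To make the calibration idea concrete, here is a minimal sketch of the kind of within-group check involved. The records and column layout are hypothetical, not ProPublica's or the COMPAS vendor's actual data or code; calibration simply asks whether, within each risk bin, the observed re-offense rate lines up with the score in every group.

```python
# Minimal sketch of a within-group calibration check (hypothetical data).
from collections import defaultdict

# (group, risk_bin, reoffended) triples -- made up for illustration only.
records = [
    ("A", "high", 1), ("A", "high", 1), ("A", "high", 0),
    ("A", "low", 0), ("A", "low", 0), ("A", "low", 1),
    ("B", "high", 1), ("B", "high", 0), ("B", "high", 1),
    ("B", "low", 0), ("B", "low", 0), ("B", "low", 0),
]

counts = defaultdict(lambda: [0, 0])  # (group, bin) -> [reoffenders, total]
for group, risk_bin, reoffended in records:
    counts[(group, risk_bin)][0] += reoffended
    counts[(group, risk_bin)][1] += 1

# A calibrated score means these observed rates match the score itself,
# in every group.
for (group, risk_bin), (reoffenders, total) in sorted(counts.items()):
    print(f"group {group}, {risk_bin}-risk bin: "
          f"observed re-offense rate = {reoffenders / total:.2f}")
```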


Angwin’s criticism is of another kind, as Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan explain in “Inherent Trade-Offs in the Fair Determination of Risk Scores.” The kind of fairness Angwin aligns with might be understood as balance for the positive class. This notion is violated when people who are later identified as being part of the class were initially predicted by the algorithm as having a lower probability of belonging to it. For example, as the ProPublica study found, white defendants who did commit crimes in the future were assigned lower risk scores. That would be a violation of balance for the positive class.


Similarly, balance for the negative class is the mirror image. This notion is violated when people who are later identified as not being part of the class were initially predicted by the algorithm as having a higher probability of belonging to it. Both of these conditions try to capture the idea that groups should have equal false negative and false positive rates.
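Those balance conditions can be checked with equally simple bookkeeping. The sketch below, again on hypothetical records rather than any real COMPAS data, treats a "high" score as the positive prediction and compares false positive and false negative rates across groups:

```python
# Sketch of an error-rate balance check by group (hypothetical data).
records = [
    ("A", "high", 1), ("A", "high", 1), ("A", "high", 0),
    ("A", "low", 0), ("A", "low", 0), ("A", "low", 1),
    ("B", "high", 1), ("B", "high", 0), ("B", "high", 1),
    ("B", "low", 0), ("B", "low", 0), ("B", "low", 0),
]

def error_rates(group):
    fp = fn = positives = negatives = 0
    for g, risk_bin, reoffended in records:
        if g != group:
            continue
        if reoffended:
            positives += 1
            if risk_bin == "low":
                fn += 1   # labeled low risk, but did reoffend
        else:
            negatives += 1
            if risk_bin == "high":
                fp += 1   # labeled high risk, but did not reoffend
    return fp / negatives, fn / positives

# Large gaps between groups here are the kind of imbalance ProPublica flagged.
for group in ("A", "B"):
    fpr, fnr = error_rates(group)
    print(f"group {group}: false positive rate = {fpr:.2f}, "
          f"false negative rate = {fnr:.2f}")
```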


After formalizing these three conditions for fairness, Kleinberg, Mullainathan, and Raghavan proved that it isn’t possible to satisfy all constraints simultaneously except in highly constrained special cases. These results hold regardless of how the risk assignment is computed, since “it is simply a fact about risk estimates when the base rates differ between two groups.”
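A small worked example, using assumed numbers, shows why the result holds. Suppose a score is perfectly calibrated in both groups, with a high bin in which 60 percent reoffend and a low bin in which 20 percent reoffend, but the two groups have different base rates (40 percent versus 30 percent). The error rates then cannot match:

```python
# Worked example with assumed numbers: a score that is perfectly calibrated
# in both groups still yields unequal error rates once base rates differ.

BIN_HIGH, BIN_LOW = 0.6, 0.2  # re-offense rate within each score bin

def error_rates(base_rate):
    # Share of the group placed in the high bin so that the group's overall
    # re-offense rate equals its base rate:
    #   base_rate = BIN_HIGH * p_high + BIN_LOW * (1 - p_high)
    p_high = (base_rate - BIN_LOW) / (BIN_HIGH - BIN_LOW)
    fpr = p_high * (1 - BIN_HIGH) / (1 - base_rate)  # P(high bin | no re-offense)
    fnr = (1 - p_high) * BIN_LOW / base_rate         # P(low bin  | re-offense)
    return fpr, fnr

for group, base_rate in [("A", 0.40), ("B", 0.30)]:
    fpr, fnr = error_rates(base_rate)
    print(f"group {group} (base rate {base_rate:.0%}): "
          f"FPR = {fpr:.2f}, FNR = {fnr:.2f}")
# Prints roughly FPR 0.33 / FNR 0.25 for group A and FPR 0.14 / FNR 0.50 for
# group B: calibration holds in both groups, yet the error rates diverge.
```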


What this means is that some views of fairness might simply be incompatible with each other. Balancing for one notion of fairness is likely to come at the expense of another.


This trade-off is really a special case of a larger problem that is a central focus of data science, econometrics, and statistics. As Pedro Domingos noted:


You should be skeptical of claims that a particular technique “solves” the overfitting problem. It’s easy to avoid overfitting (variance) by falling into the opposite error of underfitting (bias). Simultaneously avoiding both requires learning a perfect classifier, and short of knowing it in advance there is no single technique that will always do best (no free lunch).
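As a loose illustration of the trade-off Domingos describes, the sketch below fits the same noisy synthetic data with polynomials of increasing degree; nothing here comes from the fairness literature, it is just the familiar bias-variance pattern on made-up data:

```python
# Sketch of the underfitting/overfitting trade-off on synthetic data.
import numpy as np

def true_fn(x):
    return np.sin(2 * np.pi * x)

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 20)
x_test = np.linspace(0, 1, 200)
y_train = true_fn(x_train) + rng.normal(0, 0.2, x_train.size)
y_test = true_fn(x_test)

for degree in (1, 4, 15):  # too simple, roughly right, very flexible
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE = {train_mse:.3f}, "
          f"test MSE = {test_mse:.3f}")
# Degree 1 typically underfits (high error everywhere), while degree 15
# tends to chase the training noise and generalize worse than degree 4.
```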


Internalizing these lessons about fairness requires a shift in framing. For those working in the AI field, those actively deploying algorithms, and especially policymakers, fairness mandates will likely create trade-offs. If most algorithms cannot achieve multiple notions of fairness simultaneously, then every decision to balance for class attributes is likely to take away from efficiency elsewhere. This isn’t to say that we shouldn’t strive to optimize fairness. Rather, it is simply important to recognize that mandating one type of fairness may necessarily come at the expense of a different type of fairness.


Understanding the internal logic of risk assessment tools is not the end of the conversation. Without data on how they are used, these algorithms could entrench bias, uproot it, or have ambiguous effects. To have an honest conversation, we need to understand how they nudge decisions in the real world.


June 12, 2018

National Academies Report Rips FAA’s Risk-Averse Regulatory Culture 

The National Academies of Sciences, Engineering, and Medicine has released an amazing new report focused on “Assessing the Risks of Integrating Unmanned Aircraft Systems (UAS) into the National Airspace System.” In what the Wall Street Journal rightly refers to as an “unusually strongly worded report,” the group of experts assembled by the National Academies calls for a sea change in regulatory attitudes and policies toward the regulation of Unmanned Aircraft Systems (or “drones”) and the nation’s airspace more generally.


The report uses the term “conservative” or “overly conservative” more than a dozen times to describe the Federal Aviation Administration’s (FAA) problematic current approach toward drones. The authors point out that the agency has “a culture with a near-zero tolerance for risk,” and that the agency needs to adjust that culture to take into account “the various ways in which this new technology may reduce risk and save lives.” (Ch. S, p. 2) The report goes on to say:


The committee concluded that “fear of making a mistake” drives a risk culture at the FAA that is too often overly conservative, particularly with regard to UAS technologies, which do not pose a direct threat to human life in the same way as technologies used in manned aircraft. An overly conservative attitude can take many forms. For example, FAA risk avoidance behavior is often rewarded, even when it is excessively risk averse, and rewarded behavior is repeated behavior. Balanced risk decisions can be discounted, and FAA staff may conclude that allowing new risk could endanger their careers even when that risk is so minimal that it does not exceed established safety standards.  The committee concluded that a better measure for the FAA to apply is to ask the question, “Can we make UAS as safe as other background risks that people experience daily?” As the committee notes, we do not ground airplanes because birds fly in the airspace, although we know birds can and do bring down aircraft.


[. . . ]


In many cases, the focus has been on “What might go wrong?” instead of a holistic risk picture: “What is the net risk/benefit?” Closely related to this is what the committee considers to be paralysis wherein ever more data are often requested to address every element of uncertainty in a new technology. Flight experience cannot be gained to generate these data due to overconservatism that limits approvals of these flights. Ultimately, the status quo is seen as safe. There is too little recognition that new technologies brought into the airspace by UAS could improve the safety of manned aircraft operations, or may mitigate, if not eliminate, some nonaviation risks. (p. S-2)


Importantly, the report makes clear not just that “an overly conservative risk culture that overestimates the severity and the likelihood of UAS risk can be a significant barrier to introduction and development of these technologies,” but, more profoundly, that “Avoiding risk entirely by setting the safety target too high creates imbalanced risk decisions and can degrade overall safety and quality of life.” (p. 3-6,7) In other words, we should want a more open and common sense-oriented approach to drones, not only to encourage more life-enriching innovation, but also because it could actually make us safer as a result.


No Reward without Some Risk

What the National Academies report is really saying here is that there can be no reward without some risk.  This is something I have spent a great deal of time writing about in my last book, a recent book chapter, and various other essays and journal articles over the past 25 years.  As I noted in my last book, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.”  If we want a wealthier, healthier, and safer society, we must embrace change and risk-taking to get us there.


This is exactly what the National Academies report is getting at when it notes that the FAA’s “overly conservative culture prevents safety beneficial operations from entering the airspace. The focus is on what might go wrong. More dialogue on potential benefits is needed to develop a holistic risk picture that addresses the question, What is the net risk/benefit?” (p. 3-10)


In other words, all safety regulation involves trade-offs, and if (to paraphrase a classic Hardin cartoon) we consider every potential risk except the risk of avoiding all risks, the result will be not only a decline in short-term innovation, but also a corresponding decline in safety and overall living standards over time.



Countless risk scholars have studied this process and come to the same conclusion. “We could virtually end all risk of failure by simply declaring a moratorium on innovation, change, and progress,” notes engineering historian Henry Petroski. But the costs to society of doing so would be catastrophic, of course. “The history of the human race would be dreary indeed if none of our forebears had ever been willing to accept risk in return for potential achievement,” observed H.W. Lewis, an expert on technological risk trade-offs.


The most important book ever written on this topic was Aaron Wildavsky’s 1988 masterpiece, Searching for Safety. Wildavsky warned of the dangers of “trial without error” reasoning and contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. Wildavsky argued that real wisdom is born of experience and that we can learn how to be wealthier and healthier as individuals and a society only by first being willing to embrace uncertainty and even occasional failure. As he put it:


The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. . . . Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.


When this logic takes the form of public policy prescriptions, it is referred to as the “precautionary principle,” which generally holds that, because new ideas or technologies could pose some theoretical danger or risk in the future, public policies should control or limit the development of such innovations until their creators can prove that they won’t cause any harms.


Again, if we adopt that attitude, human safety actually suffers because it holds back beneficial experiments aimed at improving the human condition. As the great economic historian Joel Mokyr argues, “technological progress requires above all tolerance toward the unfamiliar and the eccentric.” But the regulatory status quo all too often rejects “the unfamiliar and the eccentric” out of an abundance of caution. While usually well-intentioned, that sort of status quo thinking holds back new and better ways of doing old things, as well as ways of doing entirely new things. The end result is that real health and safety advances are ignored or forgone.


How Status Quo Thinking at the FAA Results in Less Safety

This is equally true for air safety and FAA regulation of drones. “Ultimately, the status quo is seen as safe,” the National Academies report notes. “There is too little recognition that new technologies brought into the airspace by UAS could improve the safety of manned aircraft operations, or may mitigate, if not eliminate, some nonaviation risks.” Examples of the life-saving potential of drones have already been well documented.


Drones have already been used to monitor fires, help with search-and-rescue missions for missing people or animals, assist lifeguards by dropping life vests to drowning people, deliver medicines to remote areas, and help with disaster monitoring and recovery efforts. But that really just scratches the surface in terms of their potential.


Some people scoff at the idea of drones being used to deliver small packages to our offices or homes. But consider how many of those packages are delivered by human-operated vehicles that are far more likely to be involved in dangerous traffic accidents on our overcrowded roadways. If drones were used to make some of those deliveries, we might be able to save a lot of lives. Or consider an elderly person stuck at home during a storm, only to realize they are out of some essential good or medicine that is a long drive away. Are we better off having them (or someone else) get behind the wheel to go get it, or might a drone be able to deliver it more safely?


The authors of the National Academies report understand this, as they made clear when they concluded that, “operation of UAS has many advantages and may improve the quality of life for people around the world. Avoiding risk entirely by setting the safety target too high creates imbalanced risk decisions and can degrade overall safety and quality of life.” (Ch. 3, p. 5-6)


Reform Ideas: Use the “Innovator’s Presumption” & “Sunsetting Imperative”

Given that reality, the National Academies report makes several sensible reform recommendations aimed at countering the FAA’s hyper-conservatism and bias for the broken regulatory status quo. I won’t go through them all, but I think they are an excellent set of reforms that deserve to be taken seriously.


I do, however, want to highly recommend everyone take a close look at one outstanding recommendation in Chapter 3, which is aimed at keeping things moving and making sure that status quo thinking doesn’t freeze beneficial new forms of airspace innovation. Specifically, the National Academies report recommends that:


The FAA should meet requests for certifications or operations approvals with an initial response of “How can we approve this?” Where the FAA employs internal boards of executives throughout the agency to provide input on decisions, final responsibility and authority and accountability for the decision should rest with the executive overseeing such boards. A time limit should be placed on responses from each member of the board, and any “No” vote should be accompanied with a clearly articulated rationale and suggestion for how that “No” vote could be made a “Yes.” (Ch. 3, p. 8)


I absolutely love this reform idea because it essentially combines elements of two general innovation policy reform ideas that I discussed in my recent essay, “Converting Permissionless Innovation into Public Policy: 3 Reforms.” In that piece, I proposed the idea of instituting an “Innovator’s Presumption” that would read: “Any person or party (including a regulatory authority) who opposes a new technology or service shall have the burden to demonstrate that such proposal is inconsistent with the public interest.” I also proposed a so-called “Sunsetting Imperative” that would read: “Any existing or newly imposed technology regulation should include a provision sunsetting the law or regulation within two years.”


The National Academies report recommendation above basically embodies the spirit of both the Innovator’s Presumption and the Sunsetting Imperative. It puts the burden of proof on opponents of change and then creates a sort of shot clock to keep things moving.


These are the kinds of reforms we need to make sure status quo thinking at regulatory agencies doesn’t hold back life-enriching and life-saving innovations. It’s time for a change in the way business is done at the FAA to make sure that regulations are timely, effective, and in line with common sense. Sadly, as the new National Academies report makes clear, today’s illogical policies governing airspace innovation are having counterproductive results that hurt society.



June 11, 2018

Why Women Should Love the “Net Neutrality” Repeal

The Internet is a great tool for women’s empowerment, because it gives us the freedom to better our lives in ways that were previously far more limited. Today, the FCC’s Restoring Internet Freedom Order helped the Internet become even freer.


There has been a lot of misinformation and scare tactics surrounding the previous administration’s so-called “net neutrality” rules. But the Obama-era Open Internet Order regulations were not neutral at all. Rather, they ham-handedly forced Internet Service Providers (ISPs) into a Depression-era regulatory classification known as a Title II common carrier. This would have slowed Internet dynamism, and with it, opportunities for women.


Today’s deregulatory move by the FCC reverses that decision, which will allow more ISPs to enter the market. More players in the market make Internet service better, faster, cheaper, and more widely available. This is especially good for women, who have particularly benefited from the increased connectivity and flexibility that the Internet provides.



The growth of the Internet has enabled women to connect with others and pursue economic opportunities, such as operating a small business, that carried higher costs and more barriers in the pre-digital age. From 2007 to 2015, the number of women-owned businesses grew by more than 65%. For minority women, the rate of business ownership has nearly tripled since 1997. This is in no small part thanks to how much faster and easier the Internet makes it to sell goods or services across the country or around the world. Now, the mom who does monogramming on the side or makes awesome salsa is no longer limited to selling locally, but can become a global entrepreneur through platforms like Etsy. While there are still many barriers to entry facing female entrepreneurs, the Internet has knocked down startup costs and broadened the market for their goods.


Faster Internet also allows more companies to offer flexible working opportunities. These are especially useful for working moms who need more choices to balance parenting and work. A 2016 survey by Working Mother magazine found that 80% of mothers were more productive when allowed to use some type of flexwork, much of which has been enabled by faster, better, and cheaper Internet. As companies invest more in 5G and expand access to broadband, the ability to connect will only get faster and easier. More does need to be done to ensure that women aren’t punished in their careers for taking advantage of flexwork opportunities. But the increase in flexwork gives everyone, but especially women, more options to pursue what is best for them and their families.


Finally, there has been a lot of fearmongering that ISPs will block access to feminist websites in a post-regulatory world. This is nonsense. For one, ISPs were already legally allowed to block content under the original regulations. In fact, some small ISPs purposefully marketed their blocking of questionable content for religious families.


As my colleague Brent Skorup has explained, the “net neutrality” that most people claim to support is not at all what the 2015 regulations accomplished. We all want an Internet that gives people access to the vast array of information available. But when someone says that most people favor “net neutrality” or that “net neutrality” protects women or marginalized groups; well, to quote The Princess Bride: “You keep using that word. I do not think it means what you think it means.” It’s easy to be in favor of the version of “net neutrality” that is portrayed in some surveys and media, but the reality is the Title II restrictions were far more about regulating the Internet like an old school landline than they were about promoting access or preventing throttling.


The great thing about the freedom of the Internet is that it allows people to connect and pursue new opportunities. From beauty vloggers to Etsy entrepreneurs to the mother who wants to help her children with homework, the Internet has especially opened new opportunities for women.


ISPs will still have to compete under the watch of the Federal Trade Commission, but it will become more affordable and easier for smaller ISPs to enter the market. We should not act like Chicken Little and assume the sky is falling now that some two-year-old regulations are being repealed. Rather, we should be excited about the increasing opportunities that the Restoring Internet Freedom Order will provide. And for those who want to empower women, the prospect of more and better services offers an easier, better, and faster way to do so.


11th Circuit LabMD Decision Rewrites FTC Unfairness Test – In Dicta?

Last week the U.S. Court of Appeals for the 11th Circuit vacated a Federal Trade Commission order requiring medical diagnostic company LabMD to adopt reasonable data security, handing the FTC a loss on an important data security case.  In some ways, this outcome is not surprising.  This was a close case with a tenacious defendant which raised important questions about FTC authority, how to interpret “unfairness” under the FTC Act, and the Commission’s data security program.


Unfortunately, the decision answers none of those important questions and makes a total hash of the FTC’s current unfairness law. While some critics of the FTC’s data security program may be pleased with the outcome of this decision, they ought to be concerned with its reasoning, which harkens back to the “public policy” test for unfairness that was greatly abused by the FTC in the 1970s.


The most problematic parts of this decision are likely dicta, but it is still worth describing how sharply this decision conflicts with the FTC’s modern unfairness test.  The court’s reasoning could implicate not only the FTC’s data security authority but its overall authority to police unfair practices of any kind.


(I’m going to skip the facts and procedural background of the case because the key issues are matters of law unrelated to the facts of the case. The relevant facts and procedure are laid out in the decision’s first and most lucid section. I’m also going to limit this piece to the decision’s unfairness analysis. There’s more to say about the court’s conclusion that the FTC’s order is unenforceable, but this post is already long. Interesting takes here and here.)


In short, the court’s decision attempts to rewrite a quarter century of FTC unfairness law.  By doing so, it elevates a branch of unfairness analysis that, in the 1970s, landed the FTC in big trouble.  First, I’ll summarize the current unfairness test as stated in the FTC Act. Next, I’ll discuss the previous unfairness test, the trouble it caused, and how that resulted in the modern test. Finally, I’ll look at how the LabMD decision rejects the modern test and discuss some implications.


The Modern Unfairness Test


If you’ve read an FTC complaint with an unfairness count in the last two decades, you’re probably familiar with the modern unfairness test.  A practice is unfair if it causes substantial injury that the consumer cannot reasonably avoid, and which is not outweighed by benefits to consumers or competition.  In 1994, Congress codified this three-part test in Section 5(n) of the FTC Act, which reads in full:


The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice [1] causes or is likely to cause substantial injury to consumers which [2] is not reasonably avoidable by consumers themselves and [3] not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination. [Emphasis added]


The text of Section 5(n) makes two things clear: 1) a practice is not unfair unless it meets the three-part consumer injury test and 2) public policy considerations can be helpful evidence of unfairness but are not sufficient or even necessary to demonstrate it. Thus, the three-part consumer injury test is centrally important to the unfairness analysis. Indeed, the three-part consumer injury test set out in Section 5(n) has been synonymous with the unfairness test for decades.


The Previous, Problematic Test for Unfairness


But the unfairness test used to be quite different.  In outlining the test’s history, I am going to borrow heavily from Howard Beales’ excellent 2003 essay, “The FTC’s Use of Unfairness Authority: Its Rise, Fall, and Resurrection.” (Beales was the Director of the FTC’s Bureau of Consumer Protection under Republican FTC Chairman Timothy Muris.) Beales describes the previous test for unfairness:


In 1964 … the Commission set forth a test for determining whether an act or practice is “unfair”: 1) whether the practice “offends public policy” – as set forth in “statutes, the common law, or otherwise”; 2) “whether it is immoral, unethical, oppressive, or unscrupulous; 3) whether it causes substantial injury to consumers (or competitors or other businessmen).” …. [T]he Supreme Court, while reversing the Commission in Sperry & Hutchinson cited the Cigarette Rule unfairness criteria with apparent approval….


This three-part test – public policy, immorality, and/or substantial injury – gave the agency enormous discretion, and the FTC began to wield that discretion in a problematic manner. Beales describes the effect of the S&H dicta:


Emboldened by the Supreme Court’s dicta, the Commission set forth to test the limits of the unfairness doctrine. Unfortunately, the Court gave no guidance to the Commission on how to weigh the three prongs – even suggesting that the test could properly be read disjunctively.


The result was a series of rulemakings relying upon broad, newly found theories of unfairness that often had no empirical basis, could be based entirely upon the individual Commissioner’s personal values, and did not have to consider the ultimate costs to consumers of foregoing their ability to choose freely in the marketplace. Predictably, there were many absurd and harmful results.


According to Beales, “[t]he most problematic proposals relied heavily on ‘public policy’ with little or no consideration of consumer injury.”  This regulatory overreach triggered a major backlash from businesses, Congress, and the media. The Washington Post called the FTC the “National Nanny.” Congress even defunded the agency for a time.


The backlash prompted the agency to revisit the S&H criteria.  As Beales describes,


As the Commission struggled with the proper standard for unfairness, it moved away from public policy and towards consumer injury, and consumer sovereignty, as the appropriate focus…. On December 17, 1980, a unanimous Commission formally adopted the Unfairness Policy Statement, and declared that “[un]justified consumer injury is the primary focus of the FTC Act, and the most important of the three S&H criteria.”


This Unfairness Statement recast the relationship between the three S&H criteria, discarding the “immoral” prong entirely and elevating consumer injury above public policy: “Unjustified consumer injury is the primary focus of the FTC Act, and the most important of the three S&H criteria. By itself it can be sufficient to warrant a finding of unfairness.” [emphasis added]  It was this Statement that first established the three-part consumer injury test now codified in Section 5(n).


Most importantly for our purposes, the statement explained the optional nature of the S&H “public policy” factor. As Beales details,


“[I]n most instances, the proper role of public policy is as evidence to be considered in determining the balance of costs and benefits,” although “public policy can ‘independently support a Commission action . . . when the policy is so clear that it will entirely determine the question of consumer injury, so there is little need for separate analysis by the Commission.’” [emphasis added]


In a 1982 letter to Congress, the Commission reiterated that public policy “is not a necessary element of the definition of unfairness.”


As the 1980s progressed, the Unfairness Policy Statement, specifically its three-part test for consumer injury, “became accepted as the appropriate test for determining unfairness…” But not all was settled.  Beales again:


The danger of unfettered “public policy” analysis as an independent basis for unfairness still existed, however [because] the Unfairness Policy Statement itself continued to hold out the possibility of public policy as the sole basis for a finding of unfairness. A less cautious Commission might ignore the lessons of history, and dust off public policy-based unfairness. … When Congress eventually reauthorized the FTC in 1994, it codified the three-part consumer injury unfairness test. It also codified the limited role of public policy. Under the statutory standard, the Commission may consider public policies, but it cannot use public policy as an independent basis for finding unfairness. The Commission’s long and dangerous flirtation with ill-defined public policy as a basis for independent action was over.


Flirting with Public Policy, Again


To sum up, chastened for overreaching its authority using the public policy prong of the S&H criteria, the FTC refocused its unfairness authority on consumer injury.  Congress ratified that refocus in Section 5(n) of the FTC Act, as I’ve discussed above. Today, under modern unfairness law, FTC complaints rarely make public policy arguments and only then to bolster evidence of consumer injury.


In last week’s LabMD decision, the 11th Circuit rejects this long-standing approach to unfairness. Consider these excerpts from its decision:


“The Commission must find the standards of unfairness it enforces in ‘clear and well-established’ policies that are expressed in the Constitution, statutes, or the common law.”


“An act or practice’s ‘unfairness’ must be grounded in statute, judicial decisions – i.e., the common law – or the Constitution. An act or practice that causes substantial injury but lacks such grounding is not unfair within Section 5(a)’s meaning.”


“Thus, an ‘unfair’ act or practice is one which meets the consumer-injury factors listed above and is grounded in well-established legal policy.”


And consider this especially salty bite of pretzel logic based on a selective citation of the FTC Act:


“Section 5(n) now states, with regard to public policy, ‘In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.’  We do not take this ambiguous statement to mean that the Commission may bring suit purely on the basis of substantial consumer injury. The act or practice alleged to have caused injury must still be unfair under a well-established legal standard, whether grounded in statute, the common law, or the Constitution.” [emphasis added]


Yet those two sentences in 5(n) are quite clear when read in context with the full paragraph, which requires the three-part consumer injury test but merely permits the FTC to consider public policies as evidence.  The court’s interpretation here is also undercut by the FTC’s historic misuse of public policy and Congress’s subsequent intent in Section 5(n) to limit FTC overreach by restricting the use of public policy evidence. Congress sought to restrict the FTC’s use of public policy; the 11th Circuit’s decision seeks to require it.


To be fair, the court is not exactly returning to the wild pre-Unfairness Statement days when the FTC thought public policy alone was sufficient to find an act or practice unfair.  Instead, the court has developed a new, stricter test for unfairness that requires both consumer injury and offense to public policy.


After crafting this bespoke unfairness test by inserting a mandatory public policy element, the decision then criticizes the FTC complaint for “not explicitly” citing the public policy source for its “standard of unfairness.”  But it is obvious why the FTC didn’t include a public policy element in the complaint – no one has thought it necessary, for more than two decades.  (Note, however, that the Commission’s decision does cite numerous statutes and common law principles as public policy evidence of consumer injury in this case.)


The court supplies the missing public policy element for the FTC: “It is apparent to us, though, that the source is the common law of negligence.” The court then determines that “the Commission’s action implies” that common law negligence “is a source that provides standards for determining whether an act or practice is unfair….”


Having thus rewritten the Commission’s argument and decades of FTC law, the court again surprises. Rather than analyze LabMD’s liability under this new standard, the court “assumes arguendo that the Commission is correct and that LabMD’s negligent failure to design and maintain a reasonable data security program invaded consumers’ right of privacy and thus constituted an unfair act or practice.”


Thus, the court does not actually rely on the unfairness test it has set out, arguably rendering that entire analysis dicta.


Why Dicta?


What is going on here? I believe the court is suggesting how data security cases ought to be pled, even though it cannot require this standard under Section 5(n) – and perhaps would not want to, given the collateral effect on other types of unfairness cases.


The court clearly wanted to signal something through this exercise.  Otherwise, it would have been much easier to have assumed arguendo LabMD’s liability under the existing three-part consumer injury unfairness test contained in the FTC’s complaint.  Instead, the court constructs a new unfairness test, interprets the FTC’s complaint to match it, and then appears to render its unfairness analysis dicta.


So, what exactly is the court signaling? This new unfairness test is stricter than the Section 5(n) definition of unfairness, so any complaint that satisfies the LabMD test would also satisfy the statutory test.  Perhaps, then, the court seeks to encourage the FTC to plead data security complaints more strictly than legally necessary by including references to public policy.


Had the court applied its bespoke standard to find that LabMD was not liable, I think the FTC would have had no choice but to appeal the decision.  By upsetting 20+ years of unfairness law, the court’s analysis would have affected far more than just the FTC’s data security program.  The FTC brings many non-data security cases under its unfairness authority, including illegal bill cramming and unauthorized payment processing and other types of fraud where deception cannot adequately address the problem. The new LabMD unfairness test would affect many such areas of FTC enforcement. But by assuming arguendo LabMD’s liability, the court may have avoided such effects and thus reduced the FTC’s incentive to appeal on these grounds.


Dicta or not, appeal or not, the LabMD decision has elevated unfairness’s “public policy” factor. Given the FTC’s misuse of that factor in the past, FTC watchers ought to keep an eye out.


—-


Last week’s LabMD decision will shape the constantly evolving data security policy environment.  At the Charles Koch Institute, we believe that a healthy data security policy environment will encourage permissionless innovation while addressing real consumer harms as they arise.  More broadly, we believe that innovation and technological progress are necessary to achieve widespread human flourishing.  And we seek to foster innovation-promoting environments through educational programs and academic grant-making.


May 15, 2018

No, “83% of Americans” do not support the 2015 net neutrality regulations

Lawmakers frequently hear impressive-sounding stats about net neutrality like “83% of voters support keeping FCC’s net neutrality rules.” This 83% number (and similar “75% of Republicans support the rules”) is based on a survey from the Program for Public Consultation released in December 2017, right before the FCC voted to repeal the 2015 Internet regulations.


These numbers should be treated with skepticism. This survey generates these high approval numbers by asking about net neutrality “rules” found nowhere in the 2015 Open Internet Order. The released survey does not ask about the substance of the Order, like the Title II classification, government price controls online, or the FCC’s newly created authority to approve or disapprove of new Internet services.


Here’s how the survey frames the issue:


Under the current regulations, ISPs are required to:   


provide customers access to all websites on the internet.   


provide equal access to all websites without giving any websites faster or slower download speeds.  


The survey then essentially asks participants whether they favor these “regulations.” The nearly 400-page Order is long and complex, and I’m guessing the survey’s creators lacked expertise in this area, because this is a serious misinterpretation of the Order. This framing is how net neutrality advocates discuss the issue, but the Obama FCC’s interpretations of the 2015 Order look nothing like these survey questions.


Let’s break down these rules ostensibly found in the 2015 Order.


“ISPs are required to provide customers access to all websites on the internet”


This is wrong. The Obama FCC was quite clear in the 2015 Order and during litigation that ISPs are free to filter the Internet and block websites. From the oral arguments:


FCC lawyer: “If [ISPs] want to curate the Internet…that would drop them out of the definition of Broadband Internet Access Service.”

Judge Williams: “They have that option under the Order?”

FCC lawyer: “Absolutely, your Honor. …If they filter the Internet and don’t provide access to all or substantially all endpoints, then…the rules don’t apply to them.”


As a result, the judges who upheld the Order said, “The Order…specifies that an ISP remains ‘free to offer ‘edited’ services’ without becoming subject to the rule’s requirements.”


Further, in the 1996 Telecom Act, Congress gave Internet access providers legal protection in order to encourage them to block lewd and “objectionable content.” Today, many ISPs offer family-friendly Internet access that blocks, say, pornographic and violent content. An FCC Order cannot and did not rewrite the Telecom Act and cannot require “access to all websites on the internet.”


“ISPs are required to provide equal access to all websites without giving any websites faster or slower download speeds”


Again, wrong. There is no “equal access to all websites” mandate (see above). Further, the 2015 Order allows ISPs to prioritize certain Internet traffic because preventing prioritization online would break Internet services.


This myth–that net neutrality rules require ISPs to be dumb pipes, treating all bits the same–has been circulated for years but is derided by network experts. MIT computer scientist and early Internet developer David Clark colorfully dismissed this idea as “happy little bunny rabbit dreams.” He pointed out that prioritization has been built into Internet protocols for years and “[t]he network is not neutral and never has been.” 
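

To make Clark’s point concrete: prioritization hooks are part of the Internet’s basic plumbing. The IP header carries a Differentiated Services (DSCP) field that applications can set to request preferential treatment from routers. Below is a minimal sketch, my own illustration rather than anything from the Order or the survey, that marks a UDP socket’s traffic for expedited forwarding; it assumes a Linux/Unix host, and the destination address and port are placeholders.


```python
# Minimal sketch: marking traffic for priority treatment using the DSCP
# bits that IP has carried for decades. Assumes a Linux/Unix host; the
# destination (a documentation-only IP) and port are placeholders.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# 0xB8 is the TOS byte for DSCP "Expedited Forwarding" (46 << 2),
# the marking commonly used for latency-sensitive traffic like VoIP.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)

sock.sendto(b"latency-sensitive packet", ("192.0.2.10", 5004))
```


Whether routers along the path honor that marking is up to the networks involved, which is exactly why “treat all bits the same” has never described how the Internet actually works.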


Other experts, such as tech entrepreneur and investor Mark Cuban and President Obama’s former chief technology officer Aneesh Chopra, have observed that Internet “fast lanes” will be needed as Internet services grow more diverse. Further, the nature of interconnection agreements and content delivery networks means that some websites pay for and receive better service than others.


This is not to say the Order is toothless. It authorizes government price controls and invents a vague “general conduct standard” that gives the agency broad authority to reject, favor, and restrict new Internet services. The survey, however, declined to ask members of the public about the substance of the 2015 rules and instead asked about support for net neutrality slogans that have only a tenuous relationship with the actual rules.


“Net neutrality” has always been about giving the FCC, our national media regulator, vast authority to regulate the Internet. In doing so, the 2015 Order rejects the 20-year policy of the United States, codified in law, that the Internet and Internet services should be “unfettered by Federal or State regulation.” The US tech and telecom sector thrived before 2015, and the 2017 repeal of the 2015 rules will, fortunately, reinstate that light-touch regulatory regime.

Published on May 15, 2018 12:03

April 30, 2018

Some thoughts on the T-Mobile-Sprint merger

Mobile broadband is a tough business in the US. There are four national carriers–Verizon, AT&T, T-Mobile, and Sprint–but since about 2011, mergers have been contemplated (and attempted, but blocked). Recently, competition has gotten fiercer: bigger data buckets and unlimited data plans have been great for consumers.


The FCC’s latest mobile competition report, citing UBS data, says that industry ARPU (basically, monthly revenue per subscriber), which had been pretty stable since 1998, declined significantly from 2013 to 2016, from about $46 to about $36. These revenue pressures seemed to fall hardest on Sprint, which in February issued $1.5 billion of “junk bonds” to help fund its network investments. Further, mobile TV watching is becoming a bigger business worth investing in. AT&T and Verizon both plan to offer a TV bundle to their wireless customers this year, and T-Mobile’s purchase of Layer3 indicates an interest in offering a mobile TV service.


It’s these trends that probably pushed T-Mobile and Sprint to announce yesterday their intention to merge. All eyes will be on the DOJ and the FCC as their competition divisions consider whether to approve the merger.


The Core Arguments


Merger opponents’ primary argument is one that has been raised several times since the aborted 2011 AT&T-T-Mobile merger: this “4 to 3” merger significantly raises the prospect of “tacit collusion.” After the merger, the story goes, the 3 remaining mobile carriers won’t work as hard to lower prices or improve services. While outright collusion on prices is illegal, opponents have a point that tacit collusion is more difficult for regulators to prove, prevent, and prosecute.


The counterargument, which T-Mobile and Sprint are already making, is that “mobile” is not a distinct market anymore–technologies and services are converging. Therefore, tacit collusion won’t be feasible because mobile broadband is increasingly in competition with landline broadband providers (like Comcast and Charter), and possibly even media companies (like Netflix and Disney). Further, they claim, T-Mobile and Sprint, going it alone, will each struggle to deploy a capex-intensive 5G network that can compete with AT&T, Verizon, Comcast-NBCU, and the rest, but the merged company will be a formidable competitor in TV and in consumer and enterprise broadband.


Competitive Review


Any prediction about whether the deal will be approved or denied is premature. This is a horizontal merger in a highly-visible industry and it will receive an intense antitrust review. (Rachel Barkow and Peter Huber have an informative 2001 law journal article about telecom mergers at the DOJ and FCC.) The DOJ and FCC will seek years of emails and financial documents from Sprint and T-Mobile executives and attempt to ascertain the “real” motivation for the merger and its likely consumer effects.


T-Mobile and Sprint will likely lean on evidence that consumers view mobile broadband and TV as a substitute for landline broadband and TV. Their story seems to be that, much like phone and TV went from “local markets with one or two competitors” years ago to a “national market with several competitors,” broadband is following a similar trajectory, and that viewing this as a 4 to 3 merger misreads industry trends.


There’s preliminary evidence that mobile broadband will put competitive pressure on conventional, landline broadband. Census surveys indicate that in 2013, 10% of Internet-using households were mobile Internet only (no landline Internet). By 2015, about 20% of households were mobile-only, and the proportion of Internet users who had landline broadband actually fell from 82% to 75%. But this is still preliminary and I haven’t seen economic evidence yet that mobile is putting pricing pressure on landline TV and broadband.


FCC Review


Antitrust review is only one step, however. The FCC transaction review process is typically longer and harder to predict. The FCC has concurrent authority with the DOJ to review telecommunications mergers under Sections 7 and 11 of the Clayton Act, but it has never used that authority. Instead, the FCC uses spectrum transfers as a hook to evaluate mergers under the Communications Act’s (vague) “public interest” standard. Unlike antitrust standards, which generally put the burden on regulators to show consumer and competitive harm, the public interest standard as currently interpreted puts the burden on merging companies to show social and competitive benefits.


Hopefully the FCC will hew to a more rigorous Clayton Act inquiry and reform the open-ended public interest inquiry. As Chris Koopman and I wrote in a law journal article a few years ago, these FCC “public interest” reviews are sometimes excessively long, and advocates use the vague standard to push the FCC into ancillary concerns, like TV programming decisions and “net neutrality” compliance.


Part of the public interest inquiry is a complex “spectrum screen” analysis. Basically, the transacting companies can’t end up with too much “good” spectrum in any single regional market. I doubt the spectrum screen analysis would be dispositive (much of the past analysis seemed pretty ad hoc), but I do wonder whether it will come up, since it was a major issue in the attempted AT&T-T-Mobile merger.
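

For a rough sense of the arithmetic involved, here is a simple sketch of a screen-style check. The one-third threshold and the megahertz figures are illustrative assumptions on my part, not the FCC’s actual screen inputs, which vary by band and by market.


```python
# Hypothetical sketch of a spectrum-screen-style check: flag a regional
# market if the merged firm would hold more than a set share of the
# spectrum deemed suitable for mobile broadband. The threshold and the
# holdings below are illustrative assumptions, not actual FCC figures.
def flag_market(holdings_mhz, merging_carriers, total_suitable_mhz, threshold=1/3):
    combined = sum(mhz for carrier, mhz in holdings_mhz.items()
                   if carrier in merging_carriers)
    return combined / total_suitable_mhz > threshold

# Illustrative regional market: the merged pair would hold 100 of 400 MHz
# (25%), below the one-third screen, so the market is not flagged.
holdings = {"CarrierA": 120, "CarrierB": 90, "CarrierC": 60, "CarrierD": 40}
print(flag_market(holdings, {"CarrierC", "CarrierD"}, total_suitable_mhz=400))  # False
```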


In any case, that’s where I see the core issues, though we’ll learn much more as the merger reviews commence.

Published on April 30, 2018 13:20

April 27, 2018

Video: The Dangers of Regulating Information Platforms

On March 19th, I had the chance to debate Franklin Foer at a Patrick Henry College event focused on the question, “Is Big Tech Big Brother?” It was billed as a debate over the role of technology in American society and whether government should be regulating media and technology platforms more generally.  [The full event video is here.] Foer is the author of the new book, World Without Mind: The Existential Threat of Big Tech, in which he advocates a fairly expansive regulatory regime for modern information technology platforms. He is open to building on regulatory ideas from the past, including broadcast-esque licensing regimes, “Fairness Doctrine”-like mandates for digital intermediaries, “fiduciary” responsibilities, beefed-up antitrust intervention, and other types of controls. In a review of the book for Reason, and then again during the debate at Patrick Henry College, I offered some reflections on what we can learn from history about how well ideas like those worked out in practice.


My closing statement of the debate, which lasted just a little over three minutes, offers a concise summation of what that history teaches us and why it would be so dangerous to repeat the mistakes of the past by wandering down that disastrous path again. That 3-minute clip is posted below. (The audience was polled before and after the event and asked the same question each time: “Do large tech companies wield too much power in our economy, media and personal lives and if so, should government(s) intervene?” Apparently at the beginning, the poll was roughly Yes – 70% and No – 30%, but after the debate ended it had reversed, with only 30% in favor of intervention and 70% against. Glad to turn around some minds on this one!)


[Embedded video clip, via ytCropper.]

Published on April 27, 2018 11:13

April 25, 2018

How Well-Intentioned Privacy Regulation Could Boost Market Power of Facebook & Google



Two weeks ago, as Facebook CEO Mark Zuckerberg was getting grilled by Congress during two days of media-circus hearings, I wrote a counterintuitive essay about how it could end up being Facebook’s greatest moment. How could that be? As I argued in the piece, with an avalanche of new rules looming, “Facebook is potentially poised to score its greatest victory ever as it begins the transition to regulated monopoly status, solidifying its market power, and limiting threats from new rivals.”


With the probable exception of Google, no firm other than Facebook has enough lawyers, lobbyists, and money to deal with the layers of red tape and corresponding regulatory compliance headaches that lie ahead. That’s true both here and especially abroad in Europe, which continues to pile on new privacy and “data protection” regulations. While such rules come wrapped in the very best of intentions, there’s just no getting around the fact that regulation has costs. In this case, the unintended consequence of well-intentioned data privacy rules is that the emerging regulatory regime will likely discourage (or potentially even destroy) the chances of getting the new types of innovation and competition we so desperately need right now.



Others now appear to be coming around to this view. On April 23, both the New York Times and The Wall Street Journal ran feature articles with remarkably similar titles and themes. The New York Times article by Daisuke Wakabayashi and Adam Satariano was titled, “How Looming Privacy Regulations May Strengthen Facebook and Google,” and The Wall Street Journal’s piece, “Google and Facebook Likely to Benefit From Europe’s Privacy Crackdown,” was penned by Sam Schechner and Nick Kostov.


“In Europe and the United States, the conventional wisdom is that regulation is needed to force Silicon Valley’s digital giants to respect people’s online privacy. But new rules may instead serve to strengthen Facebook’s and Google’s hegemony and extend their lead on the internet,” note Wakabayashi and Satariano in the NYT essay. They continue on to note how “past attempts at privacy regulation have done little to mitigate the power of tech firms.” This includes regulations like Europe’s “right to be forgotten” requirement, which has essentially put Google in a privileged position as the “chief arbiter of what information is kept online in Europe.”


Meanwhile, the WSJ article opens with this interesting story about the epiphany EU regulator Věra Jourová had upon visiting with the supposed victims of the EU’s new General Data Protection Regulation, or GDPR:


When the European Union’s justice commissioner traveled to California to meet with Google and Facebook last fall, she was expecting to get an earful from executives worried about the Continent’s sweeping new privacy law. Instead, she realized they already had the situation under control. “They were more relaxed, and I became more nervous,” said the EU official, Věra Jourová. “They have the money, an army of lawyers, an army of technicians and so on.”


Indeed they do. And that means they are better positioned to absorb the significant compliance costs associated with the new GDPR rules, which are somewhat ambiguous and will require a great deal of ongoing interpretation and legal wrangling.  The Journal essay also cites an unnamed Brussels lobbyist for a media-measurement firm saying, “The politicians wanted to teach Google and Facebook a lesson. And yet they favor them.” Consider this paragraph from the WSJ essay about how the two firms worked diligently to come into compliance with the new GDPR regulations:


Once the law passed in spring 2016, Google and Facebook threw people at the problem. Google involved lawyers in the U.S., Ireland, Brussels and elsewhere to pore over contracts and procedures, said people close to the company. Facebook mobilized hundreds of people in what it describes as the largest interdepartmental team it has ever assembled. Facebook lawyers spent a year scrutinizing the law’s lengthy text. Designers and engineers then toiled over how to implement changes, according to Stephen Deadman, Facebook’s global deputy chief privacy officer. During the process, Facebook got frequent access to regulators across Europe. It met with Helen Dixon, the data protection commissioner in Ireland, where the company bases its European operations, and her staff to run through changes Facebook was planning. Ms. Dixon’s agency provided the firm with feedback on the wording of its consent requests, Facebook said.


Now ask yourself how many other smaller existing or new firms would be in a position to do the same thing. Answer: Not many. We’re already seeing the deleterious effects of the GDPR on market structure, the Journal reports. “Some advertisers are planning to shift money away from smaller providers and toward Google and Facebook,” Schechner and Kostov note. And they end their essay with the telling thoughts of Bill Simmons, co-founder and chief technology officer of Dataxu, a Boston-based company that helps buy targeted ads, who says, “It is paradoxical. The GDPR is actually consolidating the control of consumer data onto these tech giants.”


The NYT essay included a funny tidbit about how “Some privacy advocates also bristle at the idea that these new restrictions would help already powerful internet companies, noting that is a well-worn argument employed by tech giants to try to prevent future regulation.” That’s a highly unfortunate attitude. If privacy advocates really care about improving the situation on the ground, then the best way to do that is with more and better choices. Sadly, it seems that with each passing day they write off the idea of any new competition emerging to challenge today’s tech giants.


“Can Facebook be replaced?” asks Olivia Solon, writing in The Guardian today. Some probably think not, but as Solon notes, “prominent Silicon Valley investor Jason Calacanis, who was an early investor in several high-profile tech companies including Uber,” certainly hopes so. He has launched a competition to find a “social network that is actually good for society,” and his “Openbook Challenge will offer seven ‘purpose-driven teams’ $100,000 in investment to build a billion-user social network that could replace the technology titan while protecting consumer privacy.” In a blog post announcing the Challenge, Calacanis wrote: “All community and social products on the internet have had their era, from AOL to MySpace, and typically they’re not shut down by the government — they’re slowly replaced by better products. So, let’s start the process of replacing Facebook.”


I don’t have any idea whether this Openbook Challenge will succeed. It’s hard building big, scalable digital platforms that satisfy the diverse needs of a diverse world. But this is exactly the sort of innovation that we should be encouraging. Even the very threat of new competition will keep the big dogs on their toes. Alas, all the new regulations being considered will likely just leave us with fewer choices and with rules that probably won’t even do all that much to truly better protect our data or privacy.


But hey, at least it was all well-intentioned!

Published on April 25, 2018 07:25
