Kindle Notes & Highlights
Who will capture the value that better prediction creates?
As AI advances, we’ll use prediction machines to reduce uncertainty more broadly. Hence, strategic dilemmas driven by uncertainty will evolve with AI.
A prediction machine’s data can become so important that you may need to change your strategy to take advantage of what it has to offer.
Powerful AI tools may go beyond enhancing the productivity of tasks performed in service of the organization’s strategy and instead lead to changing the strategy itself.
First, lower cost versus more control is a core trade-off. Second, that trade-off is mediated by uncertainty; specifically, the returns to control increase with the level of uncertainty. Major airlines balance lower cost and more control by optimizing the boundaries of where their own activities end and those of their partners begin. If a prediction machine could cut through this uncertainty, then the third ingredient would be present and the balance would shift. Airlines would contract more to their partners.
Better prediction up front eliminates the need for costly contract renegotiations.
Economists will tell you that job responsibilities have to become less explicit and more relational. You will evaluate and reward employees based on subjective processes, such as performance reviews that take into account the complexity of the tasks and the employees’ strengths and weaknesses. Such processes are tough to implement because reliance on them to create incentives for good performance requires a great deal of trust. After all, a company can more easily decide to deny you that bonus, salary bump, or promotion based on a subjective review than when the performance measures are objective.
Here, the prediction machine increases uncertainty in the strategic dilemma because evaluating the quality of judgment is difficult, so contracting out is risky. Counterintuitively, better prediction increases the uncertainty you have over the quality of human work performed: you need to keep your reward function engineers and other judgment-focused workers in house.
If that data resides with others, you need a strategy to get it. If the data resides with an exclusive or monopoly provider, then you may find yourself at risk of having that provider appropriate the entire value of your AI. If the data resides with competitors, there may be no strategy that would make it worthwhile to procure it from them. If the data resides with consumers, it can be exchanged in return for a better product or higher-quality service.
If the prediction machine is an input that you can take off the shelf, then you can treat it like most companies treat energy and purchase it from the market, as long as AI is not core to your strategy. In contrast, if prediction machines are to be the center of your company’s strategy, then you need to control the data to improve the machine, so both the data and the prediction machine must be in house.
A key strategic choice is determining where your business ends and another business begins—deciding on the boundary of the firm (e.g., airline partnerships, outsourcing automotive part manufacturing). Uncertainty influences this choice. Because prediction machines reduce uncertainty, they can influence the boundary between your organization and others.
By reducing uncertainty, prediction machines increase the ability to write contracts, and thus increase the incentive for companies to contract out both capital equipment and labor that focuses on data, prediction, and action.
However, prediction machines decrease the incentive for companies to contract out labor that focuses on judgment. Judgment quality is hard to specify in a contract and difficult to verify, so contracting for it is risky.
Since judgment is likely to be the key role for human labor as AI diffuses, in-house employment will rise and contracting out labor will fall.
AI will increase incentives to own data. Still, contracting out for data may be necessary when the predictions that the data provides are not strategically essential to your organization. In such cases, it may be best to purchase predictions directly rather than purchase data and then generate your own predictions.
Our answer comes from our core economic framework: AI-first means devoting resources to data collection and learning (a longer-term objective) at the expense of important short-term considerations such as immediate customer experience, revenue, and user numbers.
These companies already house technical talent that they can use to develop machine learning and AI tools.
One form of this approach is called adversarial machine learning, which pits the main AI and its objective against another AI that tries to foil that objective.
An AI-first strategy places maximizing prediction accuracy as the central goal of the organization, even if that means compromising on other goals such as maximizing revenue, user numbers, or user experience.
For example, the reward function engineer must understand both the objectives of the organization and the capabilities of the machines. Because machines scale efficiently, if this skill is scarce, then the best engineers will reap the benefits of their work across millions or billions of machines.
Joseph Schumpeter called this process “the gale of creative destruction.”
This suggests that historical data may be less useful than many suppose, perhaps because the world changes too quickly.
If AI has scale advantages, reducing the negative effects of monopoly involves trade-offs. Breaking up monopolies reduces the scale, but scale makes AI better. Again, policy is not simple.10
For example, China’s share of papers at the biggest AI research conference grew from 10 percent in 2012 to 23 percent in 2017. Over the same period, the US share fell from 41 percent to 34 percent.13
“AI is an area where you need to evolve the algorithm and the data together; a large amount of data makes a large amount of difference.”20 The data advantage only matters if Chinese companies have better access to that data than other companies, and evidence suggests they will.
The trade-off is further complicated because of a free-riding effect. Users want better products trained using personal data, but they prefer that data be collected from other people, not them.
Collecting data from users makes the product worse: it interrupts the user experience. But if people do not provide the data, then the AI can’t learn from feedback, limiting its ability to boost productivity and increase income.
There are likely to be opportunities to innovate in a way that assures people as to their data’s integrity and control while allowing the AI to learn. One emerging technology—the blockchain—offers a way of decentralizing databases and lowering the cost of verifying data. Such technologies could be paired with AI to overcome privacy (and indeed security) concerns, especially since they are already used for financial transactions, an area where these issues are paramount.25
Specifically, it can invent and improve itself. Science fiction author Vernor Vinge called the point at which this emerges “the Singularity.”
This perspective is useful. Economics tells us that if a superintelligence wants to control the world, it will need resources. The universe has lots of resources, but even a superintelligence has to obey the laws of physics. Acquiring resources is costly.
The first trade-off is productivity versus distribution.
The problem isn’t wealth creation; it’s distribution. AI might exacerbate the income inequality problem for two reasons. First, by taking over certain tasks, AIs might increase competition among humans for the remaining tasks, lowering wages and further reducing the fraction of income earned by labor versus the fraction earned by the owners of capital.
The second trade-off is innovation versus competition.
Faster innovation may benefit society in the short term but may not be optimal from a longer-term social perspective.
The third trade-off is performance versus privacy. AIs perform better with more data.