The Age of A.I. and Our Human Future
Read between April 8 and April 15, 2023
56%
This transformation includes so-called cyber weapons, a class of weapons involving dual-use civilian capabilities so that their status as weapons is ambiguous. In some cases, their utility in exercising and augmenting power derives largely from their users’ not disclosing their existence or acknowledging their full range of capabilities.
57%
Uncertainty over the nature, scope, or attribution of a cyber action may render seemingly basic factors a matter of debate—such as whether a conflict has begun, with whom or what the conflict engages, and how far up the escalation ladder the conflict between the parties may be. In that sense, major countries are engaged in a kind of cyber conflict now, though one without a readily definable nature or scope.
58%
The introduction of nonhuman logic to military systems and processes will transform strategy. Militaries and security services training or partnering with AI will achieve insights and influence that surprise and occasionally unsettle. These partnerships may negate or decisively reinforce aspects of traditional strategies and tactics. If AI is delegated a measure of control over cyber weapons (offensive or defensive) or physical weapons such as aircraft, it may rapidly conduct functions that humans carry out only with difficulty.
59%
Most traditional military strategies and tactics have been based on the assumption of a human adversary whose conduct and decision-making calculus fit within a recognizable framework or have been defined by experience and conventional wisdom. Yet an AI piloting an aircraft or scanning for targets follows its own logic, which may be inscrutable to an adversary and unsusceptible to traditional signals and feints—and which will, in most cases, proceed faster than the speed of human thought.
59%
Because AIs are dynamic and emergent, even those powers creating or wielding an AI-designed or AI-operated weapon may not know exactly how powerful it is or exactly what it will do in a given situation. How does one develop a strategy—offensive or defensive—for something that perceives aspects of the environment that humans may not, or may not as quickly, and that can learn and change through processes that, in some cases, exceed the pace or range of human thought?
59%
Because of AI’s potential to adapt in response to the phenomena it encounters, when two AI weapons systems are deployed against each other, neither side is likely to have a precise understanding of the results their interaction will generate or their collateral effects. They may discern only imprecisely the other’s capabilities and penalties for entering into a conflict. For engineers and builders, these limitations may put premiums on speed, breadth of effects, and endurance—attributes that may make conflicts more intense and widely felt, and above all, more unpredictable.
59%
AI and machine learning will change actors’ strategic and tactical options by expanding the capabilities of existing classes of weapons. Not only can AI enable conventional weapons to be targeted more precisely, it can also enable them to be targeted in new and unconventional ways—such as (at least in theory) at a particular individual or object rather than a location.13 By poring through vast amounts of information, AI cyber weapons can learn how to penetrate defenses without requiring humans to discover software flaws that can be exploited. By the same token, AI can also be used defensively, ...more
60%
As transformative AI capabilities evolve and spread, major nations will, in the absence of verifiable restraints, continue to strive to achieve a superior position.15 They will assume that proliferation of AI is bound to occur once useful new AI capabilities are introduced. As a result, aided by such technology’s dual civilian and military use and its ease of copying and transmission, AI’s fundamentals and key innovations will be, in significant measure, public.
60%
The most revolutionary and unpredictable effect may occur at the point where AI and human intelligence encounter each other. Historically, countries planning for battle have been able to understand, if imperfectly, their adversaries’ doctrines, tactics, and strategic psychology. This has permitted the development of adversarial strategies and tactics as well as a symbolic language of demonstrative military actions, such as intercepting a jet nearing a border or sailing a vessel through a contested waterway. Yet where a military uses AI to plan or target—or even assist dynamically during a ...more
61%
The quest for reassurance and restraint will have to contend with the dynamic nature of AI. Once they are released into the world, AI-facilitated cyber weapons may be able to adapt and learn well beyond their intended targets; the very capabilities of the weapon might change as AI reacts to its environment. If weapons are able to change in ways different in scope or kind from what their creators anticipated or threatened, calculations of deterrence and escalation may turn illusory. Because of this, the range of activities an AI is capable of undertaking, both at the initial design phase and ...more
62%
Beyond AI-enabled defense systems lies the most vexing category of capabilities—lethal autonomous weapons systems—generally understood to include systems that, once activated, can select and engage targets without further human intervention.16 The key issue in this domain is human oversight and the capability of timely human intervention.
62%
An autonomous system may have a human “on the loop,” monitoring its activities passively, or “in the loop,” with human authorization required for certain actions. Unless restricted by mutual agreement that is observed and verifiable, the latter form of weapons system may eventually encompass entire strategies and objectives—such as defending a border or achieving a particular outcome against an adversary—and operate without substantial human involvement.
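The "on the loop" / "in the loop" distinction described above can be sketched as a simple authorization gate. This is a toy illustration of the two oversight modes, not code from any real system; the names `Oversight` and `engage` and the human-approval callback are all invented for the sketch.

```python
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "in"   # human authorization required before acting
    ON_THE_LOOP = "on"   # human only monitors; the system acts on its own

def engage(target: str, mode: Oversight, human_approves) -> str:
    """Decide whether the system may act on a proposed target.

    `human_approves` is a callback standing in for a human operator.
    Everything here is a toy illustration of the oversight modes.
    """
    if mode is Oversight.IN_THE_LOOP:
        # "In the loop": the action is blocked until a human says yes.
        return "engaged" if human_approves(target) else "held"
    # "On the loop": the human is notified, but the system proceeds.
    print(f"notify operator: engaging {target}")
    return "engaged"

# In-the-loop: the human veto is binding.
assert engage("t1", Oversight.IN_THE_LOOP, lambda t: False) == "held"
# On-the-loop: the same veto is never consulted.
assert engage("t1", Oversight.ON_THE_LOOP, lambda t: False) == "engaged"
```

The passage's worry falls out of the sketch: remove the `IN_THE_LOOP` branch and nothing structural forces a human decision before action.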
62%
To prevent unintended escalation, major powers should pursue their competition within a framework of verifiable limits. Negotiation should focus not only on moderating an arms race but also on making sure that both sides know, in general terms, what the other is doing. But both sides must expect (and plan accordingly) that the other will withhold its most security-sensitive secrets. There will never be complete trust. But as nuclear arms negotiations during the Cold War demonstrated, that does not mean that no measure of understanding can be achieved.
62%
For all their benefits, the treaties (and the accompanying mechanisms of communication, enforcement, and verification) that came to define the nuclear age were not historical inevitabilities. They were the products of human agency and a mutual recognition of peril—and responsibility.
62%
Three qualities have traditionally facilitated the separation of military and civilian domains: technological differentiation, concentrated control, and magnitude of effect. Technologies with either exclusively military or exclusively civilian applications are described as differentiated. Concentrated control refers to technologies that a government can easily manage as opposed to technologies that spread easily and thereby escape government control. Finally, the magnitude of effect refers to a technology’s destructive potential. Throughout history, many technologies have been dual-use. Others ...more
63%
AI breaks this paradigm. It is emphatically dual-use. It spreads easily—being, in essence, no more than lines of code: most algorithms (with some noteworthy exceptions) can be run on single computers or small networks, meaning that governments have difficulty controlling the technology by controlling the infrastructure. Finally, AI applications have substantial destructive potential.
63%
AI-enabled weapons may allow adversaries to launch digital assaults with exceptional speed, dramatically accelerating the human capacity to exploit digital vulnerabilities. As such, a state may effectively have no time to evaluate the signs of an incoming attack. Instead, it may need to respond immediately or risk disablement.18 If a state has the means, it may elect to respond nearly simultaneously, before the attack can occur fully, constructing an AI-enabled system to scan for attacks and empowering it to counterattack.
63%
Attempts to incorporate these new capabilities into a defined concept of strategy and international equilibrium are complicated by the fact that the expertise required for technological preeminence is no longer concentrated exclusively in government. A wide range of actors and institutions participate in shaping technology with strategic implications—from traditional government contractors to individual inventors, entrepreneurs, start-ups, and private research laboratories. Not all will regard their missions as inherently compatible with national objectives as defined by the federal government. ...more
63%
The unresolved challenge of the nuclear age was that humanity developed a technology for which strategists could find no viable operational doctrine. The dilemma of the AI age will be different: its defining technology will be widely acquired, mastered, and employed. The achievement of mutual strategic restraint—or even achieving a common definition of restraint—will be more difficult than ever before, both conceptually and practically.
64%
In this age, deterrence will likely arise from complexity—from the multiplicity of vectors through which an AI-enabled attack is able to travel and from the speed of potential AI responses.
64%
Before weapons are deployed, strategists must understand the iterative effect of their use, the potential for escalation, and the avenues for deescalation. A strategy of responsible use, complete with restraining principles, is essential. Policy makers should endeavor to address armament, defensive technologies and strategies, and arms control simultaneously, rather than treating them as chronologically distinct and functionally antagonistic steps. Doctrines must be formulated and decisions must be made in advance of use.
64%
In a decision that has partly foreseen this challenge, the United States has distinguished between AI-enabled weapons, which make human-conducted war more precise, more lethal, and more efficient, and AI weapons, which make lethal decisions autonomously from human operators. The United States has declared its aim to restrict use to the first category. It aspires to a world in which no one, not even the United States itself, possesses the second.
65%
Each major technologically advanced country needs to understand that it is on the threshold of a strategic transformation as consequential as the advent of nuclear weapons—but with effects that will be more diverse, diffuse, and unpredictable.
65%
In earlier eras, only a handful of great powers or superpowers bore responsibility for restraining their destructive capabilities and avoiding catastrophe. Soon, proliferation may lead to many more actors assuming a similar task.
65%
Leaders of this era can aspire toward six primary tasks in the control of their arsenals:
65%
First, leaders of rival and adversarial nations must be prepared to speak to one another regularly, as their predecessors did during the Cold War, about the forms of war they do not wish to fight.
65%
Second, the unsolved riddles of nuclear strategy must be given new attention and recognized for what they are—one of the great human strategic, technical, and moral challenges.
65%
Third, leading cyber and AI powers should endeavor to define their doctrines and limits (even if not all aspects of them are publicly announced) and identify points of correspondence between their doctrines and those of rival powers.
65%
Fourth, nuclear-weapons states should commit to conducting their own internal reviews of their command-and-control and early warning systems.
66%
Fifth, countries—especially the major technological ones—should create robust and accepted methods of maximizing decision time during periods of heightened tension and in extreme situations.
66%
Finally, the major AI powers should consider how to limit continued proliferation of military AI or whether to undertake a systemic nonproliferation effort backed by diplomacy and the threat of force. Who are the aspiring acquirers of the technology that would use it for unacceptable destructive purposes? What specific AI weapons warrant this concern? And who will enforce the redline? The established nuclear powers explored such a concept for nuclear proliferation, with mixed success.
66%
A discussion of cyber and AI weapons among major powers must be undertaken, if only to develop a common vocabulary of strategic concepts and some sense of one another’s redlines.
67%
Nearly any person with a primary education can do a reasonable job of predicting possible completions of a sentence. But writing documents and code, which GPT-3 can do, requires sophisticated skills that humans spend years developing in higher education. Generative models, then, are beginning to challenge our belief that tasks such as sentence completion are distinct from, and simpler than, writing. As generative models improve, AI stands to lead to new perceptions of both the uniqueness and the relative value of human capabilities. Where will that leave us?
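The contrast the passage draws between sentence completion and genuine writing can be made concrete with a toy next-word model. The bigram counter below is a drastically simplified stand-in for what generative models do; it bears no resemblance to GPT-3's internals, and the corpus and function names are invented for the illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count which word follows which in a small corpus."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def complete(prefix: str, follows: dict, n: int = 3) -> str:
    """Greedily extend a sentence with the most frequent next word."""
    words = prefix.lower().split()
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break  # no recorded follower for this word
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

corpus = "the cat sat on the mat and the cat ran to the door"
model = train_bigrams(corpus)
print(complete("the cat", model))
```

Even this crude counter produces locally plausible continuations, which is the passage's point: the line between "simple" completion and skilled writing is one of scale and sophistication, not of kind.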
69%
Whatever AI’s long-term effects prove to be, in the short term, the technology will revolutionize certain economic segments, professions, and identities. Societies need to be ready to supply the displaced not only with alternative sources of income but also with alternative sources of fulfillment.
Todd Mundt
Fairly dismissive considering that it may or may not be a big problem.
70%
Some segments of society may go further, insisting on remaining “physicalists” rather than “virtualists.” Like the Amish and the Mennonites, some individuals may reject AI entirely, planting themselves firmly in a world of faith and reason alone. But as AI becomes increasingly prevalent, disconnection will become an increasingly lonely journey.
70%
While amino-acid sequences can be quite useful for studying proteins, they fail to capture one critical aspect of those proteins: the three-dimensional structure that is formed by the chain of amino acids. One can think of proteins as complex shapes that need to fit together in three-dimensional space, much like a lock and key, in order for particular biological or chemical outcomes—such as the progression of a disease or its cure—to occur. The structure of a protein can, in some cases, be measured through painstaking experimental methods such as crystallography. But in many cases, the methods ...more
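The sequence-versus-structure gap the passage describes can be stated concretely. The fragment below is a hypothetical illustration: the sequence string and flat backbone coordinates are invented, and the one-coordinate-per-residue layout is a deliberate simplification of real protein geometry.

```python
# A protein is recorded as a one-dimensional string of amino-acid
# letters, but its behavior depends on a three-dimensional fold.
sequence = "MKTAYIAKQR"  # hypothetical 10-residue fragment

# Toy stand-in for structure: one (x, y, z) coordinate per residue,
# laid out as a straight chain at roughly 3.8 Angstrom spacing. A real
# fold would bend this chain into a specific shape; the sequence alone
# cannot tell the straight chain from the folded one.
structure = [(round(i * 3.8, 1), 0.0, 0.0) for i in range(len(sequence))]

# Predicting `structure` from `sequence` is the protein-folding problem
# that methods such as crystallography otherwise solve case by case.
print(len(sequence), "residues ->", len(structure), "coordinates")
```

The point of the sketch is the information gap: the string fixes the chain's composition, while the coordinates — the part that determines the lock-and-key fit — must be measured or predicted separately.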
71%
In the future, children may grow up with AI assistants, more advanced than Alexas and Google Homes, that will be many things at once: babysitter, tutor, adviser, friend. Such an assistant will be able to teach children virtually any language or train children in any subject, calibrating its style to individual students’ performance and learning styles to bring out their best. AI may serve as a playmate when a child is bored and as a monitor when a child’s parent is away.
72%
Just as parents a generation ago limited television time and parents today limit screen time, parents in the future may limit AI time. But those who want to push their children to succeed, or who lack the inclination or ability to replace AI with a human parent or tutor—or who simply want to satisfy their children’s desire to have AI friends—may sanction AI companionship for their children.
72%
The irony is that even as digitization is making an increasing amount of information available, it is diminishing the space required for deep, concentrated thought. Today’s near-constant stream of media increases the cost, and thus decreases the frequency, of contemplation. Algorithms promote what seizes attention in response to the human desire for stimulation—and what seizes attention is often the dramatic, the surprising, and the emotional. Whether an individual can find space in this environment for careful thought is one matter. Another is that the now-dominant forms of communication are ...more
73%
Given the pressures for deployment, limitations on AI uses that are, on their face, desirable will need to be formulated at a society-wide or international level.
74%
Free speech needs to be continued for humans but not extended to AI. As we said in chapter 4, AI has the capacity to generate misinformation, such as deep fakes, in both high quality and large volume—misinformation that is very difficult to distinguish from real video and audio recordings.
75%
To chart the frontiers of contemporary knowledge, we may task AI to probe realms we cannot enter; it may return with patterns or predictions we do not fully grasp. The prognostications of the Gnostic philosophers, of an inner reality beyond ordinary human experience, may prove newly significant. We may find ourselves one step closer to the concept of pure knowledge, less limited by the structure of our minds and the patterns of conventional human thought. Not only will we have to redefine our roles as something other than the sole knower of reality, we will also have to redefine the very ...more
77%
AI will open unprecedented vistas of knowledge and understanding. Alternatively, its discovery of patterns in masses of data may produce a set of maxims that become accepted as orthodoxy across continental and global network platforms. This, in turn, may diminish humans’ capacity for skeptical inquiry that has defined the current epoch. Further, it may channel certain societies and network-platform communities into separate and contradictory branches of reality.
78%
AI subtracts. It hastens dynamics that erode human reason as we have come to understand it: social media, which diminishes the space for reflection, and online searching, which decreases the impetus for conceptualization. Pre-AI algorithms were good at delivering "addictive" content to humans. AI is excellent at it. As deep reading and analysis contract, so, too, do the traditional rewards for undertaking these processes. As the cost of opting out of the digital domain increases, its ability to affect human thought—to convince, to steer, to divert—grows. As a consequence, the individual ...more
78%
Yet in the worlds of media, politics, discourse, and entertainment, AI will reshape information to conform to our preferences—potentially confirming and deepening biases and, in so doing, narrowing access to and agreement upon an objective truth. In the age of AI, then, human reason will find itself both augmented and diminished.