Why the Terminator Conundrum Requires Active Anti-proliferation Policies
The excellent article 'The Pentagon's Terminator Conundrum – robots that could kill on their own' clearly explains the issue now facing weapons developers.
As drones and robots capable of deciding on their own to engage targets come closer to reality, whether to develop such systems becomes a conundrum. It is important to be prepared to face such a threat, yet at the same time the use of this type of weapon will need to remain tightly controlled. Mechanisms similar to those governing nuclear proliferation or chemical weapons might need to be put in place, with the particular challenge that no large, noticeable industrial complex is needed to produce such weapons.
The open letter on autonomous weapons signed by concerned scientists is also worth reading. It states: “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”
At the same time, militaries might instead focus on enhancing human capabilities by teaming humans with robots, in particular to support decision-making in uncertain situations. But the issue needs to be tackled quickly, because the consequences of robots engaging targets without human control could rapidly become a proliferation problem.


