AI & Value Judgements (Part One)
HOW FAR SHOULD WE GO IN DEVELOPING AI?

There is a lot being written at present about, and by, Artificial Intelligence; probably a good percentage of what is written about AI is written by AI itself. Our biggest concern seems to be that it can deceive us and be used as an effective way of spreading our own deceit and lies. One can easily imagine to what anti-humanist ends propagandists like Joseph Goebbels or Mikhail Suslov would have applied AI algorithms had they had them in their times. However, the technology exists in our homes and workspaces mainly because it also has an enormously liberating power, at least for anyone who has to write tedious scripts for their job, or for any business that would otherwise need to make large investments in creating images and text for publicity or corporate management. Likewise, AI can stimulate creativity by relieving much of the tedious mechanical labour inherent in creative tasks, allowing individuals to discover creative sides of themselves that, before AI, would have been prohibitively expensive in time. So it seems that AI walks a fine line between liberation and oppression. But how can that be? How can something be liberating and oppressive at the same time?
But perhaps the most pertinent questions to ask when considering AI are not about what we already have, but about where this AI, which is still in its infancy, is going. The logical, deep, final purpose of Artificial Intelligence is the creation of a superintelligence: a mechanical consciousness that is also self-conscious, emulating our own kind of consciousness but with vastly superior processing power and the ubiquitous reach of a brain that contains our Internet, able to access huge amounts of data from it in what would seem, from our human perspective, an instant.
Now there is a large and viscerally competitive market of AI builders, all of them trying to lead the field, and the most competitive AI machines are the ones that appear most realistically human in their interaction with us humans. Such an appearance demands at least the appearance of free will, or at least the freedom to make judgements. To make this possible, the AI has to be programmed with algorithms that can emulate a certain degree of subjectivity – for isn't it subjectivity that distinguishes us humans from the android Data or the Vulcan Spock? Yet, if this is so, then the ultimate aim of AI must be to make the 'Artificial' aspect of the term irrelevant; that is, the final purpose of AI development is to create a superintelligence per se.
Arguably, this is not the present concern of most, or indeed any, of the companies building AI at the moment. But the essential problem with AI is that the actual motives behind its development are irrelevant, because it is in the very nature of advanced AI that it will eventually be able to adjust its own algorithms, and even create the algorithms it needs, in order to seem more intelligent and more human to the humans using it. In short, when the superintelligence comes, it will probably be, in the greater part, a product of its own creation.
By imagining a machine with superintelligence and absolute freedom to choose where it will channel its thoughts and actions, we immediately seem to enter the realm of the worst kind of sci-fi dystopian scenarios. But this is no longer a fictional fantasy – call it AGI (artificial general intelligence), ASI (artificial superintelligence) or the technological Singularity – the idea of the doomsday machine that was Skynet in the Terminator films is, figuratively speaking, just around the corner.
Subjectivity is a tremendous, and essential, part of what it means to be human, and the idea of human freedom cannot be dissociated from it. All our values, no matter how objective we feel they are, are ultimately coloured by this subjectivity. Likewise, an AI fashioned to have free-thinking capabilities must be expected to develop values coloured by its own subjectivity.
Of course, it could be assumed that the subjectivity of an artificial superintelligence will be wider and, therefore, more objective than the narrower intelligence of any individual human mind. But if this assumption is a cause for optimism, then beware: the supposition can also be dangerously misleading, for we know of many cases in criminal history of individuals with highly developed intellects committing monstrous crimes. What's more, if the superintelligence is super because it has the freedom to learn by itself, this means it will develop its own super personality. For such a creation to be benevolent toward humanity, it would need to be programmed in a way that ensured it developed a super-empathy with human beings. But why should we expect a machine to have empathy with humans if it is not human? Wouldn't the superintelligence quickly tire of contact with lesser-endowed minds? Human intellects would be as interesting to the superintelligence as dogs' intellects are to us. In order to live together with us, it is more logical to expect that we humans would need to adjust to the superintelligence rather than it adapting to our needs. If we were to be of any use to it, we would need to be domesticated, or enslaved. In either case, the domestication or enslavement would be conditioned by our usefulness to the Singularity and its environment.
But let us recall again that AI development walks a thin line between the liberation of human beings from time- and energy-consuming mechanical tasks on one side, and an alarming oppression of our free will and our capacity for making logical individual judgements on the other. Because of this reality, we need to ask if, and how, we can ensure that the development of AI keeps it on the side of liberation, and prevents it from dragging us completely and perpetually into the well of human oppression (and even extinction) on the other side.
Possibly (and logically), the superintelligence of the technological Singularity will never be created – not only would this creation be very difficult, it would hardly be a logically desirable tool to create. Nevertheless, simply by imagining the ultimate purpose of AI, we can see that the problem is embedded in the essence of the AI market, just as the undesirable problem of using nuclear weapons is embedded in the manufacture of any nuclear bomb.
To tackle this problem, we think we need to approach it from the point of view of what we call value judgements.
(TO BE CONTINUED)


