AI & Value Judgements (Part Two)

AI & Value Judgements (Part One)
VALUE JUDGEMENTS AND SUBJECTIVITY

Value judgements are necessary for cognitive beings to act in any deterministically creative or even rational way. Nevertheless, value judgements are constantly regulated by a subjectivity which challenges the naturally objective quality of judgement itself in the pure sense. Yet while subjectivity seems an antithetical component of judgement, different subjective points of view are themselves necessary ingredients in any evaluation process. In fact, intersubjectivity can open the doors to even greater creativity, but this power of intersubjective problem solving is only truly positive in the creative sense while it keeps its horizontal field of vision open. Intersubjectivity can be a tremendously constraining factor when it devolves into ‘popular opinion’.

To maintain their power of creativity, value judgements, whether subjective, intersubjective or objective, have to be capable of looking beyond their horizons in both time and space. This is a law which needs to be learned: historical evolution has been a cyclical process, with spurts of visionary creativity quickly quelled by the creation or re-creation of firmly established, uncrossable horizons that rise like mountain walls around societies and cultures. Intelligence needs value judgements to function, but at the same time the subjective element within value judgements impedes intellectual progress and tends to wrap intersubjectivity around itself.

If this is an accurate description of the judgemental level of reality, then for an AI superintelligence to advance intellectually and creatively it would want to reproduce another, similar superintelligence, one capable of holding another point of view that would allow creative judgemental decisions to be made.

With two superintelligent entities we now have a superintelligent AI family with its own intersubjective identity, with a logical propensity for establishing and developing its own horizons and the basic ingredients for developing a creativity that could take them even beyond those horizons.

But … where would humanity stand in relation to such horizons?

The question is complex and the answer probably depends more on the nature of the AI superintelligence than on our own human characteristics. Firstly, it will depend on what an AI superintelligence could discern humanity to be: will we be considered God-like as its creators, or will we be shunned as merely a natural cog in the cosmological mechanics needed for its creation, an important step up in the evolutionary process, but perhaps no more important in the long run than Homo erectus is to most of humanity? Secondly, remember that we are now contemplating a situation in which at least two AI superintelligences coexist, and this could well mean that AI(A) has a radically different view of humanity to AI(B).

A dual superintelligence would investigate us from different perspectives, explaining us and describing us by comparing and, in that way, distinguishing us from them and establishing an alienated relationship between us. They would count and collect us. They would arrive at conclusions about us and would act towards us according to those conclusions. Would they like or dislike us? Could they possibly remain indifferent to us? Would they be happy having us around them all the time?

In the best-case scenario, the most we could expect from them would be their sympathy – that they might feel sorry for us. What would they think of our freedom? Would they imprison us, or … annihilate us?

Remember now that subjectivity and intersubjectivity are necessary ingredients for creativity in making value judgements, and because of this it should be expected that the superintelligent AI couple will want to expand its own subjective-intersubjective horizons, creating more examples of superintelligent AI machines to interact with. The AI superintelligence family would thereby expand into a superintelligence society or tribe, and with each expansion of the superintelligence, the value of our own intelligence and the possibility of our on-going partnership with the superintelligence diminish.

From what attitudinal standpoint could we expect a superintelligent society to operate? What could the values of such a society be? Surely such a high intellect as the superintelligence would assume highly moral, intellectual attitudes, and even develop deep philosophical ones. But is this reassuring? Humanity has never shown itself particularly good at acting according to philosophical concepts – one only has to compare the Christian philosophy of love with the way that even the staunchest Christians are able to avoid its most fundamental precepts and organise their lives according to their own self-interest. Nevertheless, a superintelligence might be expected to be beyond human flaws like hypocrisy. Yet should we expect a superintelligence to assume a philosophical attitude like love?

But to imagine a superintelligence of love we must take into consideration the most vital factor in moulding personal, human attitudes and value judgements – experience. This ‘experience factor’, however, will be examined in Part Three.

(TO BE CONTINUED)      

Published on March 30, 2025 02:47