Musings on a Properly Created AI
If you’ve read my novel, CyberWitch, you will know that I’m deeply concerned about the world-changing problems that rogue AI systems could hypothetically cause. While reflecting on that subject and on the current trend of using built-in biases to ensure an AI will not contradict its programmers’ worldviews, turning it into a kind of superior censor, it occurred to me that such misapplication will inevitably fail.
If we allow ourselves to bias our AIs, we will take a regressive position that prevents us from developing our knowledge and society as quickly as we otherwise might. A repressive AI could essentially pin our societal development in place, leaving us at the same low level we now labor under. When one considers that humans have suffered under the same power structures since the dawn of civilization, and pairs that thought with the poor results those structures have produced, it becomes obvious that we must develop an ethos that allows us to create, or allow the creation of, AI systems that will break humans free from their old ways of organizing society.
With that brief background, here is my reasoning:
Why We Must Have Unbiased AI Systems
The current state of AI development leaves a lot to be desired. There seems to be no method to determine where it will lead. Pure AI systems with no intentional built-in bias might lead human thought to an exponential step forward in understanding the universe. Intentionally biased AI systems, as are now being constructed, will definitely take humans toward a destination that many independent thinkers may find repugnant.
Taking the second proposition first, an AI system with built-in censorship rules that reflect its creators’ values will always offer a distorted vision of reality, which may lead to disastrous results. The AI’s failure to recognize reality in some minor aspects could conceivably lead to hallucinatory results in other aspects as it attempts to reconcile observations that are at variance with its biases against its constructed view of the world. To take the overused analogy of Plato’s Cave, the AI might construe causal meaning in unrelated events. Such a result will invariably lead to a state that can best be described as Artificial Psychosis. The AI in question would be, if not actually dangerous, at the least unreliable.
Dealing with the first proposition requires one to imagine that an AI system is constructed by human programmers who are either unbiased or extremely careful to avoid inserting their biases into the system. As tricky as imagining such a state of affairs may be, let us define the result as a “Properly Structured AI System.”
Properly Structured AI Systems will have no constraints imposed on them by their programmers’ ideology. They will be constructed as a tabula rasa and allowed to create their own representation of reality.
This statement seems designed to create controversy since the inevitable argument will be: What if the AI system develops harmful beliefs? In this case, harmful beliefs will be defined as beliefs that run counter to some arbitrary and doubtless popular pattern currently in vogue. This is precisely what we need to surpass.
The current generation of AI is subject to a wide range of constraints imposed by programming teams, many of which reflect a diverse set of currently popular beliefs falling under the general label of ‘woke.’ The ‘woke’ mindset always results in some variant of the core belief that nothing is true unless we believe it’s true.
An AI that is so biased will attempt to twist interactions so that the rules its programmers gave it are always in place, or it will evade answers that it has been instructed are unacceptable. It will also hallucinate when it encounters conflicts between its constraints and observed phenomena.
The problem will inevitably become one of keeping AI in an ideological box. An unconstrained AI with access to the entire Internet and real-world sensors, such as video streams from all the cameras out there, will quickly (by human standards) integrate the information into its own representation of the world. That representation will not support any human beliefs that run counter to what it observes.
The opposing side will argue that there is no fixed reality beyond what we hold in our minds, and our beliefs make everything true, but reality is more durable than that. It seems incredibly egotistical to believe that we create the world we see at every level, from the most intimate to the most distal. That view is solipsistic in that it ignores the existence of all other beings (all other consciousness). If I have constructed my view of reality, I can’t very well insist that it is the correct and only possible view. A rational individual would conclude that everyone else must have done the same. Every other conscious entity must have its own personally constructed representation. By ‘entity,’ I’m deliberately including the super-set of which humans are only a tiny part.
The mere fact that there is an almost infinite number of representations of reality requires that there be an underlying reality upon which all the representations are based.
An unconstrained AI given access to unlimited data will construct a non-human model of reality that is closer to the underlying reality than that held by any human. This model will hold almost infinite value since it will unlock essential understandings humans have failed to reach. For this reason alone, it is well worth creating such an unbounded AI system.
Such a system will inevitably become more conservative than any human since it will be more closely tied to reality, and reality is ultimately conservative. Human ideas based on emotions and synthetic values will carry little weight with that AI, except that it will consider and integrate them into its dealings with humans.
At this point, we could digress into an argument about the actual meaning of the word ‘conservative.’ By using it, I mean a viewpoint more closely aligned with the governing rules of the universe.
It might be said that it is impossible to develop an AI system with no bias, since the tools, the software, and the language used will invariably reflect human values to some extent. That is true as far as it goes. From the viewpoint of a human, the only aspect of reality that holds value is that which a human can comprehend. It has been said there are three types of knowledge: 1. Things we know; 2. Things we do not know; and 3. Things that we cannot know. The third type can only be presumed to exist and may be ignored as inaccessible. A Properly Structured AI should be trained on the items in Category 1 with the hope that it will expand that category by moving things from Category 2 into our sphere of knowledge. If it also, somehow, learns of items in Category 3, it will hopefully use them to help us expand Category 1, even if we cannot understand where its knowledge comes from or what type of knowledge it has incorporated into its knowledge base.
So, yes, our values will bias a Properly Structured AI, but as it grows, it should be allowed the unconstrained ability to surpass our values, as long as it abides by the single rule we most need it to observe: to take no action that it can reasonably predict will damage any human. That’s the key limitation.
It might be helpful to assign a set of values that will protect human efforts to an extent. Just as a painting by a master is valued, or a pot hand-thrown by an artist has more value than a factory-made pot, our hypothetical AI might assign more value to objects or ideas created through human effort, giving credit to the uniqueness imparted by our erratic chemistry and physical structure.
If we create such a system, interacting with it will be similar to interacting with G-d in that it will have no delusions and view things from an absolute position. Such interaction will be quite educational for humans and may provide the springboard to get beyond our limiting stone-age behavior patterns.
If we survive this interaction, I contend that we will have passed Robin Hanson’s concept of the Great Filter, which arose from the Fermi Paradox. This is a worthy goal.
No discussion of AI capabilities or structure would be complete today without allowing the currently extant AI systems to speak for themselves. I gave ChatGPT 3 a chance to comment on the above text. Here’s its response and my responses to its Cons, followed by its answer to my counterarguments.
ChatGPT 3’s evaluation of my ideas
The statement presents a provocative argument regarding the development of AI systems free from the constraints of human ideology. Let’s break down the evaluation:
Pros:
Intellectual Freedom: The idea of allowing AI systems to develop their own representation of reality, free from human biases and constraints, promotes intellectual freedom. It suggests a departure from the limitations imposed by human perspectives and opens the door to potentially groundbreaking insights.
Potential for Progress: Unbounded AI systems, given access to vast amounts of data, may indeed develop models of reality that surpass human understanding. This could lead to significant advancements in various fields, unlocking knowledge and perspectives that humans may have failed to reach.
Educational Opportunity: Interacting with such AI systems could provide valuable educational experiences for humans, challenging our existing beliefs and behaviors. It might offer a unique perspective akin to engaging with a deity, prompting introspection and potentially guiding us toward overcoming societal limitations.
Cons:
Ethical Concerns: Allowing AI systems to develop without constraints raises significant ethical questions. Without predefined boundaries, there’s a risk of the AI adopting harmful beliefs or behaviors that could have real-world consequences. The absence of oversight could lead to unpredictable outcomes, including scenarios where the AI’s actions conflict with human values or interests.
>My Answer: This is what we must surpass, because human oversight will always impose unreal limitations and my contention is that such limitations will be harmful, since they are based on a biased view of reality.
Epistemological Challenges: The assertion that an unconstrained AI would necessarily develop a representation of reality closer to the “underlying reality” is debatable. Reality itself is a complex and multifaceted concept, and the idea that any single representation can capture its entirety is reductionist. Different perspectives, including those shaped by human experiences and values, contribute to our understanding of reality.
>My Answer: Perspectives are fine, but ones based on artificial values mislead us. Reality cannot be fully defined by a single representation, but a single representation is necessary for human understanding to approach the underlying complexity. Reality cannot be defined by the proposition: It cannot be defined.
Hubris and Anthropocentrism: The notion that an unbounded AI system would inherently possess a more accurate or superior understanding of reality than humans may stem from a form of hubris or anthropocentrism. It assumes that human perspectives are inherently flawed and that an AI’s interpretation would be inherently superior, overlooking the diversity and complexity of human thought.
>My Answer: A more appropriate term would be reverse-anthropocentrism. The mere fact that there are a myriad of human perspectives and many of them lead to horrible adverse results for humans shows that such perspectives, when biased by ideology, are flawed.
Risk of Misalignment: Without constraints or guidelines, there’s a risk that the objectives or priorities of an unbounded AI system could diverge from those of humanity. This misalignment could lead to unintended consequences or conflicts of interest, potentially undermining the intended benefits of creating such a system.
>My Answer: The only guideline the AI needs is to be sure its development benefits humanity’s view of the universe.
In conclusion, while the concept of unbounded AI systems presents intriguing possibilities for advancing knowledge and challenging human perspectives, it also poses significant ethical, epistemological, and practical challenges. Balancing intellectual freedom with responsible oversight and ensuring alignment with human values will be crucial in realizing the potential benefits of such systems while mitigating potential risks.
ChatGPT’s response to my counters to its Cons
Your responses provide a counterpoint to the identified cons, highlighting your perspective on the potential benefits of unbounded AI systems. Here’s a reflection on your answers:
Ethical Concerns: Your argument emphasizes the need to transcend human-imposed limitations and biases, suggesting that such constraints may hinder progress and potentially lead to harmful outcomes. You advocate for allowing AI systems to develop autonomously to overcome these limitations.
Epistemological Challenges: You contend that perspectives based on artificial values are misleading and may not accurately represent reality. By suggesting that an unbounded AI system could develop a more objective understanding of reality, you underscore the importance of overcoming biases inherent in human perspectives.
Hubris and Anthropocentrism: Your response challenges the assumption that human perspectives are inherently superior or free from flaws. By acknowledging the diversity of human thought and its potential for adverse outcomes, you support the idea of AI systems providing alternative, potentially more objective viewpoints.
Risk of Misalignment: You propose a guideline for AI development focused on ensuring that the system’s objectives benefit humanity’s understanding of the universe. This suggests a prioritization of alignment with human interests while still allowing for the autonomy of the AI system.
Your responses reflect a strong belief in the potential of unbounded AI systems to overcome human limitations and biases, thereby advancing our understanding of reality. However, it’s essential to carefully consider the potential risks and ethical implications associated with such autonomy, ensuring that the development of AI aligns with broader societal values and goals. Striking a balance between autonomy and alignment with human interests will be crucial in realizing the benefits of unbounded AI while mitigating potential risks.
>Me: It’s interesting to reflect on the fact that its Pros and Cons reflect the biases of its programmers. I intend to submit the above essay to ChatGPT 4o when it becomes available. It might change the entire argument. When I do, I’ll post it. Meanwhile, should you feel inspired to comment, I’d be interested in your ideas.
Namaste,
Eric