AUTONOMOUS AI CORRECTION
Ever since I read about a mobile computer at MIT that “learned” from its explorations, reprogrammed itself based on what it learned, and surprised its programmers when they could no longer understand its programming, I’ve been fascinated by the idea of autonomous Artificial Intelligence (AI) learning. Would an AI whose self-programming is no longer decipherable to its makers “understand” the world in the same way we do? Would it create and hold secrets (mostly assumptions, I presume) about its world, its initial programmers, and the millions of slightly different copies of its programmers that inhabit that world? Would it be inclined to share or question its assumptions and understandings? Would it even be able to? Even more interesting, would it be able to error-correct its programming when its assumptions lead it to conclusions that differ from the real world?
Science News (20 June 2020), in an article entitled “Quantum Computing’s Error Problem,” touches on the latter question. It’s all about inherent error rates. Silicon-based computers, like the one I’m writing on, are said to have an intrinsic error rate of about one in a quadrillion operations; quantum computers, which I assume AIs would favor, are said to have an intrinsic error rate of about one in a hundred operations. This suggests that quantum-computer-based AIs would likely accumulate a large number of errors in operation, memory, and interpretation. They would likely be highly diverse (individualistic) in the way they interpret and respond to the world, making any self-programming effort highly unusual and, like the example above, quickly incomprehensible to its original programmers.
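To give a feel for the gulf between those two error rates, here is a back-of-the-envelope sketch. The two rates are the ones quoted from the Science News article; the million-operation workload is an arbitrary figure of my own for illustration.

```python
# Rough comparison of the intrinsic error rates quoted above.
# The rates come from the article; the operation count is illustrative.

SILICON_ERROR_RATE = 1e-15   # ~1 error per quadrillion operations
QUBIT_ERROR_RATE = 1e-2      # ~1 error per hundred operations

operations = 1_000_000       # an arbitrary million-operation workload

expected_silicon_errors = operations * SILICON_ERROR_RATE
expected_qubit_errors = operations * QUBIT_ERROR_RATE

print(f"Silicon: ~{expected_silicon_errors:.0e} expected errors")  # ~1e-09
print(f"Qubits:  ~{expected_qubit_errors:,.0f} expected errors")   # ~10,000
```

Over a mere million operations, the silicon machine almost certainly makes no error at all, while the uncorrected quantum machine makes on the order of ten thousand.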
The article states that quantum bits, or qubits, are inherently fragile, being made from individual atoms, electrons trapped within tiny “quantum dots,” or small superconducting traps or tunnels. The problem is that qubits can’t be copied without changing the original. The solution, according to the article, is redundancy in information storage: keep the original qubit while creating one or more “helper” qubits, called ancillas. Checking the ancillas allows one to check the veracity of the data without disturbing it. Current estimates require a minimum of 49 qubits to create enough “surface code” to allow error detection and, hopefully, to verify the data that a quantum-computer-based AI uses to “interpret” its world. Not necessarily so the interpretation.
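The “helper” idea can be sketched with a classical toy, though the analogy is loose: real quantum error correction measures parities via ancilla qubits without collapsing the data, while this three-bit repetition code simply shows the logic of locating a single flipped bit from parity checks alone, without ever reading the logical value directly. All function names here are my own illustration, not any real quantum-computing API.

```python
# Classical toy analogy for ancilla-based error detection:
# a three-bit repetition code whose "syndrome" (two parity checks)
# pinpoints a single bit-flip without inspecting the logical bit itself.

def encode(bit):
    """Encode one logical bit as three physical bits."""
    return [bit, bit, bit]

def syndrome(codeword):
    """Two parity checks, analogous to ancilla measurements:
    compare bits (0,1) and bits (1,2)."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword):
    """Use the syndrome to locate and undo a single corrupted bit."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(codeword))
    fixed = codeword[:]
    if flip is not None:
        fixed[flip] ^= 1
    return fixed

word = encode(1)       # [1, 1, 1]
word[2] ^= 1           # a stray "bit flip" -> [1, 1, 0]
print(correct(word))   # error located and undone -> [1, 1, 1]
```

The surface code mentioned in the article works on the same principle at far greater scale, which is why dozens of physical qubits are needed to protect even a small amount of logical data.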
In my recently released book, THE EDGE OF MADNESS (Aignos 2020), I make the broad, unstated assumption that quantum computers permeate future technology, with both their advantages and, again unstated, their disadvantages. What this means for our three firebrands, trying to make sense of their world, you, dear reader, will have to guess.
Published on July 05, 2020 12:20