So we build the AI in a secure and isolated computer that can’t start taking over random systems or anything—a box, if you will. The question is this: is that safe? Yudkowsky argues that it is not, because a superintelligent AI would be able to talk its way out of the box. Or, to offer the hypothesis in his precise formulation, “I think a transhuman can take over a human mind through a text-only terminal.”92
Neoreaction a Basilisk: Essays on and Around the Alt-Right