Back in 2008, Shane Legg described this attitude in his thesis, arguing that although the risks were great, so were the potential rewards. “If there is ever to be something approaching absolute power, a super intelligent machine would come close. By definition, it would be capable of achieving a vast range of goals in a wide range of environments,” he wrote. “If we carefully prepare for this possibility in advance, not only might we avert disaster, we might bring about an age of prosperity unlike anything seen before.”

