He believed that AI was more likely to annihilate humanity than anyone realized. Once it reached a certain level of intelligence, for instance, an AI could strategically hide its capabilities until it was too late for humans to control its actions. It could then manipulate financial markets, take control of communications networks, or disable critical infrastructure like electrical grids. The people building AI often had no idea, Yudkowsky wrote, that they were bringing the world closer and closer to its destruction.