The real risk with AGI isn't malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.
In a nutshell. The problem is that people don't get worried just from hearing this. It doesn't sound ominous because we assume we can specify goals that align with our own. People don't really take the genie problem seriously; Terminator still scares more people.