The logic runs as follows: a friendly AI that wants to save humanity from itself would want to ensure that it comes into being. To that end, it would threaten anyone who imagined its existence but failed to help bring it about with the eternal torture of a simulation of themselves, which, under the Yudkowskian interpretation of the many-worlds hypothesis, is equivalent to torturing the actual person. Thus, merely by thinking of this AI, you are immediately compelled to donate all of your income toward bringing it about.

