Longtermist thinking isn’t isolated to late-night ruminations by eccentrics. For example, Oxford philosopher Nick Bostrom posed the paperclip thought experiment to illustrate why he believes we need to plan safeguards against superintelligence emerging from machines. The thought experiment goes as follows: an AI instructed to manufacture as many paperclips as possible might, if capable enough, pursue that goal in ways its creators never intended, even converting resources humans depend on into paperclips. The point is that when humans give an AI system a goal, we do not have full control over how that system will reach it.