Almost all goals can be better accomplished with more resources, so we should expect a superintelligence to want resources almost regardless of what ultimate goal it has. Giving a superintelligence a single open-ended goal with no constraints can therefore be dangerous:
Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence