Should one person or group get to decide the goals adopted by a future superintelligence, even though there’s a vast difference between the goals of Adolf Hitler, Pope Francis and Carl Sagan? Or do there exist some sort of consensus goals that form a good compromise for humanity as a whole?
Life 3.0: Being Human in the Age of Artificial Intelligence