And this, I propose, is the inhuman soul of the algorithm. It may think for us, it may work for us, it may organize our lives for us. But the algorithm will never bleed for us. The algorithm will never suffer for us. The algorithm will never mourn for us. In this refusal lies the essence of its moral being.
Artificial Intelligence confronts us with the problem of distributed culpability. Human morality, historically, centers around agency and intentionality. We blame the drunk driver, not the car; we credit the artist, not the brush. AI systems muddy these waters. AIs are not mere tools; their learning algorithms endow them with agency. They make “decisions” based on data, albeit without consciousness or intent. A strict division between human and machine culpability is quickly becoming untenable, creating a landscape where ethical norms strain under unfamiliar weights.
There’s this very unfortunate tendency right now in the AI space to gloss over the more pragmatic aspects of our industry and our product, to pretend that all these new systems are somehow detached from the profit motives of the corporations that developed them and from the supply chains and compute capacity that power them. As if AI has an almost mystical role to play in some imagined altruistic future. But really it’s just chips and electricity and brilliant employees. It’s resources and product.

