Kindle Notes & Highlights
She was interested, she said, in how we learn to be good. She wanted to know whether we can train machines to be good in the same ways we train ourselves.
When it comes to the workings of AI in the real world, the ethicist herself cannot escape the prison house of culpability.
I come from unanalyzed, incurious people,
We are all statistics and individuals at the same time.
Artificial Intelligence confronts us with the problem of distributed culpability. Human morality, historically, centers around agency and intentionality. We blame the drunk driver, not the car; we credit the artist, not the brush. AI systems muddy these waters. AIs are not mere tools; their learning algorithms endow them with agency. They make “decisions” based on data, albeit without consciousness or intent. A strict division between human and machine culpability is quickly becoming untenable, creating a landscape where ethical norms strain under unfamiliar weights.
A secret can be more wounding than a lie.
With AI, anyone who pretends we can know the future good of a present-day investment in that sector is a fool. Moral outcomes are always uncertain, no matter how much you dress up your investment with a benevolent halo.
Sheltered by our money and our first world comforts, we will always ignore the suffering of other, more remote people in the face of our own children’s suffering. We implicitly elect that others die rather than that our own child experience injury, or even mild discomfort.
Is a machine responsible for everything it does? Of course not. It is we who are responsible for the consequences of the very freedom we grant to these objects of our creation.
In granting autonomy to an algorithm, we are not condemning the machine to be free. Rather, we are condemning ourselves.
People of a certain demographic tend to get off scot-free. Calls get made, hands get shaken, backs get stabbed.
Algorithms face no such consequences for their misbehavior, either societal or emotional. Punishment, guilt, culpability are alien to them. There are no moral qualms in an algorithm.
“…shouldn’t make these machines because we want them to be good for us, or good instead of us. We should make them because they can help us be better ourselves.”
We have no idea what’s really going on with AI, how it’s changing everything, threatening us in so many ways that we don’t see or understand.
We do the world no good when we throw up our hands and surrender to the moral frameworks of algorithms. AIs are not aliens from another world. They are things of our all-too-human creation. We in turn are their Pygmalions, responsible for their design, their function, and yes, even their beauty.