Kindle Notes & Highlights
Read between January 12 - March 5, 2020
A common analogy for empathizing is “putting yourself in someone else’s shoes.” This always makes me think of putting my tiny feet into my dad’s big loafers as a kid and clomping around the house. “Look, I’m Daddy!” I’d say. But my dad obviously didn’t clomp around in shoes ten sizes too big for him and demand attention for it.
“If there is one emotional intelligence skill that we would recommend developing, it’s definitely empathy,” Ilona Jerabek, president of PsychTests, said at the time. “Empathetic people are happier, more self-aware, self-motivated, and optimistic. They cope better with stress, assert themselves when it is required, and are comfortable expressing their feelings. There was only one scale where non-empathetic people scored higher: Need for Approval.”
“Philosophers say that our capacity to put ourselves in the place of the other is essential to being human,” she writes. “Perhaps when people lose this ability, robots seem appropriate company because they share this incapacity.”
the concept of “vaguebooking.” You may have seen it—someone makes a passive-aggressive or vaguely concerning post on Facebook or another social-media platform without mentioning any names or specific details, leaving commenters to wonder what’s up. “Some people are so rude,” for example, or a photo of a hospital bracelet with no context. The researchers found that young people who made posts like this tended to be lonelier and in some cases even more prone to suicidal ideation.
How could he not see the real issue here? I had wondered. It wasn’t until much later that I realized he probably wondered the same thing about me.
person seems angry or upset or shows any emotion. These are extreme examples of conversation gamification, but it’s easy to see in just about every online comment thread that there are numerous people—trolls or not—who start or join conversations with the sole intent of winning.
In other experiments, the researchers found that people who had a conversation in a dark room were better at guessing each other’s mental state, and people who listened to others talk in a dark room were better at it than those who could see the people they were listening to.
Studies show that teaching traits like kindness, compassion, and empathy, in an explicit and intentional way at a young age, can make a difference. A 2011 meta-analysis of social-emotional learning, which many US curricula have embraced in recent decades, suggested that it led to higher graduation rates and safer sex, even eighteen years later.
in a real game of hide-and-seek, players can have the formative experience of going too far. Everyone of a certain age has accidentally hurt someone’s feelings or hit or kicked another kid during a game like this, and when that happened, they could see it on the other person’s face. They were in such close proximity, they couldn’t avoid having a reaction. While a sense of community might exist in a digital space, that kind of accountability often doesn’t.
I took a spin on the Tinybop app Homes, which allows kids to explore illustrated versions of different kinds of living spaces, from a typical American house to a Mongolian ger. I decided to start with a visit to an animated Guatemalan home.
Sara Konrath, the University of Michigan researcher who released the 2010 headline-grabbing analysis about declining empathy among college students, has created her own empathy-teaching app for kids, called Random App of Kindness. Konrath, whose previous research suggested a connection between smartphones and empathy decline, has said she still believes phones can present an opportunity to teach social skills.
Daffy and his team decided to make some changes, and they debuted a new version of the game, called I Am Robot, the next year. This one allowed groups of people to don headsets and become genderless robots attending either a ballet recital, a cocktail gathering, or a dance party. The response from participants was surprising. Men in suits who swore they wouldn’t dance became entirely different people in the genderless VR world; a woman with social anxiety who had struggled to enjoy herself at the conference put the headset on and, inhibitions gone, danced and laughed for the first time in…
“The very concept of empathy creation through VR is an [o]thering process,” journalist Inkoo Kang wrote in Slate in 2017. “So-and-so’s experiences are so vastly different from yours, it’s presumed, that you can only understand their situation if you step into their shoes.”
Empathy is getting so big in the business world that there are at least two massive indices—one created by the Harvard Business Review and one made by UK company The Empathy Business—that rank companies on their empathy. Facebook, Google, LinkedIn, and Netflix regularly top the lists.
If those companies are topping the lists, I don't think they're measuring the right thing.
Attendance is not the same as compliance.
He points to one thing many of the consistently rated “most empathetic” companies have in common: they have a ton of amenities that ultimately send the message to employees that they should never leave the office.
A 2011 review of existing research on self-reported empathy among medical students and residents found that empathy levels declined during medical school and residency.
The veterans who speak to Ellie seem to like her both because she acts like a human, expressing concern, interest, and empathy, and also because she is not a human. If that seems contradictory, it’s a good depiction of how unsure many of us still are about how to feel about bots.
he’d made Pleo in a way that he thought would evoke empathy—giving it the capacity to respond to unwanted touch or movements by limping, trembling, whimpering, and even showing distrust for a while after such an incident. “Whether it’s alive or not, that’s exhibiting sociopathic behavior,” he said, referring to the way the tech bloggers attacked Pleo. But it also made him wonder whether his decision to imbue the bot with such lifelike features had somehow invited this kind of treatment.
Muse might suggest, for example, that a mom should have a paper-airplane contest with her son one evening to practice emotional development. Let him win a few times, Muse would say, but not every time—he needs to learn how to stay motivated, but also that he can’t always get what he wants. This was a suggestion Muse gave Ming to use with her own son. It also once told her to learn, with her daughter, how a toilet works. They both got so into it that they took the toilet apart and had to learn how to put it back together.
We did not yet have the Americans with Disabilities Act. But disabled people existed and lived in New York City and wanted to get around, and that was knowable. Excluding people with disabilities might not have been intentional, but including them could have been, and it wasn’t. The result was—and remains—that a lot of people could not use the subway. What if there had been people with disabilities at the table where design decisions were originally made? This is the same question a lot of up-and-coming technology developers are grappling with.
“It was amazing,” she told me over the phone, still sounding excited about the breakthrough two decades later. “It could track George Washington on the dollar bill. You could show it a picture of a hundred people and it would find all of the faces. Then we showed it the bridge crew of the [Star Trek] starship USS Enterprise, and there was one person missing: Uhura.”
When I asked her about addressing the apparent unintended consequences of a homogenous group of people creating algorithms to run our social lives—consequences like racist AI and widespread trolling and harassment—she stopped me and proposed a thought experiment: What if the consequences weren’t exactly unintended?
For her part, O’Neil is all for empathy, but she doesn’t expect change to come through appeals. She’d rather sue or attack the problem through academia. In 2017 she proposed the creation of an institute aimed at researching algorithmic bias and ways to prevent it in the future. An algorithmic accountability institute would, she wrote, provide ethical training for future engineers and data scientists, offer workshops to highlight the connections between AI and other industries (law, journalism, etc.), and create standards for “human experimentation in the age of big data.”
“Phones have modes, but we have moods,” she said, admitting that the tech industry thus far has not done a good job of acknowledging the full humanness of humans.
“Designing for social interaction requires so much more of us to show up,” she said. “Bringing all of who you are, all of yourself to the table, to the challenge, to the messy uncomfortable struggle of what it is to be human—you don’t get to skip that this time. The next ten years are going to be very different.”
If I really were Jeff Bezos or Mark Zuckerberg, if I had their demographics and backgrounds and experiences and privileges, I might not ever even think to worry about the impact AI could have on my résumé or rap sheet. I’d have concerns, sure, but they would be different. And if I hired mostly people with similar experiences and concerns, would I think twice about the echo chamber this created?
Center for Humane Technology, aimed at helping people use technology more intentionally and recognize warning signs of compulsive behavior and burnout. The organization encourages the developers of new and existing technology to have more empathy for their users, focusing on the quality of time spent with their products rather than the quantity.
I confessed to him that I had enlisted my husband to change my Facebook password and keep it from me at several points during the process of writing this book. I just couldn’t stop opening the app and scrolling mindlessly when I got stuck or needed a distraction,
In one installment, she quoted producer Loira Limbal on the current fatal flaw of VR: “If people don’t see communities that are typically thought of as ‘other’ as human, I don’t think that a VR piece is going to humanize them.” It’s a false premise, Limbal argued, that relies on a trope of putting a “human face” on someone who is considered “other,” i.e., not the cultural norm of white, male, straight, cis, and able-bodied.
the more I’ve spoken to and read the work of emerging leaders like Kamal Sinclair, Cathy O’Neil, Michèle Stephenson, Ami Shah, Kunal Gupta, and Robyn Janz—to name just a few—the less I feel the need to focus on the Silicon Valley giants. These other creators, thinkers, and activists might be called the builders of the second wave of the tech revolution. They are more diverse, more skeptical, more transparent, and, yes, more empathetic, in their work and their communications. My conversations with these people gave me hope that we really will have a feeling future.