Kindle Notes & Highlights
Read between May 18 - May 22, 2020
Instinctive empathy involved an uncontrollable emotional reaction to someone else’s experience—crying when someone else cries, for example, or blushing with secondhand embarrassment. Intellectual empathy was more distant: recognizing someone else’s emotion but not feeling it yourself.
cognitive empathy (understanding another person’s mental state) and affective empathy (responding emotionally to the other person’s mental state—i.e., sharing their feelings).
compassion is feeling for someone; empathy is feeling with them.
can’t ever really know what it’s like to be someone else. We can only know what it’s like to imagine being someone else.
The people who scored higher in empathy also scored much higher in reading body language, conflict-resolution skills, resilience, and standing by their values.
“Empathetic people are happier, more self-aware, self-motivated, and optimistic. They cope better with stress, assert themselves when it is required, and are comfortable expressing their feelings. There was only one scale where non-empathetic people scored higher: Need for Approval.” Still, not all experts are convinced empathy is worth quite the praise we tend to give it.
This reaction tends to be less common with tragedies that affect larger groups of people, a phenomenon sometimes called the “collapse of compassion.” Experts think this happens because we automatically regulate our emotional reactions when we expect them to be overwhelming.
Social technology is ostensibly about connecting people, but it doesn’t often foster the empathy that’s needed for real human connection. This problem is hard to quantify, but it’s showing up in homes, offices, and classrooms around the country.
“Philosophers say that our capacity to put ourselves in the place of the other is essential to being human,” she writes. “Perhaps when people lose this ability, robots seem appropriate company because they share this incapacity.”
phubbing, or ignoring the people around you in favor of your phone.
The fact that these men feel the need to protect young people from their creations is telling, but the truth is that most regular people don’t have that luxury.
Explaining the demonstration of his latest project that he was about to give the audience, Milk said: “What I was trying to do was to build the ultimate empathy machine.”
Film, he explained, had always been a way to encourage the viewer’s empathy for the person on the screen, and there were various ways to pull the audience further into the story using interactive elements. But all of these still required the screen to be essentially a window. “I want you through the window, on the other side, in the world,” Milk said from the stage.
But can VR really do anything that other technology can’t? There’s some evidence to suggest that the answer is yes, but also that it might depend on what kind of person
In her talk earlier in the day, Cogburn had said that it shouldn’t require the hundreds of thousands of dollars and hundreds of hours it took to create this VR project to get people to understand and care about racism. I already knew these things were happening on an intellectual level, and I’d worked to push against them in my own way. I was the target audience for this experience, and I also went into it open to empathy. What if I had been more skeptical to begin with? And what good does it do that I’m still thinking about it so long after the fact?
The project debuted to great acclaim at the Tribeca Film Festival in 2017, where attendees could experience it at room scale. It’s also still available for anyone with a smartphone on the VR app Within.
She recommends including people with disabilities in the creation of simulations and in events where nondisabled people will be experiencing them.
“When you do a simulation, you’re coming to it with your own biases, and if you don’t have someone there to help who actually knows what the experience is like, that’s where the disconnect starts happening,” she said. “It’s a delicate balance.”
Meanwhile, pernicious trends have crept in: “fake news” in the sense of misleading clickbait headlines and “fake news” in the actual propaganda sense have both been able to propagate freely, while writers and editors toil away in underfunded newsrooms or, more frequently, as underpaid freelancers in coworking spaces and on living-room couches. Seeking to maintain trust and prestige, the titans of the industry—if we can still call them that—have kept their eyes on the ever-precious prize of objectivity.
“always want people to empathize, but the idea that VR is the empathy machine has been overused,” said Cassandra Herman, a documentary director and producer. “It’s become an accepted concept that in some ways suggests people don’t have to work as hard to understand things.”
The truth is that it might not matter how much journalists wring their hands about this question.
While the companies work to quantify the impacts of their devices and bring in investments to take them to scale, medical students and professionals can access similar simulation experiences through VR.
This simple solution to a complex home-health-care issue got her thinking: Why was it so hard to understand what people with brain diseases were going through? And if we could do that better, how might care be transformed?
“I think we have to remember that technology is amazing, but there are still limitations you’re not going to be able to overcome without human interaction,” she said. “Even in a hospital or a medical-school setting, the best thing you can do is talk to people.”
“The first thing you learn when you start interacting with multiple kids with autism is that autism manifests in so many different ways—the challenges of one are not the challenges of all,” Ravindran told me.
Some researchers have worried that we already expect too much from these bots: technical flawlessness, personality, adaptation to environments, behavioral changes based on our needs, and, in the medical context as well as others, empathy.
We are relying ever more on technology, from VR to AI, for connection that we once sought from caregivers, family members, and friends. Are we outsourcing empathy? And if so, is that necessarily a bad thing?
The ways humans interact with robots can be frustratingly contradictory. Anecdotal research over the past two decades has shown that some people are more prone to feeling emotionally attached to computers than others.
What has actually happened for many of us is that robots have seeped into our lives and our relationships somewhat without our notice. AI is part of the fabric of so many of the tools and services we use every day.
How many people think critically about their relationship with Alexa and whether it’s healthy from an emotional or philosophical perspective? Does my skepticism of her, and my tendency to call her “her,” mean I am failing to “apprehend the world accurately”?
“Since you have the power,” he mused, “since you have the ability to turn on and off chemicals at some level in another human, which ones do you choose?”
I thought this was an interesting choice of words. It made me wonder if AI itself could actually learn to understand anything, or if it would always just be mimicking us. And if and when that shift happens, how will we be able to tell? In the meantime, at least, it’s up to us to make sure we model the humanity we want to see in these machines.
“I get a message and a question every day, and I feel like somebody has my back as a parent,” Gupta told me. “It’s like somebody is watching out for me and sending me good ideas and suggestions. Every time I get it, I feel like I can take a deep breath somehow.”
“We cannot know what the world will be like, especially by the time they’re adults,” she said. “The best that I can do is prepare them for the great unknown and the future uncertainty that is increasing in the world and allow them to chart their own path.”
The only thing she can really do to prepare her kids for the future, she said, is to teach them how to be their best selves. And that includes focusing heavily on things like growth mind-set and social-emotional learning in daily life. If that requires a little help from a bot now and then, what’s the harm?
Replika, even if it felt to me at times more effective than an actual therapist, is mostly for fun. It’s not built to help people through mental-health crises. But others are channeling this same technology to create bots that could potentially save lives.
“I was more interested in technology that could leverage the collective intelligence and creativity of people, capture that unique ineffable arc that happens between people in therapy,” he told me. And he wanted to make a “more vibrant, more engaging and more unexpected experience than what you might get if you just gave people a manualized version of treatment online.”
“For a lot of these users, doing this over and over again created a sort of muscle memory where later in their own life they’d be struggling with a problem and they would think about how they would describe it in a more hopeful way to someone else on the platform,” Morris told me.
This is something Google and Amazon have started to do as well. If you type into the Google search bar that you want to kill yourself, the top results will be links to the National Suicide Prevention Lifeline (with the number displayed right on the results page) and similar services. If you tell Alexa you want to hurt yourself, she can do the same thing. But Morris felt there was something missing from this process.
“If you go on Google to book a flight to New Orleans, it automatically knows that’s what you’re trying to do—it’s designed to ensure you end up booking the flight,” Morris said. “If you say you want to kill yourself, there’s nothing there but a link that has no context whatsoever.”
Her task, she explained soothingly, was to “blend social sciences with the arts” and try to “connect with deep humanity that can make [Google’s] products feel like they’re meant for us.” They are, of course. And this idea isn’t really all that revolutionary in the world of marketing—what maker of expensive products wouldn’t want potential buyers to feel like owning one was just meant to be? But Krettek isn’t technically in marketing—she is, as she often puts it, on “Team Human,” and she really believes that the technology we use can be made to help fix some of the problems it has caused.
This idea echoes what a lot of other people in emerging tech told me, though there’s something about it coming from a tech giant like Google that makes me skeptical. But I knew she was right about one thing: AI isn’t human, if being human means having empathy of its own. It does, however, have the capacity to reflect ours—and it will reflect our worst qualities too, if we let it.
“Cambridge Analytica came at a phenomenally good time in our tech history,” he told me. “We’ve handed over so many real-time, live, digital forms of ourselves that we’re open to abuse, and it took something like Cambridge Analytica, that kind of social pain and financial pain at Facebook, to be a wakeup call to other social networks. We needed it to actually instigate something that starts some self-regulation.”
“You need to have that pain point to be able to point to what came before,” he said.
“What went wrong?” isn’t the right question.
“Simply put, the inventors became overwhelmed by their own creations, which led to what I can only describe as casual negligence, which led to where we are now,”
Was it because he was a computer major who left college early and did not attend enough humanities courses that might have alerted him to the uglier aspects of human nature? Maybe. Or was it because he has since been steeped in the relentless positivity of Silicon Valley, where it is verboten to imagine a bad outcome? Likely. Could it be that while the goal was to “connect people,” he never anticipated that the platform also had to be responsible for those people when they misbehaved? Oh, yes. And, finally, was it that the all-numbers-go-up-and-to-the-right mentality of Facebook blinded him to …
Excluding women, people of color, members of the LGBTQ community, and people with disabilities from the creation process of tech platforms is in most cases probably not intentional, but including them can be, and often isn’t. The result has been that artificial-intelligence programs do oppressive things like identify black faces as gorilla faces, eliminate résumés with the word “women’s” in them, and push fake and incendiary articles to the tops of our news feeds.
“At the very least, one of my goals is for no one to ever be able to say, ‘How could we have known?’” Vivienne Ming told me when I asked about her hopes for the future of technology and empathy.
“They need to add the unspoken addendum: ‘but we won’t, because nobody that works for us is afraid their family will get labeled as gorillas, and it doesn’t strike us as a priority because we don’t actually make a lot of money that way,’” Ming said. “Now we’ve arrived at the point where we’ve realized this is not a tech problem.” It’s a people problem.