A recent book by Kaitlin Ugolik Phillips caught my attention because I’ve spent a considerable amount of time in the last couple of years reflecting on precisely what the title touts: The Future of Feeling: Building Empathy in a Tech-Obsessed World. I even wrote a theological essay comparing modern surveillance capitalism (and governance) to the idea of divine omnipresence and omnipotence (with obvious qualifying aspects). So, I was very grateful to find this volume, although I read the book rather than listening to the audiobook pictured here. So, what is this empathy in the title, and how does it apply to technology? She quotes futurist Jane McGonigal: “Empathy requires you to use your imagination in the same way. It requires you to get your brain to simulate something you have no personal, concrete experience with.” (p. 58)
As one would expect, this book contains many qualitative stories, but what I truly appreciated was the significant amount of quantitative research cited and the introduction to two tools to combat toxic discussions (aka “use an algorithm to identify trolls efficiently”). First, Faciloscope has an online site where one can paste in conversations from the web and measure the structure of a conversation (p. 36). One receives a chart and a phrase-by-phrase breakdown classifying each move as: 1) staging (setting out the ground rules for a conversation); 2) evocation (identifying possible relationships between the participants); and 3) invitation (direct solicitation of participation through questions and requests). Second, Google has some experimental code available to the user (once they have established a Google Cloud project) that one can use as an API on their website to help moderators identify potentially toxic conversations before the sparks burst into flame wars (p. 37). The latter uses “machine learning” to develop its criteria, though, and the author discovered that a sentence identified as toxic at one point may turn out to be softened by later input (p. 38). [Note: Although not specifically mentioned in the book, following up on these two tools led me to other projects working on the same problem.]
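The book doesn’t name the Google tool, but the description matches the Perspective API from Google’s Jigsaw unit. For the curious, here is a minimal sketch of what moderator-side scoring looks like, using only the standard library; the endpoint and response shape follow Perspective’s public documentation, and the `api_key` (a placeholder here) comes from the Google Cloud project the author mentions:

```python
import json
import urllib.request

# Public endpoint for Perspective's comment analyzer (Google Jigsaw).
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(text: str) -> dict:
    """Build the JSON body Perspective expects for a toxicity check."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,  # ask the service not to retain the comment
    }

def toxicity_score(text: str, api_key: str) -> float:
    """POST a comment and return its summary toxicity score (0.0 to 1.0)."""
    body = json.dumps(build_analyze_request(text)).encode("utf-8")
    req = urllib.request.Request(
        f"{PERSPECTIVE_URL}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # Perspective returns a probability-like score per requested attribute.
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

A moderator tool would flag comments whose score crosses some threshold (say, 0.8) for human review. The score is a model output, not a verdict, which is exactly the author’s point about “machine learning” criteria shifting as the model sees later input.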
Another useful reference was to the Face the Future project, a website full of media and games that allow students to consider the implications of the future. One of the games to which The Future of Feeling: Building Empathy in a Tech-Obsessed World pointed was called FeelThat, essentially a FitBit for emotions (pp. 41-43). Although the product doesn’t exist, the videos used with the project imagine being wired directly into another person’s emotions in good experiences and even in a death experience. Another experiment of which I was unaware was that of Robyn Janz and the GALA (Girls Academic Leadership Project) using VR. Janz shares: “Here’s how I look at it: VR empathy is a fantastic experience built on irony. You go through something that is anywhere from completely joyous and euphoric to absolutely catastrophic and cataclysmic. How you come out of immersion is more to do with your subjective experience in life than anything else.” (p. 55) I also had not heard of Peggy Weil’s Gone Gitmo experience for Second Life (p. 84).
In covering computer games for so many years, I’ve heard a lot of talk about gamification. In this book, I read about Pymetrics and neuroscience-based games used by companies such as J. P. Morgan and Hyatt Hotels in the hiring process (p. 111). While not a game, I was fascinated by a medical simulation called SymPulse that gives electrical jolts to doctors and students so that they can empathize with Parkinson’s patients (p. 122). Also proving interesting to me was the Oculus Rift study of 2017 demonstrating that patients who had a VR experience of being on a beach had less stress and more positive memories of a visit to the dentist (p. 129). But what I’d really like to get my hands on is “…the Thync, a triangular device created by neuroscientists at the Massachusetts Institute of Technology that is, according to marketing materials, “the first consumer health solution for lowering stress and anxiety.” The device electrically stimulates the nerves in the user’s face and neck that have been found to help regulate stress hormones. It’s touted as the lower-tech, lower-risk version of brain stimulation, and some research shows it can help treat epilepsy, anxiety, and depression.” (p. 186)
I loved the recounting of apparent successes in using technology to enhance rather than suppress empathy, but Phillips doesn’t ignore the ugly surveillance capitalism looming beneath the surface. For example, “many schools have started using internet filters and type trackers to detect when students’ search behaviors suggest depression or suicidal ideation; the education company Pearson conducted a ‘social-psychological’ experiment on thousands of unaware math and science students using its software, to see whether encouraging messages helped them solve problems; and one startup company, BrainCo, is raising millions in funding to create electronic headbands that will allow it to analyze students’ brain data.” (p. 56) And, while companies facilitating such experiments are given the right to collect data to “improve their product,” what does that really mean? Phillips asks practical questions about how long data should be kept and who the ultimate keepers of the data would be.
The chapter on virtual reality suggested that VR experiences could lead to more empathy, but that it really depends on the way one thinks about change prior to the experience (p. 67). Phillips cited a 2018 report from the Tow Center for Digital Journalism finding that VR stories “prompted a higher empathetic response than static photo/text treatments and a higher likelihood of participants to take ‘political or social action’ after viewing.” (p. 67) She also gave an account of a VR experience called I am Robot, which put participants in the role of gender-neutral robots. Most experienced a lowered inhibition level that allowed individuals of both sexes to feel free enough to dance in the experience, even though their pre-experience statements indicated that they would not dance (p. 75). But there is a cautionary word: “If VR experiences can trigger empathy in viewers, they can also trigger other feelings: stress, distress, overwhelm, exhaustion, anxiety, and, in cases where people with preexisting trauma may not have been adequately prepared, even symptoms of PTSD.” (p. 76)
I was glad to see Ms. Phillips spend some time discussing how journalistic VR (or immersive experiences) can manipulate empathy. Her big question is how one can remain transparent while taking some liberties with pure objectivity (p. 89). How can this be done when the goal is putting the consumer of the “story” inside the story? (p. 89) She gives specific examples from the world of journalism, then observes: “The threats of manipulation and subjectivity are more easily dismissed when it comes to advertising and nonprofit outreach. But in journalism it’s important to evoke empathy without coming across as if you’re forcefully extracting it.” (p. 90) And again, “’The invisibility of the journalist in VR can be a dangerous illusion in the consumption of media when viewers begin to analyze, relate to, and act on the stories they consume.’” (p. 91)
I was intrigued by the following criticism of such efforts: “Social-justice activists have termed this phenomenon trauma porn, a spectacle that makes viewers feel good about themselves while giving no benefit—and at times perpetuating harm—to the individuals and communities being depicted.” (p. 92) That’s an interesting observation with some validity, but it could be leveled at almost any medium—including books and ordinary documentaries. Indeed, Phillips later admits that in contrast to standard documentaries, “People don’t tend to come away from these pieces remembering facts and figures—they remember feelings. But those feelings can lead them to new perspectives, which is a sign of good journalism.” (p. 95)
The next chapter moves back to the idea of empathy in general, speaking of harassment and empathy training. Why is this important? “Leaders rated as having high empathy by direct reports are 2.5 times more likely to set clear performance expectations, hold others accountable for maintaining high performance, and address performance issues in a fair and consistent manner, according to data from a DDI meta-analysis.” (p. 102) But it was in the discussion of medical empathy that I found this jewel, quoted in turn from Leslie Jamison: “’Empathy isn’t just listening, it’s asking the questions whose answers need to be listened to. Empathy requires inquiry as much as imagination. Empathy requires knowing you know nothing.’” (pp. 119-120)
I was thrilled to learn about VR programs that reduce pain and stress in patients who have blood drawn regularly or who, as burn victims, have their bandages changed frequently. Still, I worry about experiments with AI such as “Ellie,” an AI therapist currently being used in a pilot program with the U.S. military. The AI therapist “admits” at the start of the session that “she” is not a therapist, but most soldiers feel safe talking to her. Right now, their data is private, but what happens when it is turned over to the military? (p. 142) Empathy toward AI or robots is a strange thing. I was fascinated by Phillips’ account of a University of Washington study in 2012 that showed 98% of children thought it was wrong to shut a human being in a closet and 100% were okay with placing a broom in a closet, but only 54% felt it was permissible to shut a robot named Robovie in a closet (p. 144). She also described a German study from 2018: “when eighty-five people were given the choice to switch off a robot they had either just been chatting with or just been using in some functional way. Some of the robots also said out loud that they did not want to be turned off. The results showed that people preferred to keep the bot on when it protested. The participants seemed to be less stressed about turning off the bots that had only been helping them do a task, which wasn’t too surprising. But participants had the hardest time turning off robots that both were functional and objected to being turned off.” (p. 145)
Considering robots, I’ve also been forced to think about what Caleb Chung, co-inventor of the Furby, had to say. In a 2007 TED Talk, Pleo’s creator, Chung, said that he believed “humans need to feel empathy toward things in order to be more human,” and he thought he could help with that by making robotic creatures. As he told the producers of the radio show Radiolab for an episode titled “More or Less Human,” he’d made Pleo in a way that he thought would evoke empathy—giving it the capacity to respond to unwanted touch or movements by limping, trembling, whimpering, and even showing distrust for a while after such an incident. “Whether it’s alive or not, that’s exhibiting sociopathic behavior,” he said, referring to the way the tech bloggers attacked Pleo (p. 148).
At this juncture, Phillips reminds the reader of an underlying theme in speaking of AI. She refers to a TED Talk by Danielle Krettek of Google’s Empathy Lab and summarizes it as: “…the things we worry about and fear when it comes to AI—that robots will destroy us all, or at least take our jobs—tell us a lot about how we feel, and what we fear, about humanity itself.” (p. 161) Also, more than once I’ve heard the story of the CIA testing real-time face-recognition technology and discovering that it didn’t pick up African-American faces. For example, on a picture of the original Star Trek bridge crew, it totally ignored Lt. Uhura (p. 171). I particularly liked the call for an algorithmic accountability group (p. 175).
The Future of Feeling: Building Empathy in a Tech-Obsessed World does exactly what a non-fiction book of its kind is supposed to do. It both frightens me and offers hope. Of course, only the future will determine which will dominate, horror or hope. In the meantime, Phillips offers plenty of food for thought and discussion.