7/10
This book was full of interesting and thought-provoking points; however, I had already encountered many of the concepts in an AI Governance class I took last year, in which this book was actually recommended. There were quite a number of points on which I disagreed with Lobel, but she presented strong arguments without being overbearing. Lobel does an excellent job of illustrating how much inequality still exists today. For example, data from eBay shows that women receive only 80% of what men earn for selling an identical item. She also shares shocking statistics on the gender pay gap, backed by solid evidence.
One new concept for me was experiential learning - for example, the documentary ‘Chasing Coral’, where powerful visuals and storytelling transform scientific data into emotional understanding, ultimately fostering empathy. Lobel distinguishes between two types of empathy:
* Emotional Empathy – feeling what another person feels
* Cognitive Empathy – understanding another person’s perspective without necessarily sharing their emotions
This distinction is important because cognitive empathy is often emphasised in AI and workplace contexts, as it allows for perspective-taking without emotional overload. I found this fascinating because I’d never thought to categorise empathy in this way, and having these labels makes it easier to identify and appreciate different empathetic strengths in people.
Throughout the book, Lobel provides numerous examples of AI being used for good:
* Healthcare: diabetes apps that monitor blood sugar, breast scans for early cancer detection
* Human trafficking: biometric data to identify missing persons
* Environmental protection: monitoring wildlife, evaluating conservation efforts, and combating poaching with dynamic smart patrol routes based on predictive models
She also discusses an app called Blind, which allows employees to anonymously discuss sensitive workplace issues like sexism, wage disparity, and harassment. These examples underscore the significant positive impact AI has had, and continues to have - a message that’s crucial to share given AI’s often negative reputation. Too often, people focus on hypothetical risks while overlooking the many ways AI has been delivering real-world benefits for years.
The section on sex tech was fascinating, including information about sex robots and the broader psychological and social implications of integrating robots into society. Similarly, the conversation around female voices in AI companions and smart speakers was interesting but provocative. Lobel argues that female voices are chosen because women are perceived as subservient and people are more willing to take orders from them. Personally, I’m not convinced; I think female voices often sound friendlier and more approachable, which makes sense for companionable technology.
I loved learning the origins of the names Alexa and Echo: Alexa references the Library of Alexandria, and Echo comes from the Greek myth of Echo and Narcissus. Lobel states that “Eventually, her echoing voice is all that remains, a disembodied female voice reflecting male narcissism”. Lobel implies that a pervasive male narcissism is the reason Amazon named these speakers ‘Echo’, but I think this is speculative rather than definitive.
Another compelling section explored how gendered speech patterns are shaped by social norms. For instance, women in societies with greater gender inequality tend to speak in softer, higher-pitched tones. Lobel cites examples like Margaret Thatcher, who trained with a vocal coach to give her voice a deeper, more authoritative pitch, and Elizabeth Holmes, who adopted a baritone voice as part of her invented persona (she was later convicted of fraud).
I found it fascinating to learn how smart assistants like Siri respond to the question, “Are you a feminist?” Today, their answers affirm that they support feminism and believe in gender equality, but the responses weren’t always so progressive. Lobel notes that while these revised responses show progress, chatbots remain largely evasive. I disagree, given the very response she quotes: “Yes, I am a feminist and believe in equality”. Lobel argues that a more ambitious step would be to design features that actively educate users and encourage polite interaction with smart assistants, thereby addressing the deep-seated misogyny to which AI has unwittingly given a new 21st-century platform. I disagree with Lobel here too. We don’t need to start speaking politely to chatbots to solve society’s misogyny problem. I give commands to my smart speakers because I know they are machines - just as I speak in a silly, affectionate voice to my cat because I know he’s an animal that doesn’t understand language. We shouldn’t confuse deliberate behavior with insolent intent. The real issue is teaching people to treat other people with respect, not anthropomorphizing technology.
The evolution of stock imagery was also striking. In 2007, the most-sold image under “woman” was a sexualized nude; by 2017, it was a woman hiking in Banff - symbolizing freedom, independence, and ambition rather than sexuality. While I agree this shift reflects progress, I personally find sexualized imagery empowering rather than reductive. I’d also be curious to compare male imagery over the same period; a quick search suggests men are sexualized too, so perhaps this is less about gender and more about attention-grabbing visuals.
Critiques of some of Lobel’s arguments:
* Her claim that mailing DNA to companies like 23andMe compromises your family for generations felt conspiratorial. She offered no explanation, and I strongly disagree that there’s a valid privacy argument here.
* Her criticism of big tech for algorithmic bias came across as preachy, and frankly felt hypocritical considering she likely chooses to use the very services provided by those companies. While I agree that equality matters, I dislike blanket attacks on big tech.
Quotes:
- “Technology has for centuries reconfigured identities and societies, but never has this reconfiguration been so rapid and acute as in our times. The idea of smart machines being introduced into every aspect of our lives is both seductive and terrifying. With great computing power comes great responsibility. At stake is no less than our humanity.” Excellently put!
- “To me, it always seemed that the solution had to be wisdom. You did not refuse to look at danger. Rather, you learned how to handle it safely. Shifting the narrative to the opportunities for change can inspire us to rethink technology's role in promoting equality and equality's role in technological developments.”
- “Like with every technology that observes and tracks us, we are walking a fine line. Constant monitoring can clean up a hostile work environment but it can also chill speech and invade privacy, creating a digital surveillance system that could easily cross over into being overly intrusive. This level of monitoring may give us pause. We don't want the workplace to become so sanitized or so Orwellian, that our autonomy and agency are stripped to numbers and warnings. Nothing was your own except the few cubic centimeters inside your skull, wrote George Orwell in 1948 in the book “1984”. But now, even those centimeters inside our skulls are readable. We must recognize that our goals are often in conflict.”
- “The waves of industrial revolutions: The revolutions brought on by steam, steel, electricity, oil, and later, the personal computer, have all relied on machines. Now we find ourselves on the cusp of the AI revolution, which is no exception. At its best, automation will allow individuals to devote more time to social and recreational activities, and public policies can focus on alleviating distributional gaps due to labor market displacement.” (I’ll believe each of these last two points when I actually see them happening!) “This time around, though, our machines are taking shapes and forms that look a lot like us.”
Themes: inequality, technology, AI, bias, and more - all explored with clarity and precision. Lobel’s writing is excellent: well-structured, concise, and persuasive without ever veering into a rant.
I’d recommend this book to anyone interested in equality or technology. It’s especially valuable for skeptics of AI, as it highlights the many ways AI is already making a positive impact.
Final thought: There is far more inequality than meets the eye, and AI can play a powerful role in advancing equality.