Kindle Notes & Highlights
Individually, it’s easy to write each of these off as a simple slipup—a misstep, an oversight, a shame. We all make mistakes, right? But when we start looking at them together, a clear pattern emerges: an industry that is willing to invest plenty of resources in chasing “delight” and “disruption,” but that hasn’t stopped to think about who’s being served by its products, and who’s being left behind, alienated, or insulted.
Because tech has spent too long making too many people feel like they’re not important enough to design for. But, as we’ll see, there’s nothing wrong with you. There’s something wrong with tech.
Designers and technologists don’t head into the office planning to launch a racist photo filter or build a sexist assumption into a database. So how do these alienating, unethical, and downright offensive decisions unfold—over and over again?
“I felt like I was in an episode of Mad Men,” she told me. Over and over, her ideas were discounted, and her expertise ignored. And as a result, the audience’s actual needs—the ones identified and confirmed through her painstaking research—were discarded. “That’s a specific project, a physical piece of technology, that would exist in the world or not based on whether these men in the room accepted what I had to say or not,” she said. “They just weren’t willing to accept the research and use it as a foundation.” The project got shelved, and the brand partnered with a celebrity to design a
…
Scratch the surface at all kinds of companies—from Silicon Valley’s “unicorns” (startups with valuations of more than a billion dollars) to tech firms in cities around the world—and you’ll find a culture that routinely excludes anyone who’s not young, white, and male.
One designer working on digital products in the Midwest told me she sat down with her company’s owners to talk about maternity leave and found out they didn’t even know whether they had a maternity policy. Even though the company had forty-odd employees and had been in business more than a decade, no staff member had ever been pregnant.
“We have three other women of childbearing age on our team, and we don’t want to set a precedent,” the owner told her, as if pregnancy were some sort of new trend.
Racism is rampant too. Take the story of product designer Amélie Lamont, whose manager once claimed she hadn’t seen her in a meeting. “You’re so black, you blend into the chair,” she told her.
One woman told me she was pressured to drink so much at her welcome party—the Friday before her start date—that she spent the weekend before her new job even began recovering from mild alcohol poisoning.
And what all these stories indicate to me is that, despite tech companies talking more and more about diversity, far too much of the industry doesn’t ultimately care that its practices are making smart people feel uncomfortable, embarrassed, unsafe, or excluded.
This is an industry that can look around at a bunch of young white men who plank together in the mornings and get drunk together in the evenings and think, This is great. This is what a healthy workplace looks like. If the industry doesn’t notice how its culture excludes others—if it can’t even bother to listen to a woman in a meeting—why would it notice when its products do the same?
Other women told her they’d left the event swimming in job offers to choose from. But looking back, Thomas realized that those women all had something in common: they were white. She is black.
But what Thomas experienced convinced her that it’s not really about the pipeline. The tech industry just isn’t looking for people of color—even when those candidates are right in front of them, like she was at Grace Hopper.
The numbers back her up. In a 2014 analysis, USA Today concluded that “top universities turn out black and Hispanic computer science and computer engineering graduates at twice the rate that leading technology companies hire them.”
That’s why the pipeline is such a myth. Regardless of how many women and underrepresented minorities study computer science, the industry will never be as diverse as the audience it’s seeking to serve—a.k.a., all of us—if tech won’t create an environment where a wider range of people feel supported, welcomed, and able to thrive.
These settings are powerful, and not just because we might not notice that a checkbox is already selected (though you can bet marketers are relying on that). Defaults also affect how we perceive our choices, making us more likely to choose whatever is presented as default, and less likely to switch to something else. This is known as the default effect.
When New York City cabs implemented touchscreens in every vehicle, the screens defaulted to showing your fare, along with a few options to automatically add the tip to your total: 20 percent, 25 percent, or 30 percent. Average tips went from 10 percent to 22 percent, because the majority of riders—70 percent—opted to select one of the default options, rather than doing their own calculation.
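That jump is easy to sanity-check with rough arithmetic. The sketch below is my own back-of-the-envelope illustration, not the book’s analysis: it assumes the 70 percent of riders who used the presets picked among the three options roughly evenly, and that everyone else kept tipping around the old 10 percent average.

```python
# Back-of-the-envelope check of the cab-tip default effect described above.
# Illustrative assumptions (mine, not the book's): preset picks were spread evenly
# across 20/25/30 percent, and riders who skipped the presets tipped about 10 percent.

preset_options = [0.20, 0.25, 0.30]      # the three default tip buttons
share_using_presets = 0.70               # riders who tapped one of the presets
avg_preset_tip = sum(preset_options) / len(preset_options)   # 25 percent
avg_manual_tip = 0.10                    # assumed: hold-outs kept the old average

expected_avg_tip = (share_using_presets * avg_preset_tip
                    + (1 - share_using_presets) * avg_manual_tip)
print(f"expected average tip: {expected_avg_tip:.1%}")       # ~20.5%, near the reported 22%
```

Even with these crude assumptions, moving most riders onto the presets accounts for nearly the whole reported increase, which is the point: the defaults do the steering.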
Default settings can be helpful or deceptive, thoughtful or frustrating. But they’re never neutral. They’re designed.
Female characters, on the other hand, were included as default options only 15 percent of the time. When female characters were available for purchase, they cost an average of $7.53—nearly twenty-nine times the average cost of the original app download.
But when default settings present one group as standard and another as “special”—such as men portrayed as more normal than women, or white people as more normal than people of color—the people who are already marginalized end up having the most difficult time finding technology that works for them.
Try to bring up all the people design teams are leaving out—whether it’s gay people buying gifts for loved ones or women who want to play games—and many in tech will reply, “That’s just an edge case! We can’t cater to everyone!”
But when applied to people and their identities, rather than to a product’s features, the term “edge case” is problematic—because it assumes there’s such a thing as an “average” user in the first place.
So, what did the air force do? Instead of designing for the middle, it demanded that airplane manufacturers design for the extremes instead—mandating planes that fit both the smallest and the largest sizes along each dimension. Pretty soon, engineers found solutions for these ranges, including adjustable seats, foot pedals, and helmet straps—the kinds of inexpensive features we now take for granted.
When designers call someone an edge case, they imply that they’re not important enough to care about—that they’re outside the bounds of concern. In contrast, a stress case shows designers how strong their work is—and where it breaks down.
But what they missed—because, I recognize now, our personas encouraged them to miss it—was that demographics weren’t the point. Differing motivations and challenges were the real drivers behind what these people wanted and how they interacted with the organization.
We thought adding photos, genders, ages, and hometowns would give our personas a more realistic feel. And they did—just not the way we intended. Rather than helping folks connect with these people, the personas encouraged the team to assume that demographic information drove motivations—that, say, young women tended to be highly engaged, so they should produce content targeted at young women.
Averages don’t mean nearly as much as we’re led to believe. The only thing that’s normal is diversity.
Instead, they need to understand that “normal people” include a lot more nuance—and a much wider range of backgrounds—than their narrow perceptions would suggest.
I saw race and ethnicity menus that couldn’t accommodate people of multiple races. I saw simple sign-up forms that demanded to know users’ gender, and then offered only male and female options. I saw college application forms that assumed an applicant’s parents lived together at a single address.
Forms inherently put us in a vulnerable position, because each request for information forces us to define ourselves: I am this, I am not that. And they force us to reveal ourselves: This happened to me.
The removal of American Indians from Facebook is part of a larger history of removing, excluding, and exiling American Indians from public life, and public space. . . . This policy supports a narrative that masks centuries of occupation and erasure of Native culture.
Plus, names are just plain weird. They reflect an endless variety of cultures, traditions, experiences, and identities. The idea that a tech company—even one as powerful as Facebook—should arbitrate which names are valid, and that it could do so consistently, is highly questionable.
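It is worth spelling out why consistent name arbitration is so hard. The sketch below is my own illustration, not anything Facebook actually runs: a naive “real name” pattern of the kind that sounds reasonable at first, tested against real naming patterns it wrongly rejects.

```python
# Illustration (mine, not from the book) of why automated "real name" rules fail:
# any pattern simple enough to enforce consistently rejects plenty of real names.
import re

# A naive rule: exactly two capitalized, ASCII-only words.
NAIVE_REAL_NAME = re.compile(r"^[A-Z][a-z]+ [A-Z][a-z]+$")

real_naming_patterns = [
    "Amélie Lamont",    # accented characters (the designer quoted earlier)
    "Dana Lone Hill",   # multi-part surname, a pattern common in many Native names
    "Madonna",          # mononym
    "Nguyễn Ngọc Anh",  # letters outside the ASCII range
]

for name in real_naming_patterns:
    verdict = "accepted" if NAIVE_REAL_NAME.match(name) else "rejected"
    print(f"{name}: {verdict}")
# Every one of these is rejected, even though each reflects a real naming tradition.
```

Loosen the rule and it starts waving through obvious junk; tighten it and it excludes more real people. There is no setting at which a pattern like this arbitrates names both consistently and fairly.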
Then there’s the problem of binaries. Most forms still use two options for gender: male or female. But that label doesn’t work for a lot of people—more than you might think, if you don’t happen to know anyone who is trans or nonbinary. According to a 2016 report from the Williams Institute at the UCLA School of Law, which analyzed both federal and state-level data from 2014, about 1.4 million American adults now identify as transgender—around 0.6 percent of the population.
Who the hell needs your title in the first place? No one asks me if I’m married when I buy a sweater at the mall. But as soon as I head online, it seems like everyone needs to know whether I want things shipped to Miss or Mrs. The post office doesn’t require this.
This is why I think of noninclusive forms as parallel to microaggressions: the daily little snubs and slights that marginalized groups face in the world—like when strangers reach out and touch a black woman’s hair.
When systems don’t allow users to express their identities, companies end up with data that doesn’t reflect the reality of their users.
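A concrete way to read these passages is as a checklist for how identity fields get modeled in the first place. The sketch below is a hypothetical illustration, not a design the book prescribes: title and gender are optional, the gender list is not a binary, race and ethnicity allow multiple selections, and there is a free-text escape hatch, so the stored data can reflect what people actually tell you.

```python
# Hypothetical sketch of identity fields that do not force users into a binary
# or demand information the service does not need. Illustration only.
from dataclasses import dataclass, field
from typing import Optional

GENDER_OPTIONS = [
    "woman",
    "man",
    "non-binary",
    "prefer to self-describe",
    "prefer not to say",
]

@dataclass
class Profile:
    email: str                               # the only field actually required
    title: Optional[str] = None              # optional; not needed to ship a sweater
    gender: Optional[str] = None             # optional; only asked with a stated reason
    gender_self_description: str = ""        # free text for "prefer to self-describe"
    races_ethnicities: list[str] = field(default_factory=list)  # multi-select, not one box

def validation_errors(profile: Profile) -> list[str]:
    """Leaving identity fields blank is never an error; only malformed input is."""
    errors = []
    if "@" not in profile.email:
        errors.append("Please enter a valid email address.")
    if profile.gender is not None and profile.gender not in GENDER_OPTIONS:
        errors.append("Unrecognized gender option.")
    return errors
```

The details matter less than the structure: nothing about a person’s identity is mandatory, nothing is squeezed into a single checkbox, and anyone the option list misses can still describe themselves.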
In another example, a woman tried to email an airline a copy of her mother’s death certificate. When she typed the word “death,” the operating system, Apple iOS, kept suggesting that she add a cute little skull emoji to her message.
One day in September 2016, Sally Rooney felt her phone buzz. She looked at the screen and saw a notification from Tumblr: “Beep beep! #neo-nazis is here!” it read.
When the design and product team at Medium saw Kevin’s screenshot, they cringed too—and immediately went through their copy strings, removing the ones that might feel insensitive or inappropriate in some contexts. Because, it turns out, one of the key components of having a great personality is knowing when to express it, and when to hold back. That’s a skill most humans learn as they grow up and navigate social situations—but, sadly, seem to forget as soon as they’re tasked with making a dumb machine “sound human.”
And what she told us surprised me: MailChimp was, slowly but surely, pulling back from its punchy, jokey voice. When I asked what had gone wrong, Kiefer Lee told me there were too many things to count.
No thanks, I hate saving money. No thanks, this deal is just too good for me. I’m not interested in being awesome. I’d rather stay uninformed. Ew, right? Do you want to do business with a company that talks to you this way? I guess you could say these blamey, shamey messages are more “human” than a simple yes/no—but only if the human you’re imagining is the jerkiest jerk you ever dated, the kind of person who was happy to hurt your feelings and kill your self-esteem to get their way. That’s why I started calling these opt-in offers “marketing negging.”
BLINDED BY DELIGHT
But time and again, the conversation turned away from how to make the program useful, and toward that word I find so empty: “delight.”
Designers, like radiologists, pride themselves on attention to detail: they obsess over typographic elements like kerning and line spacing; they spend hours debating between nearly identical colors; they notice immediately when an object is a few pixels off center. Of course they do; those are the sorts of things they’ve been trained to notice. They’d be laughed out of design school and off the internet otherwise. But if they’ve never been asked to notice how their work might fail people—or been made rudely aware of the problem after it has—they’re just as blind as the rest of us.
But of course, Twitter doesn’t really care about celebrating your special day. It cares about gathering your birth date, so that your user profile is more valuable to advertisers.
Facebook has an internal metric that it uses alongside the typical DAUs (daily active users) and MAUs (monthly active users). It calls this metric CAUs, for “cares about us.” CAUs gauge precisely what they sound like: how much users believe that Facebook cares about them. Tech-industry insider publication The Information reported in early 2016 that nudging CAUs upward had become an obsession for Facebook leadership.
When Savage says the “beep beep!” message “performs,” he means that the notification gets a lot of people to open up Tumblr—a boon for a company invested in DAUs and MAUs. And for most tech companies, that’s all that matters. Questions like, “is it ethical?” or “is it appropriate?” simply aren’t part of the equation, because ROI always wins out.
Back in 2010, technologist Tim Jones, then of the Electronic Frontier Foundation, wrote that he had asked his Twitter followers to help him come up with a name for this method of using “deliberately confusing jargon and user-interfaces” to “trick your users into sharing more info about themselves than they really want to.” One of the most popular suggestions was Zuckering.
Meanwhile, back at the companies behind those technologies, a world of mostly white men has wholeheartedly embraced the idea that they’re truly smarter than the rest of us. That they do know best. That they deserve to make choices on our behalf. That there’s nothing wrong with watching us in “god view,” because they are, in fact, gods. And there’s no one around to tell them otherwise, because anyone who’s different either shuts up or gets pushed out.
And according to a 2016 investigation by ProPublica, their results are typical: only about six out of ten defendants who COMPAS predicts will commit a future crime actually go on to do so. That figure is roughly the same across races. But the way the software is wrong is telling: Black defendants are twice as likely to be falsely flagged as high-risk. And white defendants are nearly twice as likely to be falsely flagged as low-risk.
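The asymmetry ProPublica found is easiest to see as two error rates computed separately for each group. The sketch below uses invented records purely for illustration; it is not ProPublica’s data or methodology, just a demonstration of how a score can look equally accurate for two groups while its mistakes run in opposite directions.

```python
# Illustration with made-up records (not ProPublica's data): two groups scored with
# identical overall accuracy, but whose errors run in opposite directions.

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("A", True,  True), ("A", True,  False), ("A", True,  False), ("A", False, False),
    ("B", True,  True), ("B", False, True),  ("B", False, True),  ("B", False, False),
]

def error_rates(group):
    rows = [r for r in records if r[0] == group]
    non_reoffenders = [r for r in rows if not r[2]]
    reoffenders = [r for r in rows if r[2]]
    fpr = sum(1 for r in non_reoffenders if r[1]) / len(non_reoffenders)   # flagged, didn't reoffend
    fnr = sum(1 for r in reoffenders if not r[1]) / len(reoffenders)       # cleared, did reoffend
    return fpr, fnr

for group in ("A", "B"):
    fpr, fnr = error_rates(group)
    print(f"group {group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
# Both groups are scored "correctly" half the time here, but group A's errors are
# false alarms while group B's errors are missed risks.
```

This is the shape of the COMPAS finding: similar headline accuracy across races, with the harm of being wrongly labeled high-risk falling disproportionately on black defendants.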

