Marina Gorbis's Blog
May 20, 2014
Wheat and Rice Cultivation Have Deep Cultural Impacts
When research subjects were asked to draw diagrams depicting their social networks, people from rice-growing and wheat-growing regions tended to respond differently: Those from rice areas drew themselves as smaller (in relation to their friends) than wheat-area people did, according to research conducted by Thomas Talhelm of the University of Virginia and reported in The Atlantic. The finding suggests that rice farming, which requires extensive social coordination, creates a culture of collectivism. Talhelm surveyed 1,162 university students in China, where rice is grown mostly in the south and wheat is raised and eaten in the north.



Most Work Conflicts Aren’t Due to Personality
Conflict happens everywhere, including in the workplace. When it does, it’s tempting to blame it on personalities. But more often than not, the real underlying cause of workplace strife is the situation itself, rather than the people involved. So, why do we automatically blame our coworkers? Chalk it up to psychology and organizational politics, which cause us to oversimplify and to draw incorrect or incomplete conclusions.
There’s a good reason why we’re inclined to jump to conclusions based on limited information. Most of us are, by nature, “cognitive misers,” a term coined by social psychologists Susan Fiske and Shelley Taylor to describe how people have a tendency to preserve cognitive resources and allocate them only to high-priority matters. And the limited supply of cognitive resources we all have is spread ever-thinner as demands on our time and attention increase.
As human beings evolved, our survival depended on being able to quickly identify and differentiate friend from foe, which meant making rapid judgments about the character and intentions of other people or tribes. Focusing on people rather than situations is faster and simpler, and focusing on a few attributes of people, rather than on their complicated entirety, is an additional temptation.
Stereotypes are shortcuts that preserve cognitive resources and enable faster interpretations, albeit ones that may be inaccurate, unfair, and harmful. While few people would feel comfortable openly describing one another based on racial, ethnic, or gender stereotypes, most people have no reservations about explaining others’ behavior with a personality typology like the Myers-Briggs Type Indicator (“She’s such an ‘INTJ’”), the Enneagram, or the Color Code (“He’s such an 8: Challenger”).
Personality or style typologies like the Myers-Briggs, the Enneagram, the DISC Assessment, the Herrmann Brain Dominance Instrument, the Thomas-Kilmann Conflict Mode Instrument, and others have been criticized by academic psychologists for their unproven or debatable reliability and validity. Yet, according to the Association of Test Publishers, the Society for Human Resource Management, and the publisher of the Myers-Briggs, these assessments are still administered millions of times per year for personnel selection, executive coaching, team building, and conflict resolution. As Annie Murphy Paul argues in her insightful book, The Cult of Personality, these horoscope-like personality classifications at best capture only a small amount of variance in behavior, and in combination explain only tangential aspects of adversarial dynamics in the workplace. Yet they’re frequently relied upon for the purposes of conflict resolution. An ENTP and an ISTJ might have a hard time working together. Then again, so might a Capricorn and a Sagittarius. So might any of us.
The real reasons for conflict are a lot harder to raise — and resolve — because they are likely to be complex, nuanced, and politically sensitive. For example, people’s interests may truly be opposed; roles and levels of authority may not be correctly defined or delineated; there may be real incentives to compete rather than to collaborate; and there may be little to no accountability or transparency about what people do or say.
When two coworkers create a safe but imaginary set of explanations for their conflict (“My coworker is a micromanager,” or “My coworker doesn’t care whether errors are corrected”), neither of them has to challenge or incur the wrath of others in the organization. It’s much easier for them to imagine that they’ll work better together if they simply understand each other’s personality (or personality type) than it is to realize that they would have to come together to, for example, ask their boss to stop pitting them against one another, or ask HR to match its rhetoric about collaboration with real incentives to work together. Or perhaps the conflict is due to someone on the team simply not doing his or her job, in which case treating personality as the cause of conflict is a dangerous distraction from the real issue. Personality typologies may even provide rationalizations, as when someone says, “I’m a spontaneous type, and that’s why I have a tough time with deadlines.” Spontaneous or not, they still have to do their work well and on time if they want to minimize conflict with their colleagues or customers.
Focusing too much on either hypothetical or irrelevant causes of conflict may be easy and fun in the short term, but it creates the risk over the long term that the underlying causes of conflict will never be addressed or fixed.
So what’s the right approach to resolving conflicts at work?
First, look at the situational dynamics that are causing or worsening conflict, which are likely to be complex and multifaceted. Consider how conflict resolution might necessitate the involvement, support, and commitment of other individuals or teams in the organization. For example, if roles are poorly defined, a boss might need to clarify who is responsible for what. If incentives reward individual rather than team performance, Human Resources can be called in to help better align incentives with organizational goals.
Then, think about how both parties might have to take risks to change the status quo: systems, roles, processes, incentives or levels of authority. To do this, ask and discuss the question: “If it weren’t the two of us in these roles, what conflict might be expected of any two people in these roles?” For example, if I’m a trader and you’re in risk management, there is a fundamental difference in our perspectives and priorities. Let’s talk about how to optimize the competing goals of profits versus safety, and risk versus return, instead of first talking about your conservative, data-driven approach to decision making and contrasting it to my more risk-seeking intuitive style.
Finally, if you or others feel you must use personality testing as part of conflict resolution, consider using non-categorical, well-validated personality assessments such as the Hogan Personality Inventory or the IPIP-NEO assessment of the “Big Five” personality dimensions (which can be taken for free here). These tests, which have ample peer-reviewed psychometric evidence to support their reliability and validity, explain variance in behavior better than categorical assessments like the Myers-Briggs, and therefore can better explain why conflicts have unfolded the way they have. And unlike the Myers-Briggs, which provides an “I’m OK, you’re OK”-type report, the Hogan Personality Inventory and the NEO are likely to identify some hard-hitting development themes for almost anyone brave enough to take them, for example, telling you that you are set in your ways, quick to anger, and apt to take criticism too personally. While often hard to take, this is precisely the kind of feedback that can build self-awareness and mutual awareness between two or more people engaged in a conflict.
As a colleague of mine likes to say, “treatment without diagnosis is malpractice.” Treatment with superficial or inaccurate diagnostic categories can be just as bad. To solve conflict, you need to find, diagnose and address the real causes and effects — not imaginary ones.



May 19, 2014
An Introduction to Data-Driven Decisions for Managers Who Don’t Like Math
Not a week goes by without us publishing something here at HBR about the value of data in business. Big data, small data, internal, external, experimental, observational — everywhere we look, information is being captured, quantified, and used to make business decisions.
Not everyone needs to become a quant. But it is worth brushing up on the basics of quantitative analysis, so as to understand and improve the use of data in your business. We’ve created a reading list of the best HBR articles on the subject to get you started.
Why data matters
Companies are vacuuming up data to make better decisions about everything from product development and advertising to hiring. In their 2012 feature on big data, Andrew McAfee and Erik Brynjolfsson describe the opportunity and report that “companies in the top third of their industry in the use of data-driven decision making were, on average, 5% more productive and 6% more profitable than their competitors” even after accounting for several confounding factors.
This shouldn’t come as a surprise, argues McAfee in a pair of recent posts. Data and algorithms have a tendency to outperform human intuition in a wide variety of circumstances.
Picking the right metrics
“There is a difference between numbers and numbers that matter,” write Jeff Bladt and Bob Filbin in a post from last year. One of the most important steps in beginning to make decisions with data is to pick the right metrics. Good metrics “are consistent, cheap, and quick to collect.” But most importantly, they must capture something your business cares about.
The difference between analytics and experiments
Data can come from all manner of sources, including customer surveys, business intelligence software, and third-party research. One of the most important distinctions to make is between analytics and experiments: the former provides data on what is happening in a business, while the latter actively tests different approaches with different consumer or employee segments and measures the difference in response. For more on what analytics can be used for, read Thomas Davenport’s 2013 HBR article Analytics 3.0. For more on running successful experiments, try these two articles.
Ask the right questions of data
Though statistical analysis will be left to quantitative analysts, managers have a critical role to play in the beginning and end of the process, framing the question and analyzing the results. In the 2013 article Keep Up with Your Quants, Thomas Davenport lists six questions that managers should ask to push back on their analysts’ conclusions:
1. What was the source of your data?
2. How well do the sample data represent the population?
3. Does your data distribution include outliers? How did they affect the results?
4. What assumptions are behind your analysis? Might certain conditions render your assumptions and your model invalid?
5. Why did you decide on that particular analytical approach? What alternatives did you consider?
6. How likely is it that the independent variables are actually causing the changes in the dependent variable? Might other analyses establish causality more clearly?
The article offers a primer on how to frame data questions as well. For a shorter walk-through on how to think like a data scientist, try this post on applying very basic statistical reasoning to the everyday example of meetings.
Correlation vs. cause-and-effect
The phrase “correlation is not causation” is commonplace, but figuring out just what it implies in the business context isn’t so easy. When is it reasonable to act on the basis of a correlation discovered in a company’s data?
In this post, Thomas Redman examines causal reasoning in the context of his own diet, to give a sense of how cause-and-effect works. And BCG’s David Ritter offers a framework for deciding when correlation is enough to act on here:
The more frequent the correlation, and the lower the risk of being wrong, the more it makes sense to act based on that correlation.
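A toy simulation makes the distinction concrete (a hypothetical illustration of ours, not drawn from either piece): two metrics can correlate strongly simply because both are driven by a third factor, in which case acting on one to move the other will misfire.

    # Hypothetical example: ice cream and sunscreen sales correlate strongly,
    # but only because both track store traffic (the common cause).
    import random

    random.seed(42)
    store_traffic = [random.gauss(1000, 200) for _ in range(500)]
    ice_cream = [0.05 * t + random.gauss(0, 2) for t in store_traffic]
    sunscreen = [0.03 * t + random.gauss(0, 2) for t in store_traffic]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # Strong correlation (roughly 0.93), yet promoting ice cream
    # won't sell a single extra bottle of sunscreen.
    print(round(pearson(ice_cream, sunscreen), 3))

Ritter's frequency and risk tests are, in effect, asking whether a hidden driver like this could pull the two metrics apart at exactly the wrong moment.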
Know the basics of data visualization
Rule #1: No more crap circles. To decide how to best display your data, ask these five questions. Make sure to browse some of the best infographics of all time. And before you present your data to the board, consult this series on persuading with data. (Don’t forget to tell a good story.)
Learn statistics
A couple of years ago, Davenport declared in HBR that data scientists have the sexiest job of the 21st century. His advice to the rest of us? If you don’t have a passing understanding of introductory statistics, it might be worth a refresher.
That doesn’t have to mean going back to school, as Nate Silver advises in an interview with HBR. “The best training is almost always going to be hands on training,” he says. “Getting your hands dirty with the data set is, I think, far and away better than spending too much time doing reading and so forth.”



When “Scratch Your Own Itch” Is Dangerous Advice for Entrepreneurs
“Scratch your own itch” is one of the most influential aphorisms in entrepreneurship. It lies behind successful product companies like Apple, Dropbox, and Kickstarter, but it can also lead entrepreneurs predictably to failure.
This approach to entrepreneurship increases your market knowledge: as a potential user, you know the problem, how you’re currently trying to solve it, and what dimensions of performance matter. And you can use this knowledge to avoid much of the market risk in building a new product. But scratching your own itch will lead you astray if you are a high-performance consumer whose problem stems from existing products not performing well enough – in other words, if the itch results from a performance gap.
Building a company around a better-performing product means competing head-on with a powerful incumbent that has the information, resources, and motivation to kill your business. Clayton Christensen first documented this phenomenon in his study of the disk drive industry, and found that new companies targeting existing customers succeeded 6% of the time, while new companies that targeted non-consumers succeeded 37% of the time. Even with a technological head start, winning the fight for incumbents’ most profitable customers is nearly impossible.
An itch can result from two very different sources: existing products lacking the performance you need, or a lack of products to solve your problem. In the former case, you already buy products and will pay more if they perform better along well-defined dimensions. In the latter, products don’t exist at all, or you lack access to very expensive, centralized products and so make do with a cobbled-together solution or nothing at all. It’s the difference between needing another feature from your Salesforce-based CRM system and spending hours and hours tracking information in Excel because you can’t justify the expense of implementing Salesforce in the first place.
Consider, for example, two successful companies that at first seem to result from performance-gaps: Dropbox and Oculus VR.
Dropbox began with the difficulty of backing up and sharing important documents, and developed a system that was easier to use than carrying around a USB stick and less expensive than paid services like Carbonite. Dropbox didn’t just set out to offer superior performance; it targeted an entirely new customer set that wasn’t using existing solutions, with a business model that would undermine the incumbents’ most profitable customers. Dropbox’s business model made head-to-head competition with incumbents unlikely, since the Carbonites of the world sensed that they would earn less from their best customers if they offered a free service.
Oculus created a virtual reality headset designed to be a hardware platform, primarily focused on gaming – and recently sold to Facebook for $2.3 billion. Although envisioned as a platform that would enable any kind of virtual reality application, Oculus was created with hardcore gamers in mind. Unlike Dropbox, Oculus’s first customers would have been the most profitable customers of existing game platforms, giving incumbents like Xbox and PlayStation a strong incentive to emulate Oculus’s technology to retain their best customers and make them even more profitable.
Oculus, of course, was wildly successful, but only because Facebook felt that the technology, despite being developed with existing customers in mind, would appeal to non-gamers for the purposes of messaging and social networking. Facebook bought Oculus to rescue it from a flawed strategy by shifting its focus from high-end customers to non-consumers.
Oculus’s founder set out to scratch his own itch by creating a new gaming platform, one that targeted a customer set of hardcore gamers who were already served by incumbent firms. Dropbox’s founder scratched his own itch by creating a product aimed at a new set of customers, who weren’t being served by incumbents. The difference matters greatly in terms of a company’s competitive position.
Before founding a business around a problem you face, first understand whether that problem is a performance gap or a product void, by asking the following questions:
How am I currently solving this problem?
Do other products exist that solve this problem?
Do they provide good enough performance, or is there still a performance gap?
Are they too expensive to use? Are they centralized and do they require special expertise?
Would this product make any incumbent’s existing customers more profitable?
Ultimately, if your product would make an incumbent’s best customers more profitable, you should steer clear: Facebook won’t always be there to bail you out.



A Simple Tool for Making Better Forecasts
One of the most basic keys to good decision-making is accurate forecasting of the future. In order to bring about the best outcomes, a company must correctly anticipate the most likely future states of the world. Yet despite its importance, companies not only routinely make basic forecasting mistakes, they also shoot themselves in the foot by applying procedures that make accurate predictions harder to achieve.
The future, to state the obvious, is uncertain. We may want to know precisely what the future will hold, but we realize that the best we can settle for is having some idea of the range of possible outcomes and how likely these outcomes are. Yet most companies seem to ignore this fact and ask employees to provide point predictions of what will happen—the exact price of a stock, the precise level of growth of a country’s GDP next year, or the estimated return, to the dollar, on an investment.
These precise, single-value estimates are poor decision aids. Suppose the forecaster’s best guess is that a project will be completed within a year, but the second most likely outcome is that, if a judge denies a zoning appeal, the project will take more than two years to complete. No single point prediction can provide the decision maker with the essential information about what to prepare for.
One way to counteract this problem is to ask for range forecasts, or confidence intervals. These ranges consist of two points, representing the reasonable “best case” and “worst case” scenarios. Range forecasts are more useful than point predictions, but they carry a twin risk. On one hand, a range can be so wide, spanning everything from total catastrophe to glorious triumph, that it is not very informative. On the other hand, and even more often, the range is drawn too narrowly and misses the true value. Forecasters often struggle with this accuracy-informativeness tradeoff, and attempts to balance the two criteria typically result in overconfident forecasts. Research on these types of forecasts finds that 90% confidence intervals, which by definition should hit the mark 9 out of 10 times, tend to include the correct answer less than 50% of the time.
In our research, we looked for a forecasting approach that could provide both accuracy and informativeness: one that protects the forecaster from the known traps of overconfidence and biased forecasting, and provides an informative forecast that includes all plausible future scenarios as well as an assessment of how likely each one of them is. We have developed a method called SPIES (Subjective Probability Interval EStimates), which computes a range forecast from a series of probability estimates, rather than from two point predictions.
The rationale for calculating range forecasts this way is based on the finding that, while people tend to be overconfident in forecasting confidence intervals, they are much more accurate in evaluating other people’s confidence intervals and estimating the likelihood that a particular forecast will be accurate. So the SPIES method divides the entire range into intervals, or bins, and asks the forecaster to consider all of these bins and estimate the likelihood that each one of them will include the true value. From these likelihood estimates, SPIES can estimate a range forecast of any confidence level the decision maker prefers.
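To make the mechanics concrete, here is a minimal sketch of that calculation in Python (our hypothetical illustration, with made-up bins and probabilities; the interactive tool that accompanied the original post may differ in its details):

    # A sketch of the SPIES calculation (hypothetical bins and numbers).
    def spies_interval(bin_edges, bin_probs, confidence=0.90):
        # bin_edges: sorted list of n+1 numbers defining n bins.
        # bin_probs: the forecaster's subjective probability for each bin
        # (normalized below, so the estimates need not sum exactly to 1).
        total = sum(bin_probs)
        probs = [p / total for p in bin_probs]
        tail = (1.0 - confidence) / 2.0  # probability mass cut from each side

        def quantile(q):
            # Walk the cumulative distribution, interpolating within a bin.
            cum = 0.0
            for lo, hi, p in zip(bin_edges, bin_edges[1:], probs):
                if p > 0 and cum + p >= q:
                    return lo + (q - cum) / p * (hi - lo)
                cum += p
            return bin_edges[-1]

        return quantile(tail), quantile(1.0 - tail)

    # Example: tomorrow's high temperature, in 5-degree bins from 40 to 75.
    edges = [40, 45, 50, 55, 60, 65, 70, 75]
    probs = [0.05, 0.10, 0.25, 0.30, 0.20, 0.07, 0.03]
    print(spies_interval(edges, probs, confidence=0.90))  # (45.0, ~68.6)

Note that the same set of bin estimates can yield an 80%, 90%, or 99% interval, so the confidence level becomes a choice the decision maker makes rather than a constraint imposed on the forecaster.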
The SPIES method provides a number of distinct advantages for forecasters and decision makers. First, it simply produces better range forecasts. Our studies consistently show that forecasts made using SPIES hit the correct answer more frequently than other forecasting methods. For example, in one study, participants used both confidence intervals and SPIES to estimate temperatures. While their 90% confidence intervals included the correct answer about 30% of the time, the hit rate of intervals produced by the SPIES method was just shy of 74%. Another study included a quiz on the dates when various historical events occurred. Participants who used 90% confidence intervals answered 54% of the questions correctly; the confidence intervals SPIES produced, however, resulted in accurate estimates 77% of the time. By making the forecaster consider the entire range of possibilities, SPIES minimizes the chance that certain values or scenarios will be overlooked.

Second, this method gives the decision maker a sense of the full probability distribution, making it a rich, dynamic planning tool. The decision maker now knows the best- and worst-case scenarios, but also how likely each scenario is, and the likelihood that the estimated value, be it production rate, costs, or project completion time, will fall above or below an important threshold.
How can you use SPIES to your advantage? Consider a manager who must decide how many units to produce. A forecast made with SPIES estimates the odds of all possible scenarios and thus assists the manager in mitigating the different risks of over- or under-producing. Similarly, a contractor could use SPIES to forecast the likelihood of finishing current work in time to take on more work, as well as the likelihood of progress on current projects slowing down, making any additional work a strain on resources.
With so many strategic decisions for firms depending on predicting the future, forecasting accurately is enormously important. While traditional forecasting methods tend to produce poor results, we are happy to report real progress helping people make better forecasts. The SPIES method represents a big step forward, integrating insights from the latest research results on the psychology of forecasting. Managers who receive richer and more unbiased information make better decisions, and SPIES can provide it.



Mixing Business and Social Good Is Not a New Idea
More and more business schools are building programs to support students who want to do business while doing good. In addition, companies are rebuilding their recruiting processes to attract top talent that cares less about financial rewards and more about social impact. It seems like every day I see a new blog post from a thinker or business leader discussing how to innovate a process or build a whole new business model to make a profit and a difference. One key theme in these conversations, even if unspoken, is that this is a new, unexplored territory for both entrepreneurs and established businesses.
It may be less explored in our post-1980s, “greed is good” world, but it’s hardly new.
Business history is filled with stories of businesspeople who were determined to create a better life for their employees or their community. Less than 100 years ago, chocolatier Milton Hershey transferred his majority ownership of the Hershey Chocolate Company to the Milton Hershey School Trust, which provides education and homes to children from low-income families or with social needs. The Trust and the School still maintain the majority of voting shares in the company to this day. Instead of a company owning or funding a nonprofit foundation, Hershey built a model where the nonprofit essentially owns the company.
A few decades earlier, in Ireland, Arthur Guinness and the Guinness family began an ambitious employee welfare program unparalleled by any other business at the time. Even in the 1860s, the company provided pensions for employees and their widows and children, with free meals given every week to the sons of pensioners and widows in order to encourage the children to attend school. The company worked to improve the living conditions, and to fight the tuberculosis outbreaks, suffered by its employees, their families, and the famine-ravaged Dublin community. The company hired physician Sir John Lumsden and funded a two-month study of employees’ families before eventually building a program that included sanitary housing for employees, as well as nutrition and cooking classes, all aimed not just at improving employee health and productivity, but at influencing the entire city of Dublin to make the changes needed to improve its overall health.
If you look even further back, thousands of years earlier, you can find this urge to care for the poor built into ancient societies and religions. In the ancient Hebrew tradition, for example, farmers and landowners were encouraged to adhere to a principle called gleaning, outlined in the books of Deuteronomy and Leviticus in the Torah. According to the principle, farmers left the corners of their fields unharvested in order to provide for the poor, widows, or wanderers, who could come and harvest the remains. While the specifics around gleaning changed with time and cultures, all variations promoted the idea that owning an asset also carried with it the responsibility to care for those without such means. Thus, built into the business model of harvesting was a system for providing for the needs of the community.
These are just a few of the historical examples that give the lie to the notion that all a business exists to do is create a profit for its owners. There’s an old saying that “history doesn’t repeat itself but sometimes it rhymes.” Today’s upsurge of enthusiasm about creating shared value – not just shareholder value – is a great example of this phenomenon. Let’s not forget our history of kindness again.



The Value of Living at 888 Oak Street
In North American neighborhoods where more than 18% of the population is ethnically Chinese, homes with address numbers ending in 8 sell at a 2.5% premium, and those ending in 4 sell at a 2.2% discount, says a team led by Nicole M. Fortin of the University of British Columbia. The team, which studied areas around Vancouver, British Columbia, notes that 8 is considered auspicious because the word for it in Mandarin and Cantonese sounds like the word for prosperity; the Chinese word for 4 sounds like the word for death. It’s unclear whether the lucky-number effect extends to second-generation Chinese.



The Human Element in Digital Prototyping
Given that few products and services these days exist without a digital component of some sort, companies are increasingly being required to do live prototyping of a digital solution. There’s a common perception that product development for digital solutions is easier than it is for physical ones. While this can be true, we’ve seen many digital solutions fail because the product development process is too far removed from the user and lacks a human touch.
It’s instructive to understand the inherent advantages in product development for digital solutions. Several factors drive the perception that it’s easier than it is for physical ones, among them:
User expectations. As landing page and vaporware testing has gone mainstream and the notion of “always in beta” has become part of some companies’ brand equity, users have become very accepting of online solutions that are “fake” or still very rough around the edges.
Ease of testing. As the cost of operating online (e.g., development, bandwidth, fulfillment) continues to fall, companies are able to launch solutions and conduct testing (A/B testing, cohort analysis, funnel analysis, etc.) on the cheap, and to do so with increasing speed.
Data on demand. A whole suite of online analysis and behavior measurement tools has emerged in recent years (Google Analytics, Mixpanel, etc.) to provide increasingly sophisticated intelligence capabilities to those designing digital solutions, allowing for better informed decision-making.
But digital prototyping also carries several risks. It lacks personal interaction and can cause those designing new solutions to underestimate the value of understanding the underlying human psychology and emotion that shape online behavior. Meanwhile, the ability to measure and track every click and scroll can make the product development process seem deceptively linear and mechanistic. Decisions that should be made based on a combination of intuition, human empathy, and behavioral data end up being made based solely on data, and often transactional data (sign-ups, referrals, payments) at that. A/B testing and its counterparts are fine for tweaks, especially when informed by good design sense. But for bigger adaptations and new products, we can’t expect our ideas to resonate perfectly on the first iteration, so we need deeper insight to understand the “why” behind the data.
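To be clear about what that transactional data does and does not tell you, consider a minimal sketch of the standard significance check behind a conversion A/B test (a generic textbook calculation with hypothetical numbers, not a tool from any particular vendor). It can say that variant B beat variant A; it cannot say why users behaved differently.

    # A generic two-proportion z-test on conversion rates.
    from math import erf, sqrt

    def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
        p_a = conversions_a / visitors_a
        p_b = conversions_b / visitors_b
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
        return z, p_value

    # Variant B converts at 5.8% vs. A's 5.0%, with 10,000 visitors each.
    z, p = ab_test(500, 10_000, 580, 10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # significant, yet silent on the "why"

Everything that follows is about recovering that missing “why.”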
If you’re designing and prototyping digital services and experiences, you’ll want to avoid these pitfalls. Here’s how:
Start with options. Evaluating a single solution in isolation is difficult. Imagine you’re a financial data company trying to develop an online community for traders. What might motivate these users to collaborate and contribute to a community? It might be social status. It might be a shared challenge. It might be altruism. If you launch solutions that support only one of these motivations, you can easily measure whether the solution has been adopted, but the underlying “why” is much harder to identify. If, on the other hand, you launch solutions that support each of these motivations, you can better triangulate the true motivation for collaboration (probably some combination of the motivations you identified). Uptake based on the value proposition of Option A, drop-off based on the interaction flow of Option B, and high conversion based on the revenue model of Option C can point you to the optimal solution. An options-based approach will provide you with a basket of signals to drive a richer, deeper interpretation.
Be thoughtful about sequencing. Users are only willing to accept so many failures before they begin to question a solution’s trustworthiness (not to mention that some failures are more trust-eroding than others). Suppose the same trading platform company is building solutions for the “social status” motivation. Many solutions might address it: an influence score, voting buttons, or the ability to re-market ideas. The influence score would probably be the wrong solution to start with, because a score is very difficult to take away from users (especially your power users, who are presumably the ones earning the highest scores) if it turns out that social status is not what motivates collaboration. While A/B testing treats solutions as interchangeable and disposable, the reality is that there are some changes users won’t forgive you for, even if you are one of those companies that is “always in beta.” The sequencing of potential solutions needs to be thoughtful and intentional to avoid completely disrupting the user experience.
Disaggregate the drivers. Behavior measurement tools often roll up a series of choices and behaviors into a single data point, which can be misleading. Suppose the financial data company is designing solutions for the “shared challenge” motivation. It could build a feature that organizes users into teams, giving them a better sense of who their collaborators should be and in turn spurring increased collaboration. On the other hand, it might be the case that better role definition is all that’s needed to catalyze collaboration. If the company illuminates the contributions that users are making (generator vs. builder vs. critic), users will begin to collaborate organically, absent a team structure. In that scenario, all that’s required to motivate collaboration is better role definition, not better role definition plus team structure. If the company jumps right to launching a team feature, it may craft the wrong solution for the motivation it’s attempting to address.
Go deeper with a subset of users. No matter how well you structure digital prototyping, it still happens at arm’s length from your users and a lot can get lost in translation. For this reason, it’s often a good practice to have live conversations with a subset of your users. This might consist of a more typical usability test (sitting side by side with your users as they click through the experience) or can happen through more lightweight means, like a Google Hangout or Skype interview. We call this process hybrid research. Look at the behavioral data in totality, then drill down and go deeper with those demonstrating extreme, anomalous or unexpected behavior. These users often amplify behavior that is happening in more subtle ways among more “regular” users.
We’re big believers in prototyping, whether it’s in a digital or analog context. However, the process must be structured to generate an understanding of the human psychology and emotions that cause users to behave the way they do, which can be challenging in a digital context. Don’t become too distanced from your users and keep in mind that these individuals have (in most cases) signed up to use your products as users, not as test subjects. Make the most of the benefits afforded by the internet and its myriad tools, but don’t allow it to crowd out empathy.



May 16, 2014
Does the New York Times Know How to Fire Someone?
As media observers explore every angle of Jill Abramson’s unceremonious sacking from the New York Times, one key Management 101 question is whether she was given a fair chance to address the management issues that, according to Times publisher Arthur Sulzberger, Jr., led him to dismiss her.
On the day after Abramson was fired, officials at the Times countered reports that she had angered Sulzberger with demands to increase her salary — which she alleged was lower than her male predecessor’s — reemphasizing that the real problems were her leadership style and newsroom management. Said Sulzberger in a letter to staff: “The reason — the only reason — for the decision was concerns I had about some aspects of Jill’s management of our newsroom, which I had previously made clear to her, both face-to-face and in my annual assessment.”
But how much time was Abramson given to address those concerns? She was only three years into her tenure as executive editor. By most accounts, she was an outstanding journalist and a talented editor with a strong sense of journalistic integrity and independence. Yet her tenure was far shorter than that of most Times executive editors (and her summary dismissal far more abrupt).
Being a leader of any organization “ain’t beanbag.” Hard decisions are made, sometimes quickly. The troops often complain about the general. Not every decision works out. Words giving orders or making decisions aren’t always warm and cuddly. This is pretty standard stuff in a big organization staffed by grown-ups.
So what were the issues with Abramson’s management style, in Sulzberger’s view, that justified terminating her? Why were they considered a firing offense? And, most importantly, was Abramson ever given the time and resources to address them? This is not the only question in this story, but it is a key one.
Some media reports have noted that Abramson had hired a consultant to improve her management style; when was that coaching started? Were there improvement milestones she failed to meet? When was Sulzberger’s annual assessment written, and what did it say? Ken Auletta at The New Yorker has published an email from Times CEO Mark Thompson sent on April 28, just a couple of weeks before Abramson was fired, in which he recounted praising her performance to a new recruit and said he hoped Abramson would stay on for years. What happened in those two weeks to change his mind?
In many organizations, top leaders have issues that emerge from 360-degree reviews or other systematic (and fair-minded) assessments. Often, boards or senior executives try to be clear about areas where a leader needs to improve. Assuming that leader has other very positive qualities (clearly so in Abramson’s case), he or she is then given a real opportunity over a reasonable period of time to address those issues, customarily with some guidance or assistance. And given that Abramson had worked for the paper for 14 years before her promotion to Editor, it would seem unlikely that her flaws – everyone has some – would be totally unknown to her bosses.
If the Times and Sulzberger want to make “management” the issue, then they have an obligation not to hide behind a vague sentence or phrase — or a non-disparagement agreement — but to explain what the problems were and whether Abramson was given a fair chance to correct them. At this point, with the expected media scrum — and with important issues very much in the public eye — Abramson shouldn’t be complicit in the silence.
This is a big story about an important event at a national institution: The firing of one of the most senior leaders in journalism. The Times would never let another major institution off the hook with such a cursory account of its reasoning.



Who’s Afraid of Data-Driven Management?
From a management perspective, making decisions based on data is a clear win. Yet it’s often difficult to adopt a data-informed culture. In every organization, there are teams and employees who embrace this transition, and those who undermine it. To convert your biggest data skeptics, the first step is to understand the psychology of their resistance.
A data insight without a subsequent action is like a key without someone to turn it: worthless. A good data scientist can identify which coworkers will use insights from data to open new doors for the business, and which will continue to rely on intuition. That’s because employees who act on data do so for one of two main reasons: to improve their perceived performance, or to improve their actual performance. From our perspective, there are four types of employees in any organization:
(1) Highly regarded, high performing
(2) Highly regarded, low performing
(3) Lowly regarded, high performing
(4) Lowly regarded, low performing
Their willingness to embrace data varies sharply across these four groups.
Now intuitively, you would think that the first group (high-high, your overachieving all-stars) would be the easy converts to a data-informed culture; of course they’ll want the best tools and analysis at their disposal. But in our experience, the high-highs are the most likely to be data skeptics. Quantifying their domain and performance offers little upside. They are already perceived as doing quality work; adding hard numbers can, at best, affirm this narrative, and at worst submarine the good thing they have going. There is a reasonable fear that the outputs used to measure their performance will not fully capture the true value of their contributions. Skepticism is especially strong in any workplace where attribution is difficult (think marketing and media).
But this group can be convinced: involve them early, give them a voice in creating the new metrics that will underpin the data-informed culture, and give them opportunities to push back. These efforts can make the data culture feel like their creation, not something forced upon them.
Your main challenge lies next down the list: the high-lows. These are your data antagonists. Coworkers love them, but deep down they always fear they will be found out. Their ideas are occasionally fantastic, but too often they are just shooting in the dark. When things go right, they are never exactly certain why (their instincts are just that good), and when things go wrong, they instinctively shift into ass-covering mode. Quantifying their work (on someone else’s terms, no less) has only downside. Swinging for the fences every at-bat is great, until the manager and fans learn to calculate (and value) on-base percentage. Then 30 home runs with a .150 OBP is no longer getting the job done.
There’s not a lot that can be done for this group. The malleable ones will eventually come around, but those stuck in their heuristic ways will undermine, and cavil at, the creeping in of a data-informed culture.
After this group, you have the low-highs. This group will be your biggest champion. Too long have they toiled on the lower reaches of the totem pole. Giving these overachieving, underappreciated employees the information and framework to make their work comparable — to allow their true value to be understood — provides only upside. Give this group early wins by focusing on tying their outputs to organizational success. They will love you for it, and they will help promote your cause. And senior management will be impressed.
This brings us to the last group, the low-lows. They aren’t going to fight data culture. Or embrace it. They’ll simply turn their heads 10 degrees and think: data? In general, low-lows either swim with the current, which means they’ll come around when coming around is the safe thing to do, or against the current, meaning they won’t be around long enough to matter.
Data-informed decision making, and the culture change inherent in it, doesn’t happen in a vacuum. Asking “what do the data say?” before acting is a disruptive act that displaces prior norms. There will be employees, like the low-highs, who welcome this kind of change, and those, like the high-lows, who subvert it. Understanding the psychology underlying these behaviors is the necessary first step toward pushing past intuition and silencing the data skeptics.


