Jeffrey Pfeffer's Blog
August 10, 2015
Everything we bash Donald Trump for is actually what we seek in leaders
In the spring of 2014, I turned in a book manuscript about leadership that, because of the turmoil within the publishing industry, will only be published next month. In the index for that book: entries for Donald Trump and Carly Fiorina.
I wish I could say I was prescient about the unfolding race for the Republican nomination, but I wasn’t even thinking about it. Instead, I was trying to address a topic that’s vitally important to individuals who want to thrive in today’s intensely competitive work world: the enormous disconnect between the leadership prescriptions regularly offered to an unsuspecting public by the vast leadership industry and what social science and everyday observation suggest is the best path to individual success. For the most part, real-world success comes from behaviors that are precisely the opposite of typical leadership prescriptions.
So, no, you don’t have to look to angry, disaffected voters to explain the Trump phenomenon. Trump actually embodies many of the leadership qualities that cause people to succeed—albeit they are pretty much the opposite of what leadership experts tout. Here are a few examples.
Trump puts his name on everything, including his buildings, and touts his success at every opportunity, behavior that contradicts both the common-sense belief that we prefer people who don’t self-promote and research that best-selling author Jim Collins published in Good to Great. Collins noted that the most successful companies were run by so-called “Level 5 leaders,” who combined fierce resolve with modesty and self-effacement. What gives?
Numerous studies show that narcissism, not modesty, and self-confident, even overconfident, self-presentation lead to leadership roles. This is partly because to be selected, you first need to be noticed. There is also the “mere exposure” effect: we prefer what feels familiar to us, and after endless repetitions of the name “Trump,” it certainly feels familiar. And even though we say we want people who don’t self-aggrandize, we secretly like confident, even overbearing, people because they give us confidence—emotions are contagious—and because they present themselves as winners. We all want to associate with success and pick those who seemingly know what they are doing.
Trump also takes liberties with the facts. No, he did not write the best-selling business book of all time, as he claimed. And some aspects of his business acumen and success are clearly exaggerated—after all, Trump-named casinos went into bankruptcy. No matter. Telling the truth is an overrated quality for leaders. Leaders lie with more frequency and skill than others. Some of the most revered and wealthiest people mastered the skill of presenting a less than veridical version of reality. Larry Ellison, like many people working in software, exaggerated the availability and features of products. And then there’s Steve Jobs. The phrase “reality distortion field” says a lot about Jobs’ fabulous ability to make things that weren’t true become true through his assertions of their truthfulness, a widely known process called the self-fulfilling prophecy.
And it’s not just Trump. Carly Fiorina exemplifies another trait I see among the most successful—not admitting to setbacks and putting a positive spin on every aspect of one’s career. Watching her, you wouldn’t know that she was forced out of her CEO job at Hewlett-Packard; presided over HP’s acquisition of Compaq, thereby cementing the company’s leadership position in a dying, low-margin business that is now being spun off; and orchestrated the layoffs of tens of thousands of employees. Accounts repeated often enough become taken as truth. And in any event, leaders will get enough criticism and second-guessing without doing it to themselves.
Starbucks CEO Howard Schultz’s recent call for servant leaders is well intentioned. But at a time when CEO salaries have soared to more than 300 times that of their companies’ average employees, there’s not too much servant leadership going on.
Why is there such a disconnect between prescriptions for what people should do and what really produces career success? Sociobiology and social psychology have recognized for decades that what is good for the individual is not necessarily what is good for the group, and vice versa. Group and individual success are not highly related. Case in point: Fiorina left HP with an enormous severance package when she was ousted, but the company’s stock price didn’t flourish, and neither did the thousands of employees laid off by her and, for that matter, by her successors.
Another piece of the puzzle: most leadership talks, books, and blogs describe aspirational qualities we wish our leaders possessed. So we tell stories about unique, heroic, unusual people and situations—not quite realizing that the very uniqueness probably makes such tales, even if they are true (and they are often not), a poor guide for coping with the world as it exists.
My recommendation? First, understand the social science that speaks to the qualities that make people successful, at least by some definitions: the economic penalties, particularly for men, from being too nice; research that shows that lying in everyday life is both common and mostly not sanctioned; and the evidence that narcissistic leaders in Silicon Valley earn more money and remain longer in their CEO roles. The only way to change the world is by first understanding how it really works.
And second, we should take a hard look at our own behavior and how we are complicit in producing leaders of precisely the type we say we don’t want. It is only when we stop making excuses for what Claremont business professor Jean Lipman-Blumen has appropriately called “toxic leaders” that things will change.
In the meantime, my prediction: Donald Trump is going to dominate the polls and the nomination contest a lot longer than most people expect. Because he has many of the leadership characteristics we say we abhor even as we reward them.
(This post was originally published on Fortune on August 7, 2015)
July 31, 2015
The case against the ‘gig economy’
When I first met Andrew Berlin at a Stanford executive program in the early 1990s, Berlin Packaging was a small enterprise doing maybe $40 million in sales in what was then, and still is, a very tough, almost commodity-like industry. Today, the packaging company brings in close to $1 billion in annual revenues, Andrew Berlin has an ownership interest in the Chicago Cubs and a World Series ring from his past ownership interest in the White Sox, and he runs a company growing earnings at a low double-digit rate that achieves good margins.
Andrew Berlin has made a fortune by building a competitive advantage through his company’s culture and his people. Having recently joined Berlin’s board, I could anticipate Andrew’s answer to my question, “why have employees?” First of all, he said with a chuckle, “I like my employees.”
More fundamentally, Berlin said that his company is special because he has employees who are interested in both beating the competition and delighting others. “Contractors would not have the same level of commitment,” he added. If organizational culture is a key to competitive success, you can’t just turn that over to someone else.
Ed Ossie, COO of insurance software company Majesco, also believes in the importance of culture and people for business success. When I asked him that same question—why have employees in a business where outsourcing and contracting is common—his reply focused on the connections between people and profits. “Talented, trained employees treated well by their employers will treat customers well and deliver exceptional service,” Ossie said. “Customers will remember our employees delivering exceptional service and will find reasons to extend the commercial relationship with our company.”
Such perspectives are exceedingly rare as the U.S. and other countries confront what has been called the “gig economy,” and as notions of what it means to have a job change in profound ways.
Freelance Nation: Population unclear
Nothing about the new employment arrangements lacks controversy, including how significant such contingent work is. A recently published piece in The Wall Street Journal noted that official U.S. government data show no dramatic changes in either the percentage of Americans who say they are self-employed (only about 6.5%) or who say they are working multiple jobs.
A response to the WSJ piece noted that estimates of the proportion of people in several major cities who received 1099 (miscellaneous income) forms ranged from 10% to 20%, and that the proportion of people filing Schedule C (self-employment income) forms with their income tax returns had increased in the recent past. These trends were taken to imply a rise in self-employment or at least an increase in the percentage of people receiving self-employment income. A 2015 survey of over 1,000 American workers noted that about 60% received 25% or more of their income from freelance work. That same report maintained that about 34% of workers had freelance jobs. And financial software company Intuit estimates that this proportion will grow to some 40% by 2020.
The U.S. Government Accountability Office (GAO) attempted a study in 2000 of contingent work arrangements in response to a request from Congress. Among its conclusions: the percentage of the U.S. workforce that could be characterized as contingent or on-demand ranged from 5% to 30%, depending on the precise definitions used. No wonder there is so much disagreement and confusion!
The Gallup organization has studied what it calls the payroll-to-population ratio (P2P), the percentage of the adult population that reported being employed full-time. For 2014, Gallup reported that this ratio had remained unchanged worldwide at about 26%. And lest anyone think that freelancing is a sign of a robust economy, Gallup’s report noted that “the countries with the highest P2P rates tend to be some of the wealthiest,” while countries with lots of self-employment and few people working full time included places like Mali, Niger, Liberia, South Sudan, and Sierra Leone.
What is not in dispute is that the proportion of contractors, freelancers, and part-time, contingent workers in the U.S. has been increasing and has been for a long time. More than 25 years ago, James Baron and I published a paper on the trend to “take workers out” of their companies, documenting the trend—and its causes—of companies using more part-time people, temporary help, and outside contractors in place of full-time, regular employees.
Speaking of companies, they seem to be going the way of full-time jobs—disappearing. Gerald Davis, a business school professor at the University of Michigan who is writing a book about the disappearance of the corporation and its implications for society, told me that since 1997, the number of public corporations in the U.S. has fallen by more than half and that the “going public” fad of the 1990s is long gone. There were about one-third the number of IPOs in 2014 as there were in 1999. And it’s not just publicly traded corporations. As economist Richard Florida wrote in 2014, the rate of new business formation has declined by almost 50% since 1978.
Moreover, Davis argued that the new companies being formed may make tons of money for their founders and early stage investors, but they produce precious few real jobs. According to Davis, as of December 2014, Uber, which had a reported enterprise value of $50 billion, had about 2,000 employees but more than 160,000 “driver-partners” in the U.S. alone, while Netflix employs a small fraction of the number of employees that used to work in the company it supplanted, Blockbuster.
What “gig workers” stand to lose
In the U.S., for good or bad, many benefits and social assistance programs are largely, though not exclusively, handled by employers. The U.S. is unique among advanced industrialized countries in not offering universal health care coverage. Instead, coverage decisions are at employers’ discretion. According to the Kaiser Family Foundation, even though employer-provided health insurance coverage has declined, about 58% of the nonelderly population still received employer-sponsored health insurance coverage in 2013.
A similar situation holds for pensions. Even though employers have moved aggressively from defined-benefit to defined-contribution plans since 1998, most large employers still offer some form of retirement plan. Employers make contributions to Social Security for their employees, and while the self-employed are required to contribute, their effective contribution rate is less than the combined employee-employer rate.
Unemployment insurance, an important countercyclical source of income, is funded through levies on employers based on their number of employees and the stability of employment. Employers that have resorted to significant layoffs and firings often face higher contribution rates. Similarly, workers’ compensation benefits for on-the-job injuries rely mostly on experience-based employer insurance contributions. And many employers provide disability coverage, which can be an important supplement to disability payments provided through Social Security.
In addition, employers, particularly large employers, often offer various forms of assistance, including wellness programs to help employees manage their weight and stress, smoking cessation programs, and support for mental health, alcoholism, and addiction issues.
What happens to these benefits when fewer people hold traditional working relationships with employers? Individuals will have to shoulder significant risk, and taxpayers will likely end up paying for these services. Inadequately paid employees already benefit from public assistance—which has led to a conservative case for raising the minimum wage so the state doesn’t wind up subsidizing low-wage employers. People who can’t afford to retire or lack needed benefits are more likely to wind up on some form of public assistance.
The GAO report on the contingent workforce presents a series of dire, but not surprising, findings, including, “contingent workers are also less likely … to receive health insurance and pension benefits” and that such workers are generally not covered by wage and hour regulations, which include overtime pay and limits on hours worked, designed to protect workers.
Although contingent work is often promoted for the flexibility it offers, a study of reasonably well-paid technical contractors in Silicon Valley found that such latitude was a mirage. Because contractors need to maintain their reputations in the market, because they often bill by the hour and are chronically aware of what downtime costs them, and because of the nature of the work itself, “markets place more rather than fewer constraints on workers’ time.”
Moreover, much research shows that economic insecurity has adverse health effects. One study, using data from the Panel Study of Income Dynamics, found that the negative health effects associated with job loss persisted even after workers found new jobs.
The heart of the matter
Worker well-being is often ignored in discussions that emphasize productivity, profitability, economic growth, and similar concerns. In response, Pope Francis “has defined the economic challenge of this era as the failure of global capitalism to create fairness, equity and dignified livelihoods for the poor.”
Of course, taking care of human beings and turning a profit are not mutually exclusive goals. Robert Chapman has run the privately owned manufacturing firm Barry-Wehmiller for years, growing profits at a compounded annual rate of 16%. Chapman believes that “leadership is the stewardship of the lives entrusted to us,” and those lives include his employees. According to Chapman, having employees offers “a chance to profoundly affect the lives and wellbeing of others.”
As the cases I have discussed make clear, business success comes from having something that others cannot readily imitate. What a company can buy on the open market, others can too. That is why successful executives ranging from Bain consultant Fred Reichheld to Men’s Wearhouse founder and former CEO George Zimmer have long emphasized the powerful relationship between cultivating talented, loyal employees and having dedicated, long-term customers.
The so-called “sharing economy” has been better at distributing pain and economic stress than at providing people with good jobs and stable incomes. People are better off as employees, covered by employment protections, offered benefits, and, most importantly, having both greater income security and the benefit of being affiliated with an organization and fellow employees who can provide social support. Society is better off not having to shoulder the burdens offloaded by businesses that do not provide steady incomes or benefits to the people who do their work. And companies can benefit from having committed employees who can provide the customer service and innovation that leads to success.
Beyond the issue of costs and benefits, there is the matter of human dignity and well-being. This is the concern that animates the current Pope and many of his predecessors. It is something that should concern everyone, regardless of their religion. All faiths value human life. So should we all.
(This post was originally published on Fortune on July 30, 2015)
July 15, 2015
Why disability rights is everyone’s business
Last May, suffering from a back problem that would require surgery by early July, and facing the prospect of climbing up long and steep steps to board a British Airways 747 flight from London to San Francisco, I requested assistance. I didn’t get any help boarding the flight—not even someone to help me haul my carry-on up the stairs.
When I filed a complaint with the Civil Aviation Authority of the United Kingdom, I learned that I should have “pre-notified” the carrier that I needed assistance (assistance I did not foresee needing, as I would have had no trouble had the flight boarded through a jetway, as at a normal first-world airport). Heathrow Airport responded similarly—it was my fault for not anticipating a possible obstacle. Omniserv, the outsourced provider of transportation services for the physically challenged at Heathrow, sent me a nice e-mail that again invoked the requirement for pre-notification. But Omniserv went on to tell me that even if I had done everything I should have, “factors such as late or early flights, queues at Immigration or Security,” and the many other operational difficulties Heathrow Airport and BA face meant that accommodation and assistance were not assured.
A complaint to the U.S. Department of Transportation resulted in a finding that there was a violation of the Air Carrier Access Act. But that result wasn’t going to help me or anyone else in a similar situation. When the Americans with Disabilities Act was passed by Congress 25 years ago, the airlines got themselves exempted. Instead, airlines are covered by the Air Carrier Access Act of 1986. This law is much more lenient than the ADA because, unlike the ADA, it does not permit wronged individuals or groups of individuals to file private lawsuits when their rights are violated.
My job at Stanford Business School has brought me into contact with some amazing people who have endured substantial challenges. Those experiences, coupled with my own Heathrow-BA disaster, caused me to ponder how, in the 21st century, with all of the laws and regulations that are supposed to open as many domains of modern life as possible, including air travel, to people with physical challenges, my experience could have occurred. Here’s what I found out, and more importantly, why everyone should care about the enforcement of disability rights.
It will eventually happen to you
If you or your parents and other loved ones have the good fortune or good genetics to live long enough, the odds of your facing some form of physical limitation go up substantially. Disability is, unsurprisingly, age related. For the non-institutionalized population in the United States, just 5.5% of people aged 16 to 20 reported a disability in 2012, the most recent year for which data are available. For people 65 or older, the comparable percentage was 35.8%, and for people over 75, some 50% of individuals reported some degree of disability. Nor is this a finding unique to the United States. In a study of 59 countries, 8.9% of people aged 18 to 49 reported having a disability, while the corresponding prevalence estimate for people over 60 was 38.1%. Aging makes disability concerns almost universal.
Disability is not some rare condition that affects few people. As a press release from the U.S. Bureau of the Census noted, in 2010, “56.7 million people—18 percent of the population—had a disability.”
It’s a huge problem
If you think that laws and regulations forbidding discrimination or the evolution of social values that advocate greater acceptance for people with disabilities have solved most of the issues concerning individuals’ rights, I have news for you: you are wrong.
Consider my example, air travel. In 2004, the Department of Transportation received 11,519 complaints covering all air carriers flying into the U.S. By 2012, less than a decade later, the number of complaints had soared, more than doubling to some 25,246.
While access to public places such as restaurants and stores has clearly gotten better, even 25 years after the passage of the ADA, access remains a problem for many, as evidenced by the growing number of suits filed. A Wall Street Journal article in October 2014 noted that the number of lawsuits filed under a provision of the ADA establishing “accessibility requirements for businesses and other public places” increased by some 55% between 2013 and 2014.
And no, the soaring number of suits is not from overzealous attorneys seeking individual enrichment. The complaints of the disabled are far from frivolous. For instance, Larry Paradis, executive director and co-founder of the public interest litigation firm Disability Rights Advocates, told me that just 15% of New York subway stations have elevators, thereby making it impossible for people in wheelchairs or, for that matter, anyone who can’t navigate steps to use 85% of the system’s stations. The Bay Area Rapid Transit System (BART), which carried its first passengers in the early 1970s, was originally designed without disability access. Although elevators were eventually added in all BART stations, many of them are not located in the center of the stations, so maintenance and cleanliness remain problematic.
Moreover, the growth of Uber, Lyft, and similar ride-sharing services has made transportation for the disabled even more problematic. While taxi companies, bus systems, and other common carriers face legal requirements to provide some access for people with limited mobility, so-called platform companies that merely connect drivers with passengers and whose terms of service specifically state that the companies are not in the transportation business have told Paradis and his legal colleagues that disability rights regulations do not apply to them.
Then consider the case of employment. Providing people with disabilities better access to jobs is something everyone should support. Conservatives should like providing more job opportunities, as it permits people to earn money to support themselves and thus diminishes their need for public assistance. Liberals should be in favor of more job opportunities, as work can provide people not just with money but with dignity and the sense of self-worth that comes from making valued contributions to society.
Unfortunately, barriers, stereotypes, and prejudice continue to confront people with physical challenges. As a consequence, only a small proportion of disabled individuals is able to participate in the workforce. According to an online disability statistics data search tool maintained by Cornell University, thirty years ago, 34.6% of people between the ages of 18 and 64 who had a work limitation reported having worked 52 hours or more in the prior calendar year. In 1990, the year the Americans with Disabilities Act passed, that proportion reached a high-water mark of 39.4%. But by 2013, the latest year for which data are available, just 21% of disabled people—barely more than one in five—worked as much as 52 hours (an average of one hour a week) or more in the prior year. From 34.6% to 21% is a relative decline of about 39%, meaning the proportion of people with disabilities in the labor force has fallen by almost 40% over three decades.
Some argue that organizations that advocate for the disabled are powerful, but in 2013, the U.S. Senate failed to ratify the “United Nations Convention on the Rights of Persons with Disabilities.”
Is there any hope?
A United Kingdom task force on disability rights described the current situation well: “Disabled people are one of the most disadvantaged groups in society.” Although there are laws designed to provide people facing temporary or permanent physical challenges appropriate accommodations and assistance, “gaps and weaknesses leave disabled people without comprehensive and enforceable civil rights,” rendering many of them unable to fully participate in activities ranging from employment to air travel free of discrimination.
But this situation can be changed, as I have learned from watching Ben Foss over the years. Ben has dyslexia and has earned both an MBA and a law degree from Stanford. Foss founded an organization, Headstrong Nation, to help parents and dyslexic children thrive in an educational system that has not offered legally required assistance. The motto of the organization is informative: “Dyslexia is not a disease. It’s a community.” Foss also wrote a book, The Dyslexia Empowerment Plan, that helps dyslexics and their families and friends understand their rights and what actions they can and should take to get the resources and assistance necessary to facilitate their education. While 15% of U.S. students are dyslexic, according to Foss, only 3% of people graduating from a four-year college are, as many dyslexics fall by the wayside in the educational system. How unfortunate, as one-half of NASA employees are dyslexic and the ranks of the dyslexic include Charles Schwab, Richard Branson, the head of the Cleveland Clinic, and noted attorney David Boies, among many others.
Foss told me that private legal action has been essential to improving the lives of the disabled. Particularly in California, where disability lawsuits can provide not just injunctive relief to remedy conditions but also fines and legal fees, the private bar has been instrumental in encouraging companies to ensure the rights of the disabled. Organizations such as the National Federation of the Blind that forcefully advocate for the disabled have helped educate companies and influenced their policies to make their products more user-friendly. But as smartphones and tablets become increasingly prevalent and necessary for employment and commerce, upgrades to applications do not always preserve disability access.
As Ben Foss commented, it is ironic that, as a country, we send people to fly military missions overseas but, if they come back disabled, they are often unable to easily fly for business or to see family and friends back in the U.S.
With enough public pressure and understanding from people who recognize that someday they and their loved ones will face these challenges, maybe there’s hope for meaningful change. After all, offering help to those who need it is the humane and compassionate, as well as legally required, thing to do.
(This post was originally published on Fortune on July 14, 2015)
June 25, 2015
Startups and Uber-like jobs will not solve America’s employment woes
Americans are a positive and optimistic lot, emphasizing that people need to go out and solve their own problems. That’s why there are myriad stories about Detroit’s comeback. And that’s why there are narratives like the following: “technology companies have contended that their virtual marketplaces, in which people act as contractors and use their own possessions to provide services to the public … afford workers flexibility and freedom,” writes The New York Times.
Of course, the California Labor Commissioner’s recent ruling that Uber drivers should be classified as employees has sparked renewed discussion as to whether being a contractor—and having access to no benefits, ranging from unemployment compensation, to workers’ compensation, to employer-provided health insurance—is actually such a good thing for workers.
Whatever the situation for workers, venture capitalists like this business model. In fact, they have invested more than $9 billion in so-called “on demand” companies since 2010. Clearly, there are going to be many more free agent opportunities ahead.
Meanwhile, we worship entrepreneurs and entrepreneurship, which, along with free agency, is touted as the answer to almost everything, from poverty to women’s economic empowerment.
I have great respect for entrepreneurs and for the many people cobbling together a living delivering groceries, packages, and people. But before we send everyone to entrepreneurial training and encourage even more people to try to climb the economic ladder doing contract labor, we need to get a few facts straight.
Much of our policy discussion about income and unemployment proceeds more from urban legend than from data.
No, entrepreneurship has not soared over the past two decades. The Kauffman Foundation’s index of entrepreneurial activity shows no change in startup activity over the last 16 years.
No, there aren’t lots of people now working in new, young companies. According to the Bureau of Labor Statistics, in March 1994, there were some 4.1 million people working in companies less than one year old. The comparable figure for March 2014: 2.9 million. And the decline in the number of employees at younger companies holds for slightly older establishments as well.
The proportion of people working for the very largest companies, such as the Fortune 500, remains at about 17%. And that percentage has “been relatively constant over the last 20 years.”
People who work in larger enterprises earn more on average than their small-firm counterparts. According to BLS data from 2006, people who worked in establishments with more than 500 employees earned about 45% more than those working in places that had 49 or fewer people on their rolls. And what’s true in the U.S. also holds for the 27 countries in the European Union, has been true over the years, and continues to be the case.
A peer-reviewed academic study using data from Denmark concluded that entrepreneurial establishments accounted for “about 8% of total gross job creation in the economy,” and that “jobs generated by entrepreneurial establishments are to a large extent low-wage jobs.”
If new businesses are going to be the answer to our economic malaise, remember that about half of all new businesses fail within the first five years.
None of this should be news. In July 2010, former Intel CEO Andy Grove wrote an article for Bloomberg Businessweek in which he noted that, “employment in the U.S. computer industry is about 166,000 lower than it was before the first PC … was assembled in 1975.” Grove argued that new industries and high technology would not solve the country’s employment and wage problems. He also wrote that most of the jobs created by companies such as Apple and Amazon were either relatively low-paid warehouse jobs, retail jobs, or jobs that were being created offshore. He pointed out that Chinese contract manufacturer Foxconn then employed more people than “the combined worldwide headcount of Apple, Dell, Microsoft, Hewlett-Packard, Intel, and Sony.”
That same month, I met David Stockman, President Reagan’s former budget director, at the Aspen Ideas Festival. At a cocktail party discussing Grove’s recent piece, Stockman commented that from 2000 to 2007, a period of recovery from a recession and a time characterized by reasonably strong economic growth, the only employment sectors that had added significant numbers of employees were education and health care, both, of course, funded by the government. BLS data support his observations, and with few exceptions such as computer programmers and systems engineers, the trends Stockman noted five years ago continue to the present.
The inescapable conclusion from these and many other similar facts: new industries, new companies, and free agent employment relations are not going to fix the stagnant median incomes, growing income inequality, and persistent unemployment issues confronting the U.S. economy.
What might help?
As economist Paul Krugman ceaselessly, but correctly, points out, the strong U.S. recovery from the recent severe economic recession is mostly a product of fiscal stimulus—government spending—and low interest rates. Macroeconomic policy clearly matters for shoring up insufficient aggregate demand and spurring investment through low interest rates. Governments need to continue to try to get macroeconomics right.
Second, as Grove suggested five years ago, the U.S. needs to rebuild what he referred to as “our industrial commons”—the system of incentives and taxes that develops technology and also encourages companies to scale that technology in ways that benefit U.S. labor markets.
And third, we may be able to learn an important lesson from one of America’s favorite pastimes, live professional sports. Over the years, free agency has come to baseball, football, and basketball, permitting players, particularly stars, to maximize their lifetime earnings by moving from team to team and by entering into long-term contracts that provide financial insurance in case of injury or deteriorating skills. That part of the story is well known.
What is less often acknowledged or appreciated is the role of the professional players’ associations—the, pardon the expression, unions—that have bargained to obtain a higher proportion of total revenues for their members, higher minimum pay levels, and better economic and injury protections for players. Even free agent stars understand that, collectively, they will do better if they work as part of a group rather than just negotiating completely on their own.
This lesson in the importance of collective action is a story now being repeated as many fast-food employees press for higher wages. It may be coincidence, but the stagnating wages of workers, not just in the U.S. but globally, have occurred at the same time that union density—the proportion of the labor force covered by collective bargaining—has declined around the world.
It’s election time again. We are going to be hearing a lot about jobs, the middle class, stagnating wages, economic insecurity, and the many issues confronting American workers. When politicians, or, for that matter, business leaders or policy experts, put forth their favorite remedies, let me offer some practical advice: spend some time looking at the facts. And then ask whether entrepreneurship and new businesses, free agent workers with no job benefits or protections, or any other new employment arrangement is going to make any discernible difference.
I fully understand why, in a deadlocked, toxic political era, with Congress enjoying little public confidence, it is tempting to believe that we can indeed pull ourselves up by our bootstraps—maybe by delivering more packages or groceries, working more hours, or starting a new business. But there is no evidence that any of these solutions will work for most, or maybe even many, people.
I wish it weren’t true, but as Andy Grove, among others, noted almost a half-decade ago, improving incomes and working conditions for U.S. employees will take more than just individual initiative and hard work, important as those things may be. Change in the labor market and its consequences for human beings will require public policies that make jobs—good jobs, jobs that provide a decent wage, jobs that provide some sense of security for the future—as much of a priority as profits and economic growth.
June 9, 2015
The reputation economy and its discontents
A few years ago, when I had a hideously bad auto repair experience, I posted a negative rating on Yelp. Soon, though, the rating disappeared from the first page as positive ratings poured in. That experience made me suspicious about what has come to be called the reputation or ratings economy.
Over the ensuing years, ratings and ratings websites have proliferated. Everyone and everything now gets rated, from mental health providers to, as New York Times columnist Maureen Dowd humorously noted when she had trouble getting a ride, Uber passengers.
Curious about whether the ratings game was a good thing, I did a deep dive into this world and quickly discovered many problems with the reputation economy. Here is what I learned.
Ratings matter
Michael Fertik, the founder of Reputation.com (originally called ReputationDefender), has built a huge business on the fact that ratings and reputations matter and that most people and companies understand that. His company, started in 2006, “has curated the online reputation of 1.6 million customers who pay … to have their most flattering activities showcased to the world via search engines,” The Guardian reported. A person’s reputation—whether accurate, manufactured, or some combination of the two—can have an impact on job prospects and the ability to raise capital for startups. And people’s social status affects their marriage prospects and partners.
Ratings profoundly affect consumer choice. One survey of more than 1,000 people reported that two-thirds of respondents read online reviews, that 90% of customers who accessed reviews said that their buying decisions were influenced by positive reviews, and 86% said that negative reviews influenced their choices. The scholarly literature concurs with the importance of consumer ratings. One article noted that, “consumer reviews have been shown to predict purchasing decisions … to drive further consumer ratings … and to have more influence than expert reviews.” Moreover, that same piece stated that, “sales figures increase as a function of product ratings rather than the quality of the product.”
But are they accurate?
The potent influence of consumer ratings raises the question: how accurate are these ratings that so powerfully affect judgment and decision-making? The answer to this question depends on what you mean by accurate.
Consider three examples that vary both in the importance of selecting the right provider and also in the extent to which there are objective criteria of performance.
Doctors
There’s probably nothing more important than getting the best possible medical treatment. Medical outcomes, ranging from the degree of improvement in a person’s illness to the frequency of iatrogenic (medical-treatment caused) illness, are observable. You’d expect consumers to be fairly accurate in assessing the quality of the care they receive. But they aren’t.
Consumers’ Checkbook, a membership-subscription organization that operates in several metropolitan areas, including San Francisco, asks consumers to rate primary care physicians. Checkbook also surveys practicing physicians for their nominations of the best doctors in various specialties, including primary care. The organization, which accepts no advertising, also performs its own physician quality ratings.
Of the 104 top-rated primary care doctors as assessed by patients in 2014, just 17 were nominated as the best by their medical peers. And barely 60% of the doctors rated highest by patients were top-rated by Checkbook.
Teachers
Then there are those ubiquitous teacher ratings, particularly of college professors. For decades, higher education institutions have used student surveys as part of the faculty evaluation process, and now most places mandate end-of-course student evaluations. If, like me, you believe that the fundamental job of a teacher is to teach—to impart knowledge that students learn and retain—as contrasted, for instance, with providing entertainment or becoming students’ best friends, then it seems reasonable to measure accuracy by examining the relationship between teacher ratings and what students learn through an objective measurement.
The good news is that teacher ratings have been done for a long time and there are numerous studies of the relationship between student evaluations and learning. The bad news is that student course evaluations do not have any relationship with objective measures of what students have learned—a fact that has been known for more than four decades. For instance, one paper, published in 1972, studied 293 undergraduates in a calculus course and found that, “Instructors with the lowest subjective ratings received the highest objective scores.” The fact that student ratings do not offer any valuable insight into how well students learn has not affected the prevalence and use of the ratings.
Restaurants
Restaurant quality and the dining experience are both more subjective and also have fewer consequences than choosing the right doctor or getting a good teacher. Michelin has, since 1926, employed anonymous, knowledgeable, experienced experts to go to cities all over the world and find the very best places to eat. We can compare how Michelin rates restaurants with the same restaurants’ ratings made by the general public on sites such as TripAdvisor.
I selected two cities, San Francisco, near where I live, and Barcelona, a place my wife and I recently visited. I looked at the 2015 Michelin lists of the places that earned stars (in San Francisco, I considered only establishments located in the city itself) and also ratings on TripAdvisor. Here’s what I found.
Barcelona has 21 one- or two-star Michelin restaurants. Of these Michelin-rated establishments, presumably the very best in the city, only one is in TripAdvisor’s top 10, only two are in the top 50, and only seven of the 21 rank in TripAdvisor’s top 100. Nectari, with one Michelin star, ranks 2,262nd on TripAdvisor, and Enoteca ranks 1,333rd.
Diners in San Francisco agree with Michelin only slightly more. Of San Francisco’s 24 Michelin-starred restaurants, one, Gary Danko, is in TripAdvisor’s top 10, and six are in the top 50. However, Coi, one of four places in the entire Bay Area that earned two Michelin stars, ranks just 562nd on TripAdvisor.
At least for these three domains, and quite possibly many others, ratings by consumers—of restaurants, academic instruction, or medical services—are quite uncorrelated with either expert opinion or objective measures of performance. This fact, of course, is precisely why companies in the reputation management space can be successful—reputations can be “managed” in the best and worst sense of that term, regardless of actual quality.
Why ratings encourage the wrong behaviors
Because ratings, and the reputations those ratings create, have economic consequences, there are, unsurprisingly, substantial incentives to game the system. One increasingly common way of gaming the system entails hiring people (or developing software, which is fortunately easier to detect and prevent) to post inauthentic reviews. One study estimated that 16% of the restaurant reviews on Yelp were fraudulent, that fraudulent reviews were more extreme, and that restaurants with weak reputations were more likely to commit review fraud. A 2012 study by IT research firm Gartner estimated that 15% of online reviews were fake. In 2013, New York State’s attorney general “announced a deal with 19 businesses that agreed to stop writing fake reviews.”
Numerous websites pop up (and then disappear) offering people for hire to write positive reviews about you and negative reviews about your competitors. Online purchasing is supposed to give customers access to informative reviews before they make a purchase decision, so maintaining the integrity of these reviews is economically important. Not surprisingly, then, both Amazon.com and Yelp have been increasingly aggressive in their attempts to build algorithms that weed out fake reviews and to initiate legal action against their perpetrators.
Adi Bittan, a former Stanford MBA student and co-founder of OwnerListens, told me that there were two types of strategies that companies used: “white hat” and “black hat” approaches. “White-hat” strategies entail moves such as figuring out who your most satisfied customers are and then encouraging them—and even making it easier for them—to write reviews on popular websites. “Black-hat” strategies involve disparaging competitors, or maybe even future competitors. In one notorious example, Chicago celebrity chef Graham Elliot’s “highly anticipated and oft-delayed gourmet sandwich/soft serve shop” got a one-star review on Yelp from a prospective patron who said his “otherwise pleasant walk” was ruined by going to the establishment and finding that it was closed. The café had not even opened its doors for business at that point. Elliot, whose opinions of Yelp are essentially unprintable, took this as an example of just how bad reviews can be.
There are more problems with the reputation economy beyond just manipulated and inaccurate ratings. The prospect of customer reviews can induce behaviors designed to increase customer ratings in ways that are not useful and are sometimes harmful.
Returning to teacher ratings, there is a common belief, supported by at least some evidence, that one way to achieve higher ratings is for instructors to give the students who are doing the ratings higher grades. This belief produces the now-endemic grade inflation in higher education and also makes grades less meaningful as indicators of student achievement or ability. It’s unclear if higher grades produce higher teacher ratings, but the belief that this relationship holds nonetheless affects instructor behavior.
This behavior is all about reciprocity—I help you out (for instance, by giving you a good grade) and you help me out (for instance, by giving me a high rating)—and about the natural human tendency to be nice, along with the associated desire not to be perceived as negative or difficult. These ideas call into question what happens when, as with teachers or Uber drivers, counterparties in a transaction rate each other.
An article in TechCrunch noted that eBay dispensed with reciprocal reviews in 2008 and also reported on a study that found that the identical property was rated 14% higher on Airbnb (which uses reciprocal ratings) than on TripAdvisor (which does not). That same piece noted: “People want to look good in social settings in which people’s identities are not anonymous, people tend to shy away from saying bad things because they don’t want to be the one who seems like a constant complainer or never-ending nagger.” The average Uber driver score is too high, according to Bittan, who believes that reciprocal reviews create incentives for being overly positive to get a positive review in return.
And there are more serious problems than just giving higher grades or higher ratings to encourage others to help you out in return. Doctors seeking higher patient ratings are more willing to order (unnecessary) diagnostic tests or to prescribe antibiotics or potent painkillers even when not needed or helpful, particularly if patients request them. In other words, reviews or the prospect of being reviewed changes treatment: “In a 2012 survey by the South Carolina Medical Association, half of the physicians surveyed said that pressure to improve patient satisfaction led them to inappropriately prescribe antibiotics or narcotics.” It would be interesting to see if there is a relationship, both over time and across settings, between the prevalence of patient reviews and the growing problem of opiate abuse.
Is there any way out of this problem?
Cheating, particularly in its extreme or least sophisticated forms, can be detected statistically, albeit imperfectly. Economists Brian Jacob and Steven Levitt, in a famous paper, showed that “unexpected test score fluctuations and suspicious patterns of answers” could be used to detect teachers cheating to artificially raise their students’ scores. As I noted above, Yelp, Amazon, and Google, among others, are all working to eliminate fake reviews, including by building algorithms that highlight suspicious activity.
Amazon’s “verified purchase” labeling of reviews and related strategies help to raise the cost and difficulty of flooding sites with bogus information.
The world of assessing job candidates and conducting performance appraisals, both forms of rating, offers another useful solution: provide standardized product or service dimensions for evaluation. One reason Michelin’s ratings and diners’ ratings differ is that Michelin employees have a more standardized set of criteria for evaluating restaurants and a process to ensure that those standards are used.
Bittan, whose company was established to help provide businesses of all sizes with real-time customer feedback, preemptively solve service issues, and head off negative reviews, made two other suggestions. She noted that people are less likely to engage in deception if they can’t do so anonymously, so requiring people to identify who they are might help. And she noted that, for many obvious reasons, your friends and even acquaintances are more likely to provide useful and honest information than are others. However, in this regard, “some data show that a good majority of people in North America believe and trust online reviews more than they trust their friends’ opinions.” Bad decision.
Certainly don’t rely solely on the most recent reviews or the most prominent online search results. Most people are cognitively lazy, looking only at summaries and a few recent reviews, and that’s precisely the behavior that reputation management of any form counts on. Drowning negative reviews in an ocean of positive ones is thus the simplest reputation management game—and, ironically, one of the easiest to detect. You can also read (or program a computer to read) the most positive and negative reviews to see if many of them use similar language, a possible but not perfect indicator that they are fake or managed.
In the end, if social capital is truly like money, preventing counterfeiting is going to become increasingly important. Just as in the case of money, there is an arms race between those seeking to prevent counterfeit ratings and those who seek to profit from the fact that reputations can be “manufactured,” or at least managed. And as the economic implications of ratings grow, the temptations to cheat will increase proportionately.
The wonderful world of the reputation economy is far from completely wonderful—or even honest. Therefore, to the extent possible, you might be better off relying on unbiased expert opinions if you can find them. And you can in many domains, although sometimes you might have to pay. Many newspapers publish best restaurant lists, and numerous organizations, including Consumer Reports and Checkbook, seek to provide unbiased reviews of all types of products and service providers.
Expert opinion can be bought and sold, too, but experts have more to lose and have more of their social identity tied up in their unbiased expertise than the people selling their ratings on some website, or maybe even than the companies that manage reputations for a profit. And don’t let the ready availability of summary scores tempt you to skimp on the effort needed to discern the best from the rest. In the reputation economy, too, “let the buyer beware” is a useful guideline.
(This post was originally published on Fortune on June 2, 2015)
April 14, 2015
Is your employer killing you?
McDonald’s recent decision to raise the pay of workers at company-owned restaurants to an average of $9.90 an hour, and to provide employees some paid time off once they have worked a year, made news for what that action says about the tightening labor market and the campaign to get low-paid people a living wage.
But pay levels and other working conditions such as vacation and paid sick days affect more than just standards of living. People spend a lot of their time at work and, unsurprisingly, what happens in the workplace profoundly influences people’s mental and physical health. So if you think your job may be killing you, recent research suggests you just might be right.
It’s not simply overwork—which the Japanese call karoshi and the Chinese call guolaosi—that is a problem, although excessive work hours do adversely affect health. Aside from highly publicized cases such as the Merrill Lynch intern working in London who collapsed and died after working 72 hours straight, a report from the Chinese Labour Bureau cited an estimate that one million Chinese die from overwork each year, and a study of California employees reported a positive relationship between work hours and self-reported high blood pressure.
First, a Harvard Business School graduate told me about going on antidepressants within a few weeks of starting work at a high-tech company. Then a senior person at an organization providing health care described how she and her colleagues coped with workplace stress by becoming addicted to alcohol and drugs. Another individual, from television news, related that she gained 60 pounds after she took on a new, more stressful job with more travel. So I decided to find out whether there was systematic evidence that work environments really can be hazardous to people’s health.
Harvard Business School Professor Joel Goh, my Stanford colleague Stefanos Zenios, and I conducted a meta-analysis—a statistical procedure that analytically combines the results of, in this case, more than 200 studies—to explore the effects of 10 workplace conditions on four health outcomes: mortality; having a physician-diagnosed medical condition such as diabetes, cardiovascular disease, or another illness; self-reported physical health problems; and self-reported mental health problems. Research shows that self-reported physical health predicts subsequent mortality and illness.
The 10 workplace conditions included some that affected people’s level of stress—such as work-family conflict, economic insecurity (fearing for one’s job and income), shift work, long working hours, low levels of organizational justice (fairness), an absence of control over one’s work, and high job demands—and one factor, whether the employer provided health insurance, that, particularly prior to the passage of the Affordable Care Act, affected people’s access to health care.
Unsurprisingly, extensive epidemiological evidence shows that stress has both a direct effect on health and also affects individual behaviors such as smoking, overeating, drug abuse, and alcohol consumption that in turn affect an individual’s health and mortality.
Our summary of existing research shows that job insecurity increases the odds of reporting poor physical health by about 50%, high job demands raise the odds of having a physician-diagnosed illness by 35%, and long work hours increase the odds of mortality by approximately 20%. To put these and the many other results reported in the paper in some perspective, we compared the health effects of harmful workplace practices to those of second-hand smoke, a known carcinogen and risk factor for cardiovascular disease. In almost all cases, the health effects of the individual workplace exposures were comparable in size to the effects of second-hand smoke on health and mortality.
We used these results, along with estimates from nationally representative survey data on the prevalence of the various harmful workplace conditions in the working population, to build a model that estimated aggregate mortality from these conditions. The results showed that approximately 120,000 deaths occurred per year from exposure to harmful workplace circumstances. Comparing this number to the leading causes of death as reported by the Centers for Disease Control shows that harmful workplaces, in aggregate, would be the sixth leading cause of death in the U.S., just behind accidents and strokes but greater than kidney disease, suicide, diabetes, and Alzheimer’s disease.
Three obvious implications emerge from these estimates and the enormous amount of epidemiological research connecting workplace stress to health problems. The first concerns employers. Many organizations have instituted wellness programs to improve employee health. Self-insured employers recognize that they bear the cost of illness, and all employers seek to reduce the productivity losses from time lost to sickness and from employees who are at work but not feeling well enough to do their best, a condition called presenteeism. Yet the evidence on the effectiveness of these programs is mixed. If employers instead intervened more directly to remedy the workplace conditions that harm health, they would reduce health costs and also increase productivity and performance.
Second, if public policy is truly concerned with preserving life, enhancing people’s well-being, and reducing the U.S.’s very high health costs, the workplace would seem like a good place to focus more attention.
And third, if employees are concerned about their lifespan and their physical and psychological well-being, they need to pay attention to their work environments and to how much workplace-induced stress they are exposed to. Just as employees are rightfully concerned about physical safety and exposure to toxic chemicals at work, they should be equally vigilant, when choosing an employer, in assessing the psychological environment of the workplace and whether they would be excessively exposed to workplace stressors.
As with environmental pollution, the costs of toxic workplaces fall on both individuals and the companies for which they work, which means it is in everyone’s interest to reduce the toll of harmful workplace practices.
(This post was originally published on Fortune on April 13, 2015)
The post Is your employer killing you? appeared first on Jeffrey Pfeffer.
March 27, 2015
Who are the world’s best leaders?
Fortune, like many publications, likes lists—the most powerful women, the best companies to work for, the most admired companies, and of course, the annual list of the world’s greatest leaders.
Although rankings are invariably imperfect and subjective, figuring out who the best leaders are might be the most difficult task of all.
We love leaders and leadership. That’s because we ascribe to leaders all sorts of mythical powers to improve performance and change the world—a phenomenon that the late business school professor James Meindl referred to as the “romance of leadership.” It turns out that research on the effects of leaders is much more equivocal than the popular mythology might lead one to believe. That’s because leaders operate under constraints—the limits imposed by economic circumstances, history, and other people.
But never mind that. Leaders like to generate stories of their efficacy and greatness, and they have developed well-oiled machines to do just that. Remember the good old days, before the age of the celebrity CEO? Before leaders from all sectors felt it necessary to have a public relations machine, and maybe a ghostwriter or two, pumping out books and other materials heralding their greatness?
Because people seek inspiration and prefer positive, uplifting stories, few do due diligence on these self-promotional presentations. So leaders in all sectors get away with a lot, until they don’t. Maybe the classic case was that of Ken Lay of Enron—both he and the company were widely admired by the media and management writers until they were revealed as frauds.
This parade of self-promotion is rational for reasons beyond feeding leaders’ egos. Research shows that leader image has a nice, positive effect on CEO pay. For instance, Berkeley business school professor Barry Staw found that companies and leaders that adopted popular management techniques did not experience better economic performance, but leaders who embraced popular management fads earned more money. Another study found that, even with other factors that might affect CEO pay statistically controlled, the better the CEO’s reputation, the higher the stock-based compensation.
However, as Jim Collins noted in his best-selling Good to Great, the best leaders, what he called Level 5 leaders, have two characteristics: fierce determination coupled with being shy and self-effacing. By not seeking the limelight, these leaders let other people contribute and shine. And by not spending excessive amounts of time going to every possible conference and public event to promote their greatness, these leaders have the time to do the real work of helping their companies achieve outstanding performance. The leaders Collins identified in his research were unlikely to be on any “best leaders” lists because, for the most part, they operated under the radar.
Although good leaders are limited in what they can accomplish by their circumstances, bad leaders can create toxic, harmful work environments: situations that create stress and negatively affect people’s physical and psychological well-being. As Harvard Business School professor Joel Goh and two colleagues (including me) reported, work environments have a profound effect on individuals’ health, medical costs, and even lifespan. The study found that the elements of a harmful workplace—things such as excessive working hours, an absence of job control, work-family conflict, and economic insecurity, including layoffs—were estimated to cause 120,000 excess deaths per year in the U.S. and to contribute almost $200 billion to health care costs.
Based on these findings, I’d suggest one criterion to evaluate leaders: their effect on human health and mortality as well as psychological well-being. Gallup and Healthways have teamed up to do daily surveys of well-being. The companies recently reported rankings by state—one interesting way to assess the performance of political leaders.
Based on well-being and mortality, at least two of the leaders on Fortune’s 2014 list are possibly problematic choices. Alan Mulally, former CEO of Ford, certainly produced great economic results for the company, leading it through the recession without going into bankruptcy. But Ford has a two-tier wage system that relegates about 28% of its workforce to a salary range roughly 31% lower than that of more senior employees not subject to the wage concessions. And Ford continues to lay off employees, including getting rid of some 100 people by robocall. Both low wages and layoffs contribute to poor health and psychological stress.
And then there is Pope Francis. While the new pope is a breath of fresh air and reform, the Catholic church remains doctrinally opposed to birth control, a position that is at least somewhat logically inconsistent with its pro-life platform. As reported by the World Health Organization, 289,000 women died during and following pregnancy and childbirth in 2013—almost 800 women per day. That same report noted that “maternal mortality is highest for adolescent girls under 15 years old and complications in pregnancy and childbirth are the leading causes of death among adolescent girls in developing countries.” The policy implications of such data seem clear, particularly when it comes to helping teenagers avoid accidental pregnancies that leave them at risk of death and disease.
The greatest leaders are the ones who run places that care for their employees; people like James Goodnight, co-founder and CEO of software company SAS Institute, which has an employee with the title of Chief Health Officer. The greatest leaders are also those who run organizations whose mission and purpose entails caring for people’s health and well-being. One such person is Amir Dan Rubin, CEO of Stanford Healthcare. The mission adopted after Rubin took the job in late 2010: healing humanity through science and compassion, one person at a time.
If more leaders at all organizations shared the objective of healing humanity, both physically and psychologically, and made decisions that incorporate well-being and not exclusively economics—something that all leaders could do, regardless of their level or sector or location—we would all be in much better shape.
(This post was originally published on Fortune on March 26, 2015)
The post Who are the world’s best leaders? appeared first on Jeffrey Pfeffer.
March 20, 2015
What Rebekah Brooks can teach us about power
Rebekah Brooks, former editor of the U.K.’s The News of the World and The Sun, confidante of British Prime Minister David Cameron, and a favorite of Rupert Murdoch, is back. Acquitted of phone hacking and three other charges last summer, she may, according to recent reports, be returning to News Corp to take on an executive role leading the media company’s digital initiatives.
Profiles of Brooks, who rose from being a secretary to one of the most powerful people in British journalism, invariably mention her toughness, willingness to bend if not break the rules, her networking ability, her capacity to manage up, and her combination of “charm, effrontery, audacity, and tenacity.” She displays confidence and is not afraid to use profanity and exact revenge against those who cross her. In short, she seems to behave in ways that defy common stereotypes about women.
Brooks sounds a lot like Martha Stewart, who bounced back from an insider trading conviction to become one of the hottest-selling brands, pursued by both J. C. Penney and Macy’s. Stewart, who is often described as “cold and distant,” also eschews attempts to be likeable and is notoriously hard on her employees. She, like Brooks, is said to have enormous self-confidence and “provides a refreshingly clear path to success: work hard, know your value, and have enough confidence in your work and value to keep pressing forward whether or not people seem to like you.”
During the course I teach at Stanford on power, people ask me about this advice all the time. That’s because the common media narrative is that “when women speak up in negotiations or other meetings, they are often penalized for doing so,” a position used to criticize technology researcher and commentator Vivek Wadhwa and his recommendations that women display confidence in work situations. The common takeaways from much of the research on this issue are wrong. Here’s why.
I haven’t read every single study, but I have read many of them, including the one recently cited by New York Times columnist Farhad Manjoo. These studies all share the same problem: the people asked to evaluate others face no consequences for their decisions. For instance, participants are presented with written or video materials and then asked whether they would hypothetically be willing to work with the other person, or for their evaluation of that person. But the participants don’t confront a situation of outcome interdependence. By that I mean they don’t have to actually work with the person they are evaluating; their own fortunes never depend, even in part, on the behavior of the person being judged.
In the real world, outcome interdependence is common. If I choose a subordinate, select an advisor, or help pick a co-worker or teammate, my own outcomes depend on the skill and drive of the person selected. Absent that outcome interdependence, I am much more likely to evaluate others on their likeability, which is partly determined by how they conform to role expectations, including gender role expectations.
A Stanford doctoral student, Peter Belmi, and I have a manuscript in preparation summarizing three studies showing this effect: outcome dependence changes how people weigh competence versus likeability when evaluating others. Likeability is more important when study participants’ outcomes don’t depend on the other person, and competence becomes comparatively more important when they do.
So, the next time you read about a study showing that anger or being demanding penalizes women, look at the details. If the study participants faced no real consequences for their choices, of course they preferred more likeable and less role-discrepant behavior.
But when people need to get things done—in journalism, or retailing, or, for that matter, running a large corporation—they choose people who can make things happen, whether or not the people are nice or fit some stereotype.
That’s the lesson to learn from watching the comebacks of Rebekah Brooks, Martha Stewart, and the many other women who don’t conform to what they are supposed to be and do. They don’t have to. Their ability to get stuff done overrides other considerations.
(This post was originally published on Fortune on March 6, 2015)
The post What Rebekah Brooks can teach us about power appeared first on Jeffrey Pfeffer.
February 26, 2015
Why powerful people are rarely punished appropriately
Justine Sacco, Lindsey Stone, and Alicia Ann Lynch all lost their jobs and suffered tremendous emotional distress from the vitriol unleashed in the social media world on each of them. Their “crime”: some tasteless posts on Twitter or Facebook.
Around the same time that several stories of Internet harm surfaced, we learned that Dominique Strauss-Kahn—who after being accused of assaulting a New York hotel maid was allowed to address the French parliament and was appointed to a bank’s board of directors—was about to have charges dismissed related to his participation in orgies involving prostitutes. At Stanford University, since 1997, only 25 sexual assault or harassment cases have made it through the university’s disciplinary process, with just 10 students found culpable, only one of whom was expelled. People who publish tasteless social media posts appear to face more opprobrium than those who engage in sexual assault.
Powerful executives and public figures often face surprisingly few consequences from actions that can cost companies billions of dollars and thousands of employees their jobs. One study of directors found that people who had served on the boards of banks requiring government assistance during the financial crisis experienced turnover only imperceptibly higher than that of peers who had served on banks that weathered the financial storm in better shape. New York Times columnist Andrew Ross Sorkin detailed the many directors who maintained their positions even after presiding over financial disasters—Stan O’Neal, former CEO of Merrill Lynch, who was on the board of Alcoa; Charles Prince of Citigroup, who was on the boards of Xerox and Johnson and Johnson; and James Johnson, the former chief executive of troubled mortgage giant Fannie Mae, who remained on the boards of Target and Goldman Sachs.
How can we make sense of the imbalance between the severity of the punishment people face, or the lack thereof, and the damage those individuals caused or at least oversaw?
Social ties and power
Social psychology teaches us that we tend to like people (and things) that remind us of ourselves. We bond more quickly and closely with strangers who mimic our words, intonations, and behaviors, creating a “current of good will between two people.” We are more likely to do favors for strangers who share even incidental similarities, such as birthdays, fingerprint patterns, personality scores, or first names, all of which create a “unit relationship,” a sense of association and shared social identity.
These facts make organizational self-regulation inherently problematic. Stanford enforces its standards through a board of judicial affairs that consists of 15 people drawn from faculty, staff, and students, an arrangement typical across universities. Boards of directors share social ties and experiences, and are likely to see board members even from other companies as somewhat similar. Expecting harsh punishment and sanctions from others who share social identities and affiliations flies in the face of what we know about interpersonal behavior—and unsurprisingly, harsh sanctions are rare in these cases.
We also know that people like to “bask in reflected glory,” to identify and be associated with winners, with the powerful, with rich and successful people or organizations. Consequently, we are more likely to make excuses for the powerful, or at least excuse our continued association with them, almost regardless of their behavior. When Dominique Strauss-Kahn appeared before the French parliament, one senator said, “His intellectual qualities are not at stake. We lack very good specialists on banks and tax evasion on the international level.” Almost inevitably, standards and sanctions differ depending on the power of the accused.
Fairly enforcing norms and standards
So, how do we develop processes that ensure that sanctions are more aligned with the severity of the harm someone inflicts?
First, we might try something that few people seem to do these days—ask how much harm a particular action has caused. If the behavior is tasteless but largely harmless, then instead of mounting some social media screed, leave the person alone. Unfocused, misdirected, and disproportionate anger isn’t good for anyone.
Second, because people are unlikely to take stern action against those who are like them, we should turn to outsiders who possess objectivity and don’t have social ties or similarity to the people accused. In the American judicial system, people are judged by juries of their peers, not juries of their friends or of others who share social identities with them, such as a common employer—and for good reason. The lack of quasi-judicial proceedings involving independent judges on college campuses and in corporate boardrooms makes it much harder to mete out punishments properly.
Third, because we know that there is a bias favoring the powerful, producing justice requires measures to redress this predisposition. One step might be to acknowledge the psychological factors that favor the powerful and remind observers of them, as a way to inoculate against this tendency. Another step would be to limit, in more formal proceedings, powerful individuals’ display of the accoutrements of their power.
A fourth solution derives from the well-known Gyges effect, the “disinhibition created by communications over the distances of the Internet” that comes from people being able to comment on and pass judgment on others anonymously. When people are accountable for their judgments and actions, they are more likely to behave responsibly. I have advocated that academic journal reviewers be identified and identifiable after the fact for precisely this reason. We need and deserve to know who sits in judgment.
None of these solutions is a panacea. But the disproportionate responses—both too much and too little punishment for infractions large and small—suggest we need to pay much more attention to this issue.
(This post was originally published on Fortune on February 19, 2015)
The post Why powerful people are rarely punished appropriately appeared first on Jeffrey Pfeffer.
February 16, 2015
The psychology behind the Brian Williams incident
In the last few days, the world has been surprised and seemingly horrified to learn that NBC news anchor Brian Williams embellished an account of an episode involving him in a helicopter in Iraq years ago. His career hangs by a thread: NBC News has opened an investigation into the incident, Williams has taken a temporary leave from the newscast, and many people are calling for him to lose his job.
Without excusing the act of making stuff up, and while acknowledging that people in the news business face a special responsibility to be as accurate as possible, I find the media frenzy disheartening. All people, but particularly people in power, tell lies all the time, lies that often do a lot more damage to others than what has occurred with Williams’ self-enhancing puffery (just ask the mayor of Paris about the damage caused by the false Fox News story that Paris had Muslim-dominated “no go” areas).
If only we held everyone to such high standards. But we don’t.
I became interested in the topic of not telling the truth as part of my research for a book coming out this fall, Leadership B.S.: Fixing Workplaces and Careers One Truth at a Time. The book focuses on the major disconnect between the prescriptions and expectations for leaders on the one hand and the reality of leaders’ behavior on the other. For instance, leaders are supposed to tell the truth. But, of course, leaders make stuff up all the time and apparently suffer few consequences for doing so.
I set out to learn what we know about lying—which is quite a bit—and how this knowledge may help us understand and possibly change our behavior.
Lying is common
Social psychologists report that college students tell a couple of lies a day on average and the general public tells one. In fact, lying is such an everyday occurrence that people do not feel embarrassed or ashamed for engaging in the behavior. Studies of online dating profiles note that 81% of people “misrepresent their height, weight, or age in their profiles,” although people tend to be somewhat more honest about their age.
People lie on their resumes. Depending on the particular analysis, somewhere between 53% and 78% of resumes contain falsifications, the most common untruth being inflated salary claims. Executives fib on their resumes often enough that Fortune ran a story on the top 10 falsification incidents. Even presidents lie: not just Bill Clinton about Monica Lewinsky, but also Abraham Lincoln. The story about George Washington confessing to his father that he chopped down the cherry tree to show that he could not tell a lie is, naturally enough, itself untrue.
People often tell untruths for benign reasons
People misrepresent their attributes on online dating sites and resumes to make themselves look better, not unlike the inflated story Williams told. Salespeople regularly misrepresent their company and competitors’ offerings to close deals. People lie to smooth over social relationships, even romantic and intimate interactions. Indeed, in a piece of pre-Valentine’s Day advice, author and professor Clancy Martin argued that “relationships last only if we don’t always say exactly what we’re thinking.”
Complete accuracy: It’s more difficult than it looks
Motivation affects our cognition and memory—and one of the most powerful motives is to feel and think better about ourselves. We are more likely to remember positive events than negative ones and to recall past occurrences with a more positive spin.
Even without the motive to think better of ourselves, research on eyewitness identification and witness reports of crimes and accidents reveals how frequently recollections are inaccurate. For instance, one study asked more than 70 convenience store clerks to identify from photographs two male customers who had been in their store two hours earlier. The clerks were able to identify the customers correctly only about one-third of the time.
As people tell and re-tell incidents from their past, sometimes in the process embellishing or adding details, over time the ability to distinguish what the people have added to the story from what was really true diminishes. Author Ben Dolnick noted how “the things you write begin to blend with, and then replace, the things you experienced.”
Selective moral outrage accomplishes very little
I cannot think of any evidence or logic suggesting that making Williams an “example” by ending his career will change the realities about truthtelling that I have briefly summarized. Moreover, the blurring of the line between news and entertainment and the selective presentation of information on what are ostensibly news shows diminish the validity of the idea that Williams committed some unforgivable sin. For instance, PolitiFact, the fact-checking website of the Tampa Bay Times, has found that less than one-quarter of the statements made on Fox News are true or mostly true, and even CNN makes statements that are true or mostly true less than one-half of the time.
In the end, the punditry and reaction to Williams and the helicopter story are both simplistic and hypocritical; simplistic in that there is little consideration of the frequency of and psychological foundations for this sort of behavior, and hypocritical in that many of those bemoaning Williams’ conduct do similar things themselves, maybe even frequently. To take just one example, the number of Taliban militants who attacked the Navy Seals in the incident recounted in Lone Survivor apparently grew over time as author Marcus Luttrell told and retold the story.
If not telling the truth is, as ample evidence shows, common and almost inevitable, and frequently done for benign reasons, maybe we should all be a little more understanding—and possibly even forgiving—of Brian Williams. Or at a minimum, given how pervasive prevarication is, at least remember the admonition that people who live in glass houses should not throw stones.
(This post was originally published on Fortune on February 9, 2015)
The post The psychology behind the Brian Williams incident appeared first on Jeffrey Pfeffer.