Marina Gorbis's Blog, page 1535

October 11, 2013

Make Your Innovative Idea Seem Less Terrifying

Four years ago, Craig Hatkoff, co-founder of the Tribeca Film Festival, approached me about a brainstorm: an event recognizing and celebrating breakthrough innovators. When I suggested to Clayton Christensen that we partner with Hatkoff to create the Tribeca Disruptive Innovation Awards, Clay’s response was: “I trust you, Whitney. If you say we should, let’s do it.” In 2010, the first year, the event was fledgling but charming. By 2013, we had honored Jack Dorsey of Twitter, Garrett Camp of Uber, famed choreographer Twyla Tharp, and “Gangnam Style” pop artist Psy.


How I wish that all my ideas received this kind of reception!  Rarely have I had that kind of immediate trust and social currency when proposing something new.  More often, I’ve experienced the opposite reaction:  what I consider genius ideas have been greeted with blank faces, disapproving stares, and occasionally the outright smackdown.


New ideas tend to evoke fear and anger – we are programmed to prefer the comfort and safety of established norms. Much as I want to believe that a glaringly good idea will stand on its merits, I have come to realize that just like any product or service, ideas require good marketing if they’re going to reach their intended customers.


Potential customers for our ideas have a predilection for thinking about what they are already thinking, for scaling the learning curve they are already on. When it comes to embracing a new idea, most will demur unless you can pack a parachute that will allow them to jump safely from their S-curve to yours. You create this parachute by presenting convincing data, demonstrating your own competence, speaking their language, and socializing your idea to overcome the ever-present fear factor. This becomes especially important within a large organization, where innovation is often perceived as a battle: the heroic disruptive David against the oafish bureaucratic Goliath, or a spy game requiring stealth.


Celine Schillinger sought to change the leadership landscape of Sanofi, a major pharmaceutical company. She loved her job, and with a background in public affairs and communications she had been successful in both international business and management roles. But as she began to consider her future at the company, she realized that all of the people above her were white, male engineers or accountants. She also believed that Sanofi’s competitive edge was at risk because of this narrow approach to talent management.


So she wrote a memo to the CEO explaining why gender balance is good for business. Initially there was no buy-in. But when her e-mail unexpectedly went viral after she’d shared it with a few colleagues, Schillinger became the leader of what has come to be known as WoMen in Sanofi Pasteur (WiSP), now the largest network across Sanofi, with 2,500 members in fifty countries. This might have backfired with an executive team that wasn’t as competent and open to discussion, or if she had gone about it in a different way — in either case, executives could have reacted as if she were going behind their backs. But both her tactful socialization of the idea and its contagious effect were self-validating. As the idea gained grassroots traction, the risks of buying in fell for senior management even as their respect for her expert stakeholder communication skills rose. This led to the HR VP brokering a meeting with the CEO and an invitation to make a formal presentation to the Executive Committee. By reducing the heightened sense of risk inherent in new ideas, Schillinger offered potential stakeholders a parachute for the jump into the unknown. Today, she’s the Head of Stakeholder Engagement for Sanofi’s in-development dengue fever vaccine — one of the company’s largest business initiatives.


Or consider Scott Heimendinger, who jumped from the role of program manager on the Excel team at Microsoft to director of applied research for Modernist Cuisine, a company dedicated to advancing the state of the culinary arts through the creative application of scientific knowledge and experimental techniques. Making the leap to a new career curve is a bold idea that also needs to be sold, and the importance of mitigating risk for the key decision-maker — the prospective employer — holds true. Because Modernist Cuisine founder Nathan Myhrvold was a former Microsoftie (the company’s first CTO and the founder of Microsoft Research), Heimendinger could immediately reduce the perceived risk of the hire by speaking their shared Microsoft language. But what really packed the parachute was Scott’s demonstrated competence on his Seattle Food Geek blog. And like Schillinger, Scott has the ability to socialize an idea; he recently wrapped a successful Kickstarter campaign for Sansaire, a startup he co-founded to produce a $199 sous vide cooker.


According to research on successful entrepreneurs, their single most important trait is the ability to persuade. Whether you’re an entrepreneur or an intrapreneur, unless your boss is as comfortable with disruption as Clay Christensen is, your ability to persuade is tightly linked to your ability to assuage fear. To get buy-in for any new idea, whether your customer is your manager, your direct reports, your teenage son, or the CEO, de-risking is essential. The ability to jump to a new vision or product or job almost always requires that those around us, our fellow stakeholders, also leap to a new learning curve. If you’re looking for a break for your breakthrough ideas, prepare to skydive: pack a parachute for yourself and your colleagues.



Executing on Innovation

An HBR Insight Center




Analysts Want You to Innovate, Except When They Don’t
When You’re Innovating, Think Inside the Box
How Good Management Stifles Breakthrough Innovation
Capturing the Innovation Mind-Set at Bally Technologies






Published on October 11, 2013 07:00

American Companies Should Stop Being Helicopter Parents

Why do American employers act like helicopter parents when it comes to their employees’ health care?


I’m sure you know helicopter parents – the kind who hover over their kids at all times to help them navigate their lives. This parenting style doesn’t typically produce children who can look out for themselves. The same applies to employer helicoptering. Most companies provide a very small number of health-plan options, with the result that workers don’t have to – and never learn to – make significant decisions about coverage.


I recently saw the benefits of greater personal accountability in Singapore, where I traveled as an Eisenhower Fellow. Way back in 1960 – when Singapore was still a developing nation focused on the most basic needs like housing and clean water – the government introduced user fees for clinic visits, recognizing that individuals need to feel invested in their health-care decisions.


Today all wage earners are required to put savings into an individual account to cover future health-care expenses, a scheme called Medisave. Although there are constraints on such things as minimum balances and applicable services, the money belongs to the individual, who decides how to spend it. Nearly a third of Singapore’s per-capita health-care spending comes directly out of Medisave accounts. While Health Savings Accounts have existed in the U.S. since 2003, only about 8% of Americans have one.


The genius in Singapore’s health-care system is that each individual’s stake in health-care financing is clearly visible. The former CEO of a major hospital group told me, “People think twice or three times about using services.” He assured me, though, that no one goes without. High-quality outcomes and indicators suggest that for the most part, he’s probably right.


Today Singapore spends only 4% of GDP on health care, versus 18% in the U.S., and just one-third of health care spending comes from the government, compared with 45% in the U.S.


Singaporeans get better value for their money, too. The country performs better than the U.S. on measures such as life expectancy, which has risen 10 years since 1980, and infant mortality, which is among the lowest in the world.


I don’t mean to suggest that U.S. businesses should stop providing health benefits. These benefits help attract and retain talent. However, businesses can assist their workers in becoming true consumers of their own health care by taking the following approaches:


Let employees do the shopping. Americans are quite good at shopping for most things, so why not let them shop for health insurance? Despite political rancor about health-care reform, a key feature – health insurance exchanges – provides employers new options. Under the Affordable Care Act, employers can send employees to an exchange to purchase health insurance. Employers pay the bill, but employees can choose their own plans from all or a subset of available plans. We’re all experts about our own needs and preferences, so it stands to reason that employees will make better decisions about which health plan is right for them.


Offer plans that align incentives, and help employees understand them. Copayments, deductibles, coinsurance, and tiered pricing are designed to give consumers incentives to make lower-cost decisions. But these features work only if people understand them. A recent survey conducted by a Massachusetts health plan revealed that more than half of respondents had no idea what coinsurance is. Among those on subsidized insurance, the proportion was 66%. When I mention that finding to people, they often say, “Actually, I don’t know what coinsurance is.” (It’s when the patient pays a percentage of the doctor or hospital charge, rather than a flat copayment).


Employers should educate employees about their cost-sharing responsibilities and help them find health-care providers who meet their needs and their budgets.


Push for transparency. Employees shouldn’t be expected to share in paying for health services without knowing the cost. As the primary payers, employers are well positioned to demand price transparency. To retain their customers, health plans will likely respond to those demands.


Encourage savings. Throughout Singapore, I heard about people’s tendency to “save for a rainy day” and avoid spending beyond their means. Singaporeans have, on average, the equivalent of more than $15,000 in their Medisave accounts. In 2011, the average American savings account had just $5,900 to cover a full range of household expenses.


Employers should encourage employees to save. They should make health-savings options available. They should nudge people by making those programs “opt-out” and try even small incentives to get people to save. Higher levels of savings would better enable employees to handle increased levels of financial responsibility and would give them a greater stake in their health-care spending.


There’s evidence that American companies are moving in this direction. Recently, Walgreens announced that it would begin offering benefits on a private health-insurance exchange, which would let employees make their own decisions about how to use health-care money. Indeed, one in four employers is contemplating moving to private exchanges.


Earlier this year, the Chicago Tribune did a piece about “free-range” parenting: Within a six-block area, kids are free to make their own decisions. Proponents cite benefits to children such as skill building in the areas of social decision-making, problem-solving, compromise, communication, and self-regulation.


So why not “free-range” health benefits? By providing education, tools, and encouragement, rather than telling employees what to do, employers may ultimately help drive down health costs. Even if cost reduction proves to be an elusive goal, consumers will be empowered to think for themselves about the most expensive benefit their employers provide.






Published on October 11, 2013 06:00

Sales Alert: Making Eye Contact May Not Be Such a Good Idea

After gazing at the eyes of speakers who were trying to persuade them, research participants showed an average attitude shift of just 0.14 on a seven-point scale, compared with 0.6 if they had stared at the speakers’ mouths, says a team led by Frances S. Chen of the University of British Columbia in Canada. This and another experiment show that contrary to popular belief, eye contact decreases the success of attempts at persuasion, at least in the cultural context of the European university where the study was conducted. Because direct gaze has evolved in many species to signal dominance, eye contact may provoke resistance to persuasion, the researchers suggest.






Published on October 11, 2013 05:30

Culture, Not Leverage, Made Wall Street Riskier

Over the summer, U.S. regulators announced new rules that would limit the leverage (ratio of debt or assets to equity) that the biggest U.S. banks can use in their business. The reasoning, backed by several respected scholars, is that leverage was a leading cause of the financial crisis. The argument makes intuitive sense, and leverage ratios provide a clear quantitative measure for regulators to monitor, so regulations have followed.


But leverage ratios aren’t the only risk factors that matter. Corporate incentives and culture may be even more important in explaining what changed on Wall Street in recent years, and by placing too much emphasis on quantitative ratios like leverage, we may be missing some other important parts of the problem.


I came to this conclusion after studying the culture of Goldman Sachs, where I previously worked for 12 years, as research for a sociology Ph.D. that has now grown into a book. Before then, I would have guessed that Goldman’s switch to becoming a public company had led it to take more risks, and that this would have been reflected in higher leverage over time. But I found out that this hypothesis was wrong — Goldman had been highly levered in its past as a private partnership, too.


The focus on finding something to blame (e.g. higher leverage) reminded me of the findings of the research into the Space Shuttle Challenger explosion done by one of my Columbia University sociology professors, Diane Vaughan. The loss of Challenger on Jan. 28, 1986 is usually blamed on a scientific design flaw in the O-rings used to seal parts of the spacecraft together. Vaughan, however, located the disaster’s roots in the nature of institutional life. Organizational characteristics — cultures, structures, politics, economic resources, their presence or absence, their allocation — put pressure on individuals to behave in deviant ways to achieve organizational goals. The design engineers kept taking incremental risks that they thought were acceptable and normalized them — until the disaster.


Leverage ratios might be the O-rings of the financial crisis. My research revealed that high leverage is nothing new for Wall Street firms, though the ratios have varied over time. In the early 1970s, for instance, the ratio of assets to equity for most firms was generally below 8-to-1. But in the 1950s, it sometimes exceeded 35-to-1. According to a 1992 study by the Government Accountability Office, the average leverage ratio for the top 13 investment banks was 27-to-1 during 1991 (up from 18-to-1 in 1990). At that 27-to-1 leverage ratio, only a 3.7% drop in asset prices would wipe out the equity of the bank. According to SEC filings, in 1998, the year before it went public, Goldman Sachs was leveraged at nearly 32-to-1, while in 2006 it was leveraged at 22-to-1. Other Wall Street firms have experienced similar leverage increases and decreases.
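The wipe-out arithmetic above is simply the reciprocal of the leverage ratio: at an assets-to-equity ratio of L, equity is 1/L of assets, so a price decline of 1/L erases it entirely. A minimal sketch using the ratios cited (the function name is my own, for illustration):

```python
def wipeout_drop(leverage_ratio: float) -> float:
    """Fractional fall in asset prices that erases all equity
    at a given assets-to-equity leverage ratio."""
    return 1.0 / leverage_ratio

# Ratios drawn from the figures cited in the text.
for label, ratio in [("1991 industry average (27:1)", 27),
                     ("Goldman Sachs 1998 (32:1)", 32),
                     ("Goldman Sachs 2006 (22:1)", 22)]:
    print(f"{label}: a {wipeout_drop(ratio):.1%} asset-price drop wipes out equity")
```

Note that 1/27 ≈ 3.7%, matching the figure in the text; the higher 1998 leverage implies an even thinner 3.1% cushion, while the 2006 ratio allowed a 4.5% drop.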


At Goldman Sachs, one element that was different in the lead-up to the financial crisis was not the amount of leverage but the constraints and incentives faced by partners. Before the IPO in 1999, partners of Goldman Sachs owned equity in a private partnership. When elected a partner, one was required to make a cash investment into the firm that was large enough to be material to one’s net worth. Each partner claimed a percentage ownership of the earnings every year, but it was a fixed percentage — limiting the incentives for risk-taking — and the majority of the earnings stayed in the firm. A partner’s annual cash compensation amounted to a small salary and a modest cash return on his or her capital account. A partner was not allowed to withdraw any capital from the firm until retirement, at which time the capital typically amounted to around 75% of one’s net worth. Even then, a retired partner could only withdraw his or her capital over a number of years. Finally, and perhaps most importantly, all partners had personal liability for the exposure of the firm, right down to their homes and cars. The result was an intense focus on risk, including risks related to ethical standards. As a partnership, each partner was financially interconnected with the others. They had to be very careful about their standards of behavior and the people they allowed into the partnership. One bad decision from a partner could cause all of them to face personal financial ruin.


Over time, though, Goldman Sachs grew from a relatively small group of financially interconnected partners to a publicly traded corporation in which compensation took the form of individual, predominantly discretionary performance bonuses plus stock that could be sold before retirement. The fixed percentages, financial interconnectedness, and personal liability are mostly gone. Relating back to Vaughan’s research on Challenger, organizational characteristics put pressure on individual behavior. The definition of acceptable risk and the consequences of the risk-taking changed over time.


This — not the leverage ratio, which was actually lower than it had been for most of the previous decade — was one of the key elements that made the Goldman Sachs of 2006 so different from the firm of the 1990s or 1980s or 1970s. It may be that cracking down on leverage is simply regulators’ crude way of trying to address issues of corporate incentives and culture. But leverage limits may have unintended consequences for capital markets’ competitiveness, innovation, growth, and efficiency. Understanding the potential problems related to culture, incentives, and pressures, and addressing them head-on, would make more sense.






Published on October 11, 2013 05:00

October 9, 2013

Why Kids — and Workers — Need to Get Their Hands Dirty

Children in the U.S. today interact much less with their physical environment than they used to. Few grow up building fences, designing go-karts, or tinkering with their cars anymore; vocational high schools have all but closed. What do kids today do instead? The Kaiser Family Foundation recently reported that 8- to 18-year-olds spend 53 hours a week engrossed in entertainment media.


So what? Who cares? Manufacturers, those companies that create physical products requiring a blend of high-tech electronics and physical components, do.  And if Americans want that sector of their economy to be strong, they should, too.


Take an aircraft maker like Boeing or a steel giant like Nucor.  When the engineers and operators they hire lack real-world building experience, the organization has to teach them. Sure, these young people can fashion incredible structures in Minecraft and design and test products digitally, but many are out of touch with the physical world, what we might think of as tactile intelligence. Many have no practiced knowledge about how metal or plastic bends, breaks, retains heat or burns, no practical understanding of how to limit size for fuel efficiency while allowing enough space for technicians to reach inside and connect components. If you haven’t physically handled and experimented with woods, metals, plastics, it’s difficult to imagine how to engineer an airplane wing that can, for example, keep bending to 140% of its maximum load without damage, and only fail beyond that.


Manufacturers are therefore faced with a daunting choice: outsource the design and testing to countries where tactile intelligence is still high, or fill the knowledge gap of new hires in the U.S. The first solution is problematic since competitive advantage in manufactured products often stems from design, and out-sourcing can endanger the retention of core capabilities and limit innovation. The second solution requires time and energy but we would argue that it’s worth the investment.


Boeing’s “Opportunities for New Engineers” program, for example, allows employees to physically create aerospace products. The challenge might be to build a wood/composite miniature airplane from design through development, building, testing, and flying. The finished product must meet stringent and practical requirements:


Dual Mission: Cargo/Reconnaissance
Weight: 20 lbs empty; Wingspan: 16 ft
Payload: 5 lb payload, 5 Watts supplied to payload
Endurance/Range: 10 hour full-sun/1 hour no-sun (cruise) endurance, 10 mile range
Single battery charge (full-daylight)
90% of (cruise) power from solar
Transportable by car.
Assemble by two people.
Recurring cost: $X
Non-recurring cost: $Y



With senior engineers as mentors, participants learn by doing. They see and feel how the parts physically fit or don’t.  They understand the touch and finesse needed to bend the wing and the physical strength of a thick versus a thin cross section.  They see where years of engineering theory clash with harsh realities.  And these experiences lead to a better understanding of design, which translates to better engineering. Forcing young engineers to understand the balance of cost and technical excellence helps drive many to find more efficient solutions.


The “Opportunities for New Engineers” program’s most ambitious project, however, is to Build, Certify and Fly (BCnF) a Glasair Super II airplane. A 32-person team of mostly engineers will do everything from procuring and installing the avionics package and power plant to developing and performing the flight test program. Once certified in early 2014, the airplane will be utilized by the Boeing Employees Flight Association (BEFA) during airshows as a flight demonstrator and for training purposes.


Expertise is built through practice. The more time our “digital native” kids spend on entertainment media, the more we lose the tactile intelligence critical to design and manufacture physical products.   So let’s encourage children to start physically building and tinkering again. Let’s encourage schools to let students dirty their hands in projects and experiments. Let’s close the gaps we already see in the next generation of workers. And let’s enable the tactile intelligence we need to remain competitive in our growing global marketplace.






Published on October 09, 2013 11:00

How to Listen When Your Communication Styles Don’t Match

Why do people who consider themselves good communicators often fail to actually hear each other? Often it’s due to a mismatch of styles: To someone who prefers to vent, someone who prefers to explain seems patronizing; explainers experience venters as volatile.


This is why so many of us see our conversational counterparts as lecturing, belaboring, talking down to us, or even shaming us (if we are venters and they are explainers) or as invasive, out of control, and overly emotional (if we’re an explainer and they’re a venter).


Facing this kind of mismatch, what do you think the chances are for either person actually listening with an open mind?


My answer is… very low.


It is tempting to say “zero,” but since it’s not possible (or even desirable) to work only with people who match your communication style, you need to develop the skill to try to listen around their communication style.


Listening around that style, however, can be incredibly effortful. When someone is either venting/screaming or explaining/belaboring, it triggers a part of your middle emotional brain called the amygdala, which desperately wants to hijack your attentive listening and react reflexively with whatever your hardwired responses are. And resisting that amygdala hijack is exhausting.


What to do with a venter/screamer


If your conversational counterpart is a venter/screamer, your hardwired survival coping skill might be to tell them to calm down (which will only make them more upset), to shut down and get silent (which will only make them yell longer, because they’ll think you’re not listening), or to try to point out how irrational venting is (which, as noted above, they will perceive as patronizing and belaboring).


Instead, say to yourself, “Okay, here comes another temper tantrum.  Just let them blow.  Try not to take it between the eyes and imagine you’re looking into the calm eye of a hurricane and the storm is going over your shoulder.”


To do this, focus on their left eye. The left eye is connected to the right brain — the emotional brain.  Let them finish. Then say, “I can see you’re really frustrated. To make sure I don’t add to that,  and to make sure I don’t miss something, what was the most important thing I need to do in the long term, what’s the critical thing I need to do in the short term, and what do I need to get done ASAP?” Reframing the conversation this way, after they’ve finished venting, will make sure that your “explainer” self knows what to do – instead of ignoring the venting as another random outburst from “Conan the Barbarian” or “the Wicked Witch of the West.” Chances are, they do have something important they’re trying to tell you – even though they’re not communicating it very well.


After they respond, say to them, “What you just said is way too important for me to have misunderstood a word, so I’m going to say it back to you to make sure I am on the same page with you. Here’s what I heard.” Then repeat exactly, word for word, what they said to you.  After you finish, say to them, “Did I get that right and if not, what did I miss?” Forcing them to listen to what you said they said, “because it was important,” will slow them down, will help you stay centered and in control, and will earn you their and your own respect.


What to do with an explainer/belaborer


If your conversational counterpart is an explainer, your hardwired survival coping skill might be to say to yourself,  “Here they go again, make sure you smile politely even if you want to pull your hair out. Try not to let your impatience and annoyance show.” The problem with this is that even though they may be oblivious to others as they go on and on, at some level they may be aware of your underlying impatience and… that might actually make them talk longer. Yikes.


Realize that the reason they explain and belabor things is probably that their experience is that people don’t pay attention to what they say. They don’t realize that while that may be true of some truly distracted people, for others, the reason they don’t pay attention is that the speaker is belaboring something the listener has already heard — and doesn’t want to hear over and over again. Another possibility is that these explainers may not feel listened to somewhere else in their lives (by their spouse, kids, parents, or boss) and are now relieved to have you as a captive audience.


When the explainer goes into his explanation/lecture/filibuster, say to yourself, “Okay, this is going to take a while.” Put a mental bookmark in whatever you were working on. Then look them in their left eye with a look that says, “Okay, take your time, I’m fully listening.” Instead of feeling frustrated and reacting by becoming impatient and fidgety, remind yourself, “They need to do this. I can be patient.”


Then, when they finish, apply a response similar to the one for the venter/screamer, with the following minor edit:


“I can see that you really had a lot that you had to say. To make sure I don’t miss something, what was the most important thing I need to do in the long term, what’s the critical thing I need to do in the short term, and what do I need to get done ASAP?”


After they respond to that, say to them, “What you just said is way too important for me to have misunderstood a word, so I’m going to say it back to you to make sure I am on the same page with you. Here’s what I heard.” Then repeat exactly, word for word, what they said to you.  After you finish, say to them, “Did I get that right, and if not, what did I miss?”


Your amygdala is probably saying to you and to me, “I don’t want to do either of those things.  These people are obnoxious and unreasonable. Why should I kowtow to them?”


Here are several reasons:



They aren’t likely to change. These are deeply ingrained personality traits.
Being more open and inviting them to talk rather than closed and resistant will lessen their need to act this way. Listening patiently hath charms to soothe the savage (or boring) beast.
You will feel more self-respect and self-esteem. The above approaches will enable you to remain cool, calm, collected, centered, and communicative in situations that formerly frustrated you and made you react poorly.





Published on October 09, 2013 10:00

Doubts About Pay-for-Performance in Health Care

While health spending in the United States far surpasses that in other industrialized nations, the quality of care in the US is no better overall, and on several measures it is worse. This stark fact has led to a wave of payment reforms that shift from rewarding volume (as fee for service does) to rewarding quality and efficiency. Such pay-for-performance schemes seem to be common sense and are now widely used by private payers and Medicare. But astonishingly, there’s little evidence that they actually improve quality.


What do we really know about the effectiveness of using financial incentives to improve quality and reduce costs in health care? There is robust evidence that health care providers respond to certain financial incentives: medical students have a higher demand for residencies in more lucrative specialties, physicians are more likely to order tests when they own the equipment, and hospitals seek to expand care for profitable services at the expense of unprofitable services. It would seem that increasing payment for high-quality care (and, conversely, lowering payment for low-quality care) is an obvious way to improve value in health care. But evidence suggests that health care is no different from other settings where similar payment incentives have been tried, such as education and private industry. Not only do these payment policies often fail to motivate the desired behaviors, they may also encourage cheating or other unintended responses.


Overall, evidence of the effectiveness of pay-for-performance in improving health care quality is mixed, without conclusive proof that these programs either succeed or fail. Some evaluations of pay-for-performance programs have found that they can modestly improve adherence to evidence-based practice.


There is little evidence, however, that these programs improve patient outcomes, suggesting that to the extent that health care providers have responded to pay-for-performance programs, that response has been narrowly focused on improving the measures for which they are rewarded — such as making sure patients receive recommended blood tests if they have diabetes or the right cocktail of medications if they are hospitalized with a heart attack. Although these measures are important for patient care, it may take a full reengineering of the health care delivery system to broadly improve patient outcomes.


Despite considerable concern about unintended consequences in these programs, so far the adverse effects have been relatively minor, with little evidence that providers are avoiding high-risk or disadvantaged patients, gaming the measures to improve their apparent performance, or ignoring areas of care that are not financially rewarded. The lack of evidence for unintended effects is perhaps not surprising, given the limited evidence of the intended effects of these programs, though concerns remain that as pay-for-performance incentives become stronger, and perhaps more effective, evidence of cheating may surface.


Given the wide adoption of pay-for-performance programs, it’s surprising that even after extensive research, very little is known about how their design — including what outcomes are rewarded, the optimal size of incentives, and the criteria for payment (e.g., quality achievement or quality improvement) — affects provider behavior. Also, because nearly all of the evidence comes from programs that reward quality, we know almost nothing about whether pay-for-performance can improve efficiency or lower costs.


Experience with pay-for-performance in health care and other settings shows that these programs are hard to design. The best combination of performance measures, organizational level of accountability, criteria for payment, and incentive size is not obvious, and unintended consequences are common. 


Pros and Cons for Pay-for-Performance in Health Care


To be effective, pay-for-performance may require smarter incentives that take advantage of the cognitive biases that skew decision making, such as loss aversion. For example, evidence from a randomized trial in Chicago schools found that student math scores improved if teachers were paid in advance and forced to repay bonuses if an improvement standard was not achieved. Scores did not improve if teachers stood to receive incentives only after scores met improvement standards.


Other insights from psychology and behavioral economics may also have the potential to make pay-for-performance programs more effective. For example, another possible reason for the disappointing response to these programs in health care is that targeted provider behavior is more likely to be intrinsically motivated (driven by the desire to reduce suffering, for example) and thus less likely to respond to external incentives such as payment. Examining the effects of pay-for-performance in other sectors underscores the difficulty of using performance pay for intrinsically motivated workers — and the pitfalls of trying.


Recent critiques of productivity pay claim that extrinsic incentives are effective only in situations in which tasks are routinized and narrowly defined, leaving workers with little intrinsic motivation, such as windshield installation. This view is supported by copious evidence from social science that financial rewards for intrinsically valuable activities – including performance in school, sports, and interesting work activities – undermine motivation and can decrease task performance. Perhaps not surprisingly, there is little evidence that pay-for-performance has been effective in U.S. education: a major program in New York City – covering 20,000 teachers and $75 million – proved to be a high-profile failure.


Dan Ariely and colleagues have also argued that for professionals working in situations where there is uncertainty about the relationship between inputs (such as the choice of diagnostic tools, reperfusion therapies, and discharge planning for patients admitted with acute coronary syndrome) and outputs (such as 30-day mortality), performance contracts cannot be sufficiently detailed to reward optimal practice in all circumstances. As a result, pay-for-performance can divert attention from the big picture and toward a myopic focus on meeting the performance goals that are typically defined in these contracts. Thus, even if we had pay-for-performance programs with smarter designs, it remains unclear whether we could overcome the fundamental problems associated with incentive contracts directed at narrow goals for intrinsically motivated activities.


Pay-for-performance was brought to health care to address a real problem: the suboptimal quality of our health care given our levels of spending. In the face of perverse financial incentives, health care providers’ intrinsic motivation to deliver quality has not been enough to provide sufficiently high-quality, high-value care in the United States. The root of these problems, however, may lie in system failures, not the failures of individual providers. While health care providers want to help the patient in front of them, they may not feel obligated (or have the incentive) to solve system-level problems stemming from factors they feel are outside their control. One potential solution lies in broader health reform, such as global payment for populations rather than piece-rate bonuses for individual patients. Coupled with public quality reporting, global payment reform has the potential to expand the scope of provider accountability, take advantage of providers’ intrinsic motivation, and improve population health. Such efforts may hold more promise for value improvement in US health care than attempts to exploit providers’ extrinsic motivation through tweaks to fee-for-service payment.


Follow the Leading Health Care Innovation insight center on Twitter @HBRhealth. E-mail us at healtheditors@hbr.org, and sign up to receive updates here.



Leading Health Care Innovation

From the Editors of Harvard Business Review and the New England Journal of Medicine




Published on October 09, 2013 09:00

Don’t Let Them Steal Your Inventions

On March 18, 2010, an Apple engineer left what looked like an iPhone 3GS in a German beer garden in Redwood City, California.  Another patron later picked it up from a barstool.  The next morning, the phone didn’t work (having been disabled remotely), but the finder realized the device looked a bit odd.  It had a camera in the front and the exterior felt different.  He was able to remove the exterior, revealing a shiny prototype for the new iPhone 4 – a product Apple wasn’t intending to announce for months.


Up until that March evening, Apple had been notoriously successful at concealing its new designs.  Like clockwork, it would wait until just before unveiling a new product design to file a corresponding design patent application.  For instance, Apple filed applications for the original iPhone only four days before it was announced in 2007; for the original iPod in 2001, the filing was one day before release.


The finder of the iPhone 4 tried calling Apple to return the phone, but no one called him back.  About a month later, he sold the device to a website, which disassembled it, took pictures, and posted them on the Internet.  By that time, Apple might have assumed the prototype was simply lost.  But after the photos were posted, its lawyers jumped into action.  That same day, Apple sent a letter to the website asking for its property back and filed a design patent application with the U.S. Patent and Trademark Office.  Filed at 11:55 pm that night, U.S. Design Patent D627,778 eventually issued covering the design of the iPhone 4.  All’s well that ends well.


Had this story played out in the past few weeks, it might not have had the same happy ending.  On September 16, 2011, the America Invents Act (AIA), a major modification to the Patent Act, was signed into law – a modification that makes the kind of “public disclosure” the iPhone 4 experienced a real impediment to an inventor’s securing a patent.


Prior to the AIA, the United States had a one-year grace period for all activities, including sale, use, and public disclosure.  In other words, a design could be shown, used, or sold and the inventor could still secure patent rights, provided that the application was filed within one year of the disclosure.  (Even outside that one-year period, exceptions could apply if an inventor displayed a design before that time for experimental purposes.)  This grace period is why Apple was still able to secure rights in the iPhone 4 and, importantly, also preserve rights in foreign countries.


As part of the AIA, on March 16, 2013, the United States adopted a first-inventor-to-file regime. Under this regime, a public disclosure (including publication, sale, or public use of a complete product design without filing for design patent protection beforehand) will constitute a dedication of that design to the public, including competitors.  The AIA includes a limited one-year grace period for certain disclosures by the inventor or obtained from the inventor; however, the exact boundaries of this grace period are uncertain, and the federal courts will take many years to define them.


If the iPhone 4 scenario occurred today, under the new AIA, there would be many questions with unknown answers.  For instance, did the finder or the website “obtain” the design from the inventor at Apple?  Or does the fact that the engineer lost the prototype in public somehow break the disclosure chain back to the inventor?  Perhaps the attempt to disguise the design means that some “experimental use” exception should apply?


At the application stage, such decisions will be in the hands of the PTO examiner.  It will be the patent applicant’s burden to prove that the grace period should apply and that a patent should issue despite a public disclosure.  It’s worth noting that appeals from an adverse decision by the examiner can take as much as four years.  Thus a company could be forced to decide whether it should risk investing in a design that it may not own and anyone could use.  Should a patent be granted and these types of disclosures come to light later, an accused infringer would surely raise similar issues in litigation.  Uncertainties could drag on for years.


While it may seem unfair, unauthorized public disclosures have destroyed patent rights in the past.  The PTO has invalidated design patents based on photographs taken without permission.  Automobile trade magazines have a long history of covertly photographing new car designs on the test track and publishing the photos.  In 2010, the PTO found a Ford design patent for a truck grille was obvious (and thus not patentable) in view of a poorly-lit, partially obstructed view from a spy photo published in a trade magazine.


Spy photos and lost prototypes are only the tip of the iceberg.  Typically, the launch of a new product design includes market research, independent testing, previews to select retailers and journalists, and discussions with suppliers about how to manufacture the proposed design.  While some of these activities may be protectable under confidentiality agreements, in light of the AIA, this area of the law is far from clear.


Companies that rely on innovative product design to separate themselves from their competitors must manage these risks effectively.  It is understandable that some are reluctant to file a design patent application while a design may not be final, preferring to wait until market research or testing is completed.  It may seem unnecessary to spend money to patent designs that might not be used in a final product.  But waiting to patent until a design is “tweaked” may create more problems than it solves.


Under the AIA, if a design is released to the public (and not later patented), then the public design could be used to deny a patent to the later “tweaked” design.  (When the tweaked design is not “different enough,” the PTO may deem it an obvious variant of the original design.)  But a company’s own pending designs cannot be used against it, provided the new designs are filed before the pending designs are published (which, for design patents, currently occurs when the patent issues).


In the post-AIA world, the best practice is to follow Apple’s standard procedure and apply for a design patent before any type of release outside the company.  And if you’re considering releasing more than one possible design, file design applications on all of them in order to avoid creating problems later.  The additional upfront costs will be negligible compared to the costs – and time, and angst – of replacing an unprotected design later on.






Published on October 09, 2013 08:00

Beat the Odds in Cross-Border Joint Ventures

It’s proving to be an eventful year for AirAsia, the Kuala Lumpur-based airline that has emerged as Asia’s most successful low-cost carrier in recent times.  The last 12 months have seen the collapse of AirAsia Japan, a once-promising joint venture between AirAsia and Japan’s ANA, and the birth of AirAsia India, an alliance between the company and India’s Tata Group.


AirAsia’s experience is instructive.  The history of joint ventures is filled with stories about failure.  As happened at AirAsia Japan, partners often find it difficult to reconcile their views about how they should manage a new venture.  Even if strategic visions align, cultural differences and the inability to build trust often torpedo partnerships.


Despite the low odds of success, though, the urge to set up joint ventures remains strong.  That’s either because government regulations dictate joint ventures — for example, in the auto sector in China and in multi-brand retailing in India — or because two companies believe they need each other’s complementary strengths, as in the case of AirAsia Japan.  It’s therefore important for corporate leaders to be smart about how they can improve the odds of success.  Five contemporary guidelines:



Define a joint venture’s charter narrowly.  Doing so provides focus, reduces complexity, and enables companies to collaborate with different partners to meet their goals.  When Honda entered India in the early 1990s, the Japanese company struck three focused alliances: one with the Hero Group for low-end motorcycles, one with Siel for cars, and a third with Siel for portable generators.
Choose a partner that embodies a low risk of conflict in the long run.  The chances of breaking up are high if partners’ long-term ambitions are in conflict, and each sees the joint venture as a stepping-stone to learn from the other before competing with it.  Several joint ventures in China, such as the alliance between General Motors and Shanghai Auto, are beset by this underlying tension.  AirAsia has made a smart choice by tying up with the Tata Group; that alliance is high on complementarities and low on conflicts.
Allocate decision rights based on the context and logic.  Who has the final say in functional areas, such as R&D, operations, and human resources, does matter.  For instance, in Japan, ANA ceded control to AirAsia on key decisions such as customer service levels.  Given the differences between the expectations of the Japanese low-cost traveler and his counterpart in the rest of Asia, it may have been smarter for ANA to have retained the final call on those decisions.
Consciously over-invest in building mutual understanding and trust.  All joint ventures are mixed-motive games; value creation requires cooperation, while value capture requires focusing on what’s best for one’s shareholders.  Since it isn’t feasible to anticipate every contingency and build it into a contract, it’s important that partners focus their efforts on cultivating mutual understanding and trust.  An excessive or premature focus on value capture will leave them fighting over the crumbs instead of striving to make the pie bigger.
Agree upfront on the terms that will guide a break-up.  As happened at AirAsia Japan, all joint ventures eventually end.  Upfront clarity on how the end game will play out often has unintended positive consequences.  It will help partners devote their efforts to the motive that brought them together in the first place, viz. to maximize the synergistic benefits from their complementary strengths.

After all, the partners in a relationship usually realize intuitively when to end it.  What they don’t know is how to make a joint venture work.






Published on October 09, 2013 07:00

Let Them Eat MOOCs

One late afternoon last spring I received a visit from a former student and budding entrepreneur. I usually schedule these meetings at the end of the workday. It feels like a treat, witnessing aspiration and insight blend into leadership to create something new.


Luis (not his real name), however, had not come to see me for leadership advice. He had come to pitch his tech startup and ask for my involvement.


The venture, he explained, would contribute to the ongoing disruption and reinvention of business education and allow anyone anywhere — not just those as fortunate as himself — to have access to my teaching and insights online, for free.


While I would not be compensated, I’d have the opportunity to reach a broader audience and to be at the front — and on the right side — of the online revolution in education. I would become a better teacher, help democratize management learning, and secure my own and my school’s place among the survivors and beneficiaries of digital disruption.


I had heard all those arguments before. Reach. Scale. Efficiency. Democratization.  This was my third such conversation in six months, including one with a pioneer of Massive Open Online Courses (MOOCs), the first wave of a digital tsunami headed towards the shores of higher education.


When I pointed out that I already share and discuss ideas freely online, in this blog and on Twitter, Luis beamed. That was why he had reached out, he said.


Apparently I have the right profile for a MOOC professor. I’m young enough to be threatened, good enough to be useful, and tech savvy enough to be interested. (Perhaps also vain enough to be flattered.) My fondness for the Internet as a public agora is surely a sign that I want it to become my open classroom as well.


Actually, no. It isn’t. When it comes to joining this battle I declare myself a conscientious objector.


Mind you, I am not unsympathetic to the argument for MOOCs and their derivatives — that many people who need knowledge and skills don’t have the resources to acquire them in those expensive and inefficient bundles called “universities.” Nor am I blind to the problems facing business schools and higher education at large, or lacking in my enthusiasm for technology. I am not immune to flattery either.


I can easily concede that for many topics, the right numbers and platform may foster online learning and interactions as meaningful as those that take place in the average classroom or seminar room, especially for students and faculty accustomed to living part of their social lives online. And I believe that the conscious intent of MOOC proselytizers is altruistic.


However, as the Princeton sociologist who discontinued his popular MOOC illustrated, if you are a prominent faculty member at an elite university the idealistic prospect of spreading free knowledge to the masses may distract you from pondering your MOOC’s more troublesome potential social consequences.


MOOCs can be used as a cost-cutting measure in already depleted academic institutions and become another weapon against battered faculty bodies. They may worsen rather than eliminate inequality by providing credentials empty of the meaning and connections that make credentials valuable.


Worst of all, they may become a convenient excuse for giving up on the reforms needed to provide broad access to affordable higher education. The traditional kind, that is, which for all its problems still affords graduates higher chances of employment and long-term economic advantages.


Seen from this perspective, the techno-democratization of education looks like a cover story for its aristocratization. MOOCs aren’t digital keys to great classrooms’ doors. At best, they are infomercials for those classrooms. At worst, they are digital postcards from gated communities.


This is why I am a MOOC dissenter. More than a revolution, so far this movement reminds me of a different kind of disruption: colonialism.


Given the resources and players involved in producing and praising MOOCs, it’s hard to argue that this is a case of enterprising outsiders toppling a complacent establishment. (Do you see any “outsiders” in this galaxy of MOOC funders?) It is far more similar to colonialism, that is, disruption brought about by “the policy and practice of a power in extending control over weaker people or areas” and simultaneously increasing its cultural reach and control of resources.


All educational institutions have a dual social function: to develop individuals and to develop culture. Sometimes development involves affirmation. Sometimes it involves questioning and reform.


All education therefore involves both training and socialization. The knowledge one acquires is not just concepts and skills to become a good employee but also values and mores to become a good citizen — of a society or an enterprise.


This is as true of the liberal arts college as it is of the professional school, corporate university or online diploma factory.


Colonialism is a particular kind of socialization. It involves educating communities into the “superior” culture of a powerful but distant center by replacing local authorities or co-opting them as translators. A liberating education, on the other hand, makes students not just recipients of knowledge and culture but also owners, critics, and makers of it.


While they claim to get down to business and focus on training only, MOOCs do their fair share to affirm and promulgate broader cultural trends, like the rise of trust in celebrities’ authority, the cult of technology as a surrogate for leadership, and the exchange of digital convenience for personal privacy.


The idea that we should have access to anything wherever and however we want it, for free, in exchange for the provider’s opportunity to use and sell our online footprint to advertisers or employers, is the essence of digital consumerism. This is the culture that MOOCs are born of and reinforce in turn.


Even the fabled personalization that digital learning affords is really a form of mass customization. There is no personal relationship. It is a market of knowledge where no one is known and care is limited to the provision of choices.


Whether its crusaders are venture capitalists, entrepreneurs, academics, or students, the colonizer is a transactional view of education, centered on knowledge as a commodity, which displaces a relational view of education, centered on development through relationships. Relational education, in turn, becomes, like all precious resources of colonial territories, no longer a common good but a privilege of the leisured.


Luis nodded pensively when I pointed out that his venture could turn a job like mine and an education like his into even more of a privilege. So I asked him what he thought might happen when companies like his finished disrupting my profession.


Ultimately a teacher is a sophisticated search and social technology, he explained, in a crescendo of techno-utopianism. What we do is make judgments as to what knowledge is interesting and useful, and order it in ways that make it accessible. We also broker connections through admissions and recruitment. There is no reason why an algorithm could not do all that someday.


I envisioned myself walking to a digital guillotine in tattered academic garb, whispering, “Let them eat MOOCs.” Luis laughed. I asked one last question.


Why would I want to help him make my job irrelevant? Because of legacy, he answered excitedly. I’d be proud that I was one of the people who taught the algorithm to think.


I’d rather keep going with humans.







Published on October 09, 2013 06:00
