Marina Gorbis's Blog, page 1527
October 9, 2013
Why Kids — and Workers — Need to Get Their Hands Dirty
Children in the U.S. today interact far less with their physical environment than they used to. Few grow up building fences, designing go-karts, or tinkering with cars anymore; vocational high schools have all but disappeared. What do kids do instead? The Kaiser Family Foundation recently reported that 8- to 18-year-olds spend 53 hours a week engrossed in entertainment media.
So what? Who cares? Manufacturers, the companies that build products blending high-tech electronics with physical components, do. And if Americans want that sector of their economy to be strong, they should, too.
Take an aircraft maker like Boeing or a steel giant like Nucor. When the engineers and operators they hire lack real-world building experience, the organization has to teach them. Sure, these young people can fashion incredible structures in Minecraft and design and test products digitally, but many are out of touch with the physical world and lack what we might call tactile intelligence. Many have no practiced knowledge of how metal or plastic bends, breaks, retains heat, or burns, and no practical understanding of how to limit size for fuel efficiency while leaving enough space for technicians to reach inside and connect components. If you haven’t physically handled and experimented with woods, metals, and plastics, it’s difficult to imagine how to engineer an airplane wing that can, for example, keep bending to 140% of its maximum expected load without damage and fail only beyond that point.
Manufacturers are therefore faced with a daunting choice: outsource the design and testing to countries where tactile intelligence is still high, or fill the knowledge gap of new hires in the U.S. The first solution is problematic because competitive advantage in manufactured products often stems from design, and outsourcing can endanger the retention of core capabilities and limit innovation. The second requires time and energy, but we would argue that it’s worth the investment.
Boeing’s “Opportunities for New Engineers” program, for example, allows employees to physically create aerospace products. The challenge might be to build a wood/composite miniature airplane from design through development, building, testing, and flying. The finished product must meet stringent and practical requirements:
Dual Mission: Cargo/Reconnaissance
Weight: 20 lbs empty; Wingspan: 16 ft
Payload: 5 lbs, with 5 watts supplied to the payload
Endurance/Range: 10-hour full-sun / 1-hour no-sun (cruise) endurance; 10-mile range
Single battery charge (full daylight)
90% of (cruise) power from solar
Transportable by car
Assembled by two people
Recurring cost: $X
Non-recurring cost: $Y
With senior engineers as mentors, participants learn by doing. They see and feel how the parts physically fit or don’t. They understand the touch and finesse needed to bend the wing and the physical strength of a thick versus a thin cross section. They see where years of engineering theory clash with harsh realities. And these experiences lead to a better understanding of design, which translates to better engineering. Forcing young engineers to understand the balance of cost and technical excellence helps drive many to find more efficient solutions.
The “Opportunities for New Engineers” program’s most ambitious project, however, is to Build, Certify and Fly (BCnF) a Glasair Super II airplane. A 32-person team, mostly engineers, will do everything from procuring and installing the avionics package and power plant to developing and performing the flight-test program. Once it is certified in early 2014, the airplane will be used by the Boeing Employees Flight Association (BEFA) as a flight demonstrator at airshows and for training.
Expertise is built through practice. The more time our “digital native” kids spend on entertainment media, the more we lose the tactile intelligence critical to designing and manufacturing physical products. So let’s encourage children to start physically building and tinkering again. Let’s encourage schools to let students dirty their hands in projects and experiments. Let’s close the gaps we already see in the next generation of workers. And let’s build the tactile intelligence we need to remain competitive in a growing global marketplace.




How to Listen When Your Communication Styles Don’t Match
Why do people who consider themselves good communicators often fail to actually hear each other? Often it’s due to a mismatch of styles: To someone who prefers to vent, someone who prefers to explain seems patronizing; explainers experience venters as volatile.
This is why so many of us see our conversational counterparts as lecturing, belaboring, talking down to us, or even shaming us (if we are venters and they are explainers) or as invasive, out of control, and overly emotional (if we’re an explainer and they’re a venter).
Facing this kind of mismatch, what do you think the chances are for either person actually listening with an open mind?
My answer is… very low.
It is tempting to say “zero,” but since it’s not possible (or even desirable) to work only with people who match your communication style, you need to develop the skill to try to listen around their communication style.
Listening around that style, however, can take tremendous effort. When someone is venting/screaming or explaining/belaboring, it triggers a part of your middle, emotional brain called the amygdala, which desperately wants to hijack your attentive listening and react reflexively with whatever your hardwired reactions are. And resisting that amygdala hijack is exhausting.
What to do with a venter/screamer
If your conversational counterpart is a venter/screamer, your hardwired survival coping skill might be to tell them to calm down (which will only make them more upset), to shut down and get silent (which will only make them yell longer, because they’ll think you’re not listening), or to try to point out how irrational venting is (which, as noted above, they will perceive as patronizing and belaboring).
Instead, say to yourself, “Okay, here comes another temper tantrum. Just let them blow. Try not to take it between the eyes and imagine you’re looking into the calm eye of a hurricane and the storm is going over your shoulder.”
To do this, focus on their left eye. The left eye is connected to the right brain — the emotional brain. Let them finish. Then say, “I can see you’re really frustrated. To make sure I don’t add to that, and to make sure I don’t miss something, what was the most important thing I need to do in the long term, what’s the critical thing I need to do in the short term, and what do I need to get done ASAP?” Reframing the conversation this way, after they’ve finished venting, will make sure that your “explainer” self knows what to do – instead of ignoring the venting as another random outburst from “Conan the Barbarian” or “the Wicked Witch of the West.” Chances are, they do have something important they’re trying to tell you – even though they’re not communicating it very well.
After they respond, say to them, “What you just said is way too important for me to have misunderstood a word, so I’m going to say it back to you to make sure I am on the same page with you. Here’s what I heard.” Then repeat exactly, word for word, what they said to you. After you finish, say to them, “Did I get that right and if not, what did I miss?” Forcing them to listen to what you said they said, “because it was important,” will slow them down, will help you stay centered and in control, and will earn you their and your own respect.
What to do with an explainer/belaborer
If your conversational counterpart is an explainer, your hardwired survival coping skill might be to say to yourself, “Here they go again, make sure you smile politely even if you want to pull your hair out. Try not to let your impatience and annoyance show.” The problem with this is that even though they may be oblivious to others as they go on and on, at some level they may be aware of your underlying impatience and… that might actually make them talk longer. Yikes.
Realize that explainers probably belabor things because their experience is that people don’t pay attention to what they say. They don’t realize that while that may be true of some genuinely distracted listeners, others tune out precisely because the speaker is belaboring something they have already heard and don’t want to hear over and over again. Another possibility is that these explainers don’t feel listened to somewhere else in their lives (by a spouse, kids, parents, or boss) and are now relieved to have you as a captive audience.
When the explainer goes into his explanation/lecture/filibuster, say to yourself, “Okay, this is going to take a while.” Put a mental bookmark in whatever you were working on. Then look them in their left eye with a look that says, “Okay, take your time, I’m fully listening.” Instead of feeling frustrated and reacting by becoming impatient and fidgety, remind yourself, “They need to do this. I can be patient.”
Then, when they finish, apply a response similar to the one for the venter/screamer, with the following minor edit:
“I can see that you really had a lot to say. To make sure I don’t miss something, what was the most important thing I need to do in the long term, what’s the critical thing I need to do in the short term, and what do I need to get done ASAP?”
After they respond to that, say to them, “What you just said is way too important for me to have misunderstood a word, so I’m going to say it back to you to make sure I am on the same page with you. Here’s what I heard.” Then repeat exactly, word for word, what they said to you. After you finish, say to them, “Did I get that right, and if not, what did I miss?”
Your amygdala is probably saying to you and to me, “I don’t want to do either of those things. These people are obnoxious and unreasonable. Why should I kowtow to them?”
Here are several reasons:
They aren’t likely to change. These are deeply ingrained personality traits.
Being more open and inviting them to talk, rather than staying closed and resistant, will lessen their need to act this way. Listening patiently hath charms to soothe the savage (or boring) beast.
You will feel more self-respect and self-esteem. The approaches above will enable you to remain cool, calm, collected, centered, and communicative in situations that formerly frustrated you and made you react poorly.




Doubts About Pay-for-Performance in Health Care
While health spending in the United States far surpasses that in other industrialized nations, the quality of care in the US is no better overall, and on several measures it is worse. This stark fact has led to a wave of payment reforms that shift from rewarding volume (as fee for service does) to rewarding quality and efficiency. Such pay-for-performance schemes seem to be common sense and are now widely used by private payers and Medicare. But astonishingly, there’s little evidence that they actually improve quality.
What do we really know about the effectiveness of using financial incentives to improve quality and reduce costs in health care? There is robust evidence that health care providers respond to certain financial incentives: medical students have a higher demand for residencies in more lucrative specialties, physicians are more likely to order tests when they own the equipment, and hospitals seek to expand care for profitable services at the expense of unprofitable services. It would seem that increasing payment for high-quality care (and, conversely, lowering payment for low-quality care) is an obvious way to improve value in health care. But evidence suggests that health care is no different from other settings where similar payment incentives have been tried, such as education and private industry. Not only do these payment policies often fail to motivate the desired behaviors, they may also encourage cheating or other unintended responses.
Overall, evidence of the effectiveness of pay-for-performance in improving health care quality is mixed, without conclusive proof that these programs either succeed or fail. Some evaluations of pay-for-performance programs have found that they can modestly improve adherence to evidence-based practice.
There is little evidence, however, that these programs improve patient outcomes, suggesting that to the extent that health care providers have responded to pay-for-performance programs, that response has been narrowly focused on improving the measures for which they are rewarded — such as making sure patients receive recommended blood tests if they have diabetes or the right cocktail of medications if they are hospitalized with a heart attack. Although these measures are important for patient care, it may take a full reengineering of the health care delivery system to broadly improve patient outcomes.
Despite considerable concern about unintended consequences, the adverse effects observed so far have been relatively minor, with little evidence that providers are avoiding high-risk or disadvantaged patients, gaming the measures, or ignoring unrewarded areas of care in order to improve their apparent performance. The lack of evidence for unintended effects is perhaps not surprising, given the limited evidence of intended effects, though concerns remain that as pay-for-performance incentives become stronger, and perhaps more effective, evidence of cheating may surface.
Given the wide adoption of pay-for-performance programs, it’s surprising that even after extensive research, very little is known about how their design — including which outcomes are rewarded, the optimal size of incentives, and the criteria for payment (e.g., quality achievement or quality improvement) — affects provider behavior. And because nearly all of the evidence comes from programs that reward quality, we know almost nothing about whether pay-for-performance can improve efficiency or lower costs.
Experience with pay-for-performance in health care and other settings shows that these programs are hard to design. The best combination of performance measures, organizational level of accountability, criteria for payment, and incentive size is not obvious, and unintended consequences are common.
To be effective, we may need smarter incentives that take advantage of the cognitive biases that skew decision making, such as loss aversion. For example, evidence from a randomized trial in Chicago schools found that student math scores improved if teachers were paid in advance and forced to repay bonuses if an improvement standard was not achieved. Scores did not improve if teachers stood to receive incentives only after scores met improvement standards.
Other insights from psychology and behavioral economics may also have the potential to make pay-for-performance programs more effective. For example, another possible reason for the disappointing response to these programs in health care is that the targeted provider behavior is more likely to be intrinsically motivated (driven by the desire to reduce suffering, for example) and thus less likely to respond to external incentives such as payment. Examining the effects of pay-for-performance in other sectors underscores the difficulty of using performance pay for intrinsically motivated workers — and the pitfalls of trying.
Recent critiques of productivity pay claim that extrinsic incentives are effective only when tasks are routinized and narrowly defined, leaving workers with little intrinsic motivation — windshield installation, for example. This view is supported by copious evidence from social science that financial rewards for intrinsically valuable activities — including performance in school, sports, and interesting work — undermine motivation and can decrease task performance. Perhaps not surprisingly, there is little evidence that pay-for-performance has been effective in U.S. education: a major program in New York City, covering 20,000 teachers and costing $75 million, proved a high-profile failure. Dan Ariely and colleagues have also argued that for professionals working where the relationship between inputs (such as the choice of diagnostic tools, reperfusion therapies, and discharge planning for patients admitted with acute coronary syndrome) and outputs (such as 30-day mortality) is uncertain, performance contracts cannot be detailed enough to reward optimal practice in all circumstances. As a result, pay-for-performance can divert attention from the big picture toward a myopic focus on meeting the performance goals defined in these contracts. Thus, even with smarter designs, it remains unclear whether pay-for-performance programs could overcome the fundamental problems of incentive contracts directed at narrow goals for intrinsically motivated activities.
Pay-for-performance was brought to health care to address a real problem: the suboptimal quality of our health care given our levels of spending. In the face of perverse financial incentives, health care providers’ intrinsic motivation to deliver quality has not been enough to provide sufficiently high-quality, high-value care in the United States. The root of these problems, however, may lie in system failures, not the failures of individual providers. While health care providers want to help the patient in front of them, they may not feel obligated (or have the incentive) to solve system-level problems stemming from factors they feel are outside their control. One potential solution lies in broader health reform, such as global payment for populations rather than piece-rate bonuses for individual patients. Coupled with public quality reporting, global payment reform has the potential to expand the scope of provider accountability, take advantage of providers’ intrinsic motivation, and improve population health. Such efforts may hold more promise for value improvement in US health care than attempts to exploit providers’ extrinsic motivation through tweaks to fee-for-service payment.




Don’t Let Them Steal Your Inventions
On March 18, 2010, an Apple engineer left what looked like an iPhone 3GS in a German beer garden in Redwood City, California. Another patron later picked it up from a barstool. The next morning the phone didn’t work (it had been disabled remotely), but the finder noticed the device looked a bit odd: it had a camera in the front, and the exterior felt different. He was able to remove that exterior, revealing a shiny prototype of the new iPhone 4 – a product Apple wasn’t intending to announce for months.
Up until that March evening, Apple had been notoriously successful at concealing its new designs. Like clockwork, it would wait until just before unveiling a new product design to file a corresponding design patent application. For instance, Apple filed applications for the original iPhone only four days before it was announced in 2007; for the original iPod in 2001, the filing was one day before release.
The finder of the iPhone 4 tried calling Apple to return the phone, but no one called him back. About a month later, he sold the device to a website, which disassembled it, took pictures, and posted them on the Internet. By that time, Apple might have assumed the prototype was simply lost. But after the photos were posted, its lawyers jumped into action. That same day, Apple sent a letter to the website asking for its property back and filed a design patent application with the U.S. Patent and Trademark Office. Filed at 11:55 pm that night, U.S. Design Patent D627,778 eventually issued covering the design of the iPhone 4. All’s well that ends well.
Had this story played out in the past few weeks, it might not have had the same happy ending. On September 16, 2011, the America Invents Act (AIA), a major modification to the Patent Act, was signed into law – a modification that makes the kind of “public disclosure” the iPhone 4 experienced a real impediment to an inventor’s securing a patent.
Prior to the AIA, the United States had a one-year grace period for all activities, including sale, use, and public disclosure. In other words, a design could be shown, used, or sold and the inventor could still secure patent rights, provided that the application was filed within one year of the disclosure. (Even outside that one-year period, exceptions could apply if an inventor displayed a design before that time for experimental purposes.) This grace period is why Apple was still able to secure rights in the iPhone 4 and, importantly, also preserve rights in foreign countries.
As part of the AIA, on March 16, 2013, the United States adopted a first-inventor-to-file regime. Under this regime, a public disclosure (including publication, sale, or public use of a complete product design) without filing for design patent protection beforehand will constitute a dedication of that design to the public, including competitors. The AIA includes a limited one-year grace period for certain disclosures made by, or obtained from, the inventor; however, the exact boundaries of this grace period are uncertain, and the federal courts will take many years to define them.
If the iPhone 4 scenario occurred today, under the new AIA, there would be many questions with unknown answers. For instance, did the finder or the website “obtain” the design from the inventor at Apple? Or does the fact that the engineer lost the prototype in public somehow break the disclosure chain back to the inventor? Perhaps the attempt to disguise the design means that some “experimental use” exception should apply?
At the application stage, such decisions will be in the hands of the PTO examiner. It will be the patent applicant’s burden to prove that the grace period should apply and that a patent should issue despite a public disclosure. It’s worth noting that appeals from an adverse decision by the examiner can take as long as four years. Thus a company could be forced to decide whether to risk investing in a design that it may not own and that anyone could use. Should a patent be granted and these types of disclosures come to light later, an accused infringer would surely raise similar issues in litigation. Uncertainties could drag on for years.
While it may seem unfair, unauthorized public disclosures have destroyed patent rights in the past. The PTO has invalidated design patents based on photographs taken without permission. Automobile trade magazines have a long history of covertly photographing new car designs on the test track and publishing the photos. In 2010, the PTO found that a Ford design patent for a truck grille was obvious (and thus not patentable) in view of a poorly lit, partially obstructed spy photo published in a trade magazine.
Spy photos and lost prototypes are only the tip of the iceberg. Typically, the launch of a new product design includes market research, independent testing, previews to select retailers and journalists, and discussions with suppliers about how to manufacture the proposed design. While some of these activities may be protectable under confidentiality agreements, in light of the AIA, this area of the law is far from clear.
Companies that rely on innovative product design to separate themselves from their competitors must manage these risks effectively. It is understandable that some are reluctant to file a design patent application while a design may not be final, opting instead to wait until market research or testing is complete. It may seem unnecessary to spend money patenting designs that might not appear in a final product. But waiting to patent until a design is “tweaked” may create more problems than it solves.
Under the AIA, if a design is released to the public (and not later patented), the public design can be used to deny a patent to the later “tweaked” design. (When the tweaked design is not “different enough,” the PTO may deem it an obvious variant of the original.) But a company’s own pending designs cannot be used against it if the new designs are filed before the pending designs are published, which currently occurs when the design patent issues.
In the post-AIA world, the best practice is to follow Apple’s standard procedure and apply for a design patent before any type of release outside the company. And if you’re considering releasing more than one possible design, file design applications on all of them in order to avoid creating problems later. The additional upfront costs will be negligible compared to the costs – and time, and angst – of replacing an unprotected design later on.




Beat the Odds in Cross-Border Joint Ventures
It’s proving to be an eventful year for AirAsia, the Kuala Lumpur-based airline that has emerged as Asia’s most successful low-cost carrier in recent times. The last 12 months have seen the collapse of AirAsia Japan, a once-promising joint venture between AirAsia and Japan’s ANA, and the birth of AirAsia India, an alliance between the company and India’s Tata Group.
AirAsia’s experience is instructive. The history of joint ventures is filled with stories about failure. As happened at AirAsia Japan, partners often find it difficult to reconcile their views about how they should manage a new venture. Even if strategic visions align, cultural differences and the inability to build trust often torpedo partnerships.
Despite the low odds of success, though, the urge to set up joint ventures remains strong — either because government regulations dictate them (for example, in the auto sector in China and in multi-brand retailing in India) or because two companies believe they need each other’s complementary strengths, as was the case with AirAsia Japan. It’s therefore important for corporate leaders to be smart about improving the odds of success. Here are five guidelines:
Define a joint venture’s charter narrowly. Doing so provides focus, reduces complexity, and enables companies to collaborate with different partners to meet their goals. When Honda entered India in the early 1990s, the Japanese company struck three focused alliances: one with the Hero Group for low-end motorcycles, one with Siel for cars, and a third with Siel for portable generators.
Choose a partner that embodies a low risk of conflict in the long run. The chances of breaking up are high if partners’ long-term ambitions are in conflict, and each sees the joint venture as a stepping-stone to learn from the other before competing with it. Several joint ventures in China, such as the alliance between General Motors and Shanghai Auto, are beset by this underlying tension. AirAsia has made a smart choice by tying up with the Tata Group; that alliance is high on complementarities and low on conflicts.
Allocate decision rights based on context and logic. Who has the final say in functional areas such as R&D, operations, and human resources matters. For instance, in Japan, ANA ceded control to AirAsia on key decisions such as customer-service levels. Given the differences between the expectations of the Japanese low-cost traveler and those of his counterpart elsewhere in Asia, it might have been smarter for ANA to retain the final call on those decisions.
Consciously over-invest in building mutual understanding and trust. All joint ventures are mixed-motive games: value creation requires cooperation, while value capture requires focusing on what’s best for one’s own shareholders. Since it isn’t feasible to anticipate every contingency and build them all into a contract, partners should focus their efforts on cultivating mutual understanding and trust. An excessive or premature focus on value capture will leave them fighting over crumbs instead of striving to make the pie bigger.
Agree upfront on the terms that will guide a break-up. As happened at AirAsia Japan, all joint ventures eventually end. Upfront clarity on how the end game will play out often has unintended positive consequences. It will help partners devote their efforts to the motive that brought them together in the first place, viz. to maximize the synergistic benefits from their complementary strengths.
After all, the partners in a relationship usually realize intuitively when to end it. What they don’t know is how to make a joint venture work.




Let Them Eat MOOCs
One late afternoon last spring I received a visit from a former student and budding entrepreneur. I usually schedule these meetings at the end of the workday. It feels like a treat, witnessing aspiration and insight blend into leadership to create something new.
Luis (not his real name), however, had not come to see me for leadership advice. He had come to pitch his tech startup and ask for my involvement.
The venture, he explained, would contribute to the ongoing disruption and reinvention of business education and allow anyone anywhere — not just those as fortunate as himself — to have access to my teaching and insights online, for free.
While I would not be compensated, I’d have the opportunity to reach a broader audience and to be at the front — and on the right side — of the online revolution in education. I would become a better teacher, help democratize management learning, and secure my own and my school’s place among the survivors and beneficiaries of digital disruption.
I had heard all those arguments before. Reach. Scale. Efficiency. Democratization. This was my third such conversation in six months, including one with a pioneer of Massive Online Open Courses (MOOCs), the first wave of a digital tsunami headed towards the shores of higher education.
When I pointed out that I already share and discuss ideas freely online, in this blog and on Twitter, Luis beamed. That was why he had reached out, he said.
Apparently I have the right profile for a MOOC professor. I’m young enough to be threatened, good enough to be useful, and tech-savvy enough to be interested. (Perhaps also vain enough to be flattered.) My fondness for the Internet as a public agora is surely a sign that I want it to become my open classroom as well.
Actually, no. It isn’t. When it comes to joining this battle I declare myself a conscientious objector.
Mind you, I am not unsympathetic to the argument for MOOCs and their derivatives — that many people who need knowledge and skills don’t have the resources to acquire them in those expensive and inefficient bundles called “universities.” Nor am I blind to the problems facing business schools and higher education at large, or lacking in my enthusiasm for technology. I am not immune to flattery either.
I can easily concede that for many topics, the right numbers and platform may foster online learning and interactions as meaningful as those that take place in the average classroom or seminar room, especially for students and faculty accustomed to living part of their social lives online. And I believe that the conscious intent of MOOC proselytizers is altruistic.
However, as the Princeton sociologist who discontinued his popular MOOC illustrated, if you are a prominent faculty member at an elite university the idealistic prospect of spreading free knowledge to the masses may distract you from pondering your MOOC’s more troublesome potential social consequences.
MOOCs can be used as a cost-cutting measure in already depleted academic institutions and become another weapon against battered faculty bodies. They may worsen rather than eliminate inequality by providing credentials empty of the meaning and connections that make credentials valuable.
Worst of all, they may become a convenient excuse for giving up on the reforms needed to provide broad access to affordable higher education. The traditional kind, that is, which for all its problems still affords graduates higher chances of employment and long-term economic advantages.
Seen from this perspective, the techno-democratization of education looks like a cover story for its aristocratization. MOOCs aren’t digital keys to great classrooms’ doors. At best, they are infomercials for those classrooms. At worst, they are digital postcards from gated communities.
This is why I am a MOOC dissenter. More than a revolution, so far this movement reminds me of a different kind of disruption: colonialism.
Given the resources and players involved in producing and praising MOOCs, it’s hard to argue that this is a case of enterprising outsiders toppling a complacent establishment. (Do you see any “outsiders” in this galaxy of MOOC funders?) It is far more similar to colonialism, that is, disruption brought about by “the policy and practice of a power in extending control over weaker people or areas” and simultaneously increasing its cultural reach and control of resources.
All educational institutions have a dual social function: to develop individuals and to develop culture. Sometimes development involves affirmation. Sometimes it involves questioning and reform.
All education therefore involves both training and socialization. The knowledge one acquires is not just concepts and skills to become a good employee but also values and mores to become a good citizen — of a society or an enterprise.
This is as true of the liberal arts college as it is of the professional school, corporate university or online diploma factory.
Colonialism is a particular kind of socialization. It involves educating communities into the “superior” culture of a powerful but distant center by replacing local authorities or co-opting them as translators. A liberating education, on the other hand, makes students not just recipients of knowledge and culture but also owners, critics, and makers of it.
While they claim to get down to business and focus on training only, MOOCs do their fair share to affirm and promulgate broader cultural trends, like the rise of trust in celebrities’ authority, the cult of technology as a surrogate for leadership, and the exchange of digital convenience for personal privacy.
The idea that we should have access to anything wherever and however we want it for free, in exchange for the provider’s opportunity to use and sell our online footprint to advertisers or employers, is the essence of digital consumerism. This is the culture that MOOCs are born of and reinforce in turn.
Even the fabled personalization that digital learning affords is really a form of mass customization. There is no personal relationship. It is a market of knowledge where no one is known and care is limited to the provision of choices.
Whether its crusaders are venture capitalists, entrepreneurs, academics, or students, the colonizer here is a transactional view of education, centered on knowledge as a commodity, which displaces a relational view of education, centered on development through relationships. Relational education, in turn, becomes, like all precious resources of colonial territories, no longer a common good but a privilege of the leisured few.
Luis nodded pensively when I pointed out that his venture could turn a job like mine and an education like his into even more of a privilege. So I asked him what he thought may happen when companies like his finished disrupting my profession.
Ultimately a teacher is a sophisticated search and social technology, he explained, in a crescendo of techno-utopianism. What we do is make judgments about which knowledge is interesting and useful, and order it in ways that make it accessible. We also broker connections through admissions and recruitment. There is no reason an algorithm couldn’t do all of that someday.
I envisioned myself walking to a digital guillotine in tattered academic garb, whispering, “Let them eat MOOCs.” Luis laughed. I asked one last question.
Why would I want to help him make my job irrelevant? Because of legacy, he answered excitedly. I’d be proud that I was one of the people who taught the algorithm to think.
I’d rather keep going with humans.




After a Failure, Shame Is Harmful, Guilt Is Productive
Which of these two common affective responses to failure was your most salient feeling after your last on-the-job misstep: shame or guilt? If shame, your company is mismanaging employees’ emotional responses to bad outcomes; if guilt, it’s doing the right thing, suggest Vanessa K. Bohns of the University of Waterloo in Canada and Francis J. Flynn of Stanford. By taking such actions as giving you specific feedback and emphasizing the widespread impact of your failures, your boss can minimize shame and maximize guilt, turning you away from despair and disengagement and instilling in you a desire for outward-focused action to redress the source of your guilty feelings.




Get the Right People to Notice Your Ideas
The email arrived the day after a speech I’d given in London. “You’ve definitely given me some food for thought about my career trajectory, and how to use branding to my advantage,” an executive at a management consulting firm wrote. In my talk, I’d emphasized the importance of content creation — blogging, videos, podcasting, or even the creative use of Twitter — in enabling professionals to share their ideas and define their brands. “But,” she asked, “what advice do you have for making sure that anything you do is read by the right people?”
It’s a common question: why bother to blog (or use other forms of social media) when it’s so hard to build a following, and you may toil in obscurity for years before finding an audience? Given the seemingly abysmal ROI, isn’t it better to invest your time elsewhere? Indeed, Chris Brogan — now a prominent and successful blogger — revealed that it took him eight years to gain his first 100 subscribers. He was a hobbyist who painstakingly built his fan base over time; most of us simply don’t have the resources or the patience for such a slow-drip strategy.
But despite the fact that you’re unlikely to attract a million readers or “be discovered” overnight, blogging (and its social media brethren) is still a valuable part of a professional’s personal branding arsenal. Here are three strategies I’ve found helpful in ensuring that — sooner or later — the “right people” find out about your work.
The first strategy is to write about the people you’d like to connect with (or the companies you’d like to work for). In a world of Google Alerts, it’s not just large corporations that are monitoring what’s being said about them online. You’re unlikely to get a response from the Lady Gagas of the world, but most executives have lower profiles and are quite reachable. Twitter is particularly useful, especially if you focus on active users with fewer than 5,000 followers. Many top executives fall into this category; it means they’re likely to be paying attention to who is retweeting or messaging them, yet they aren’t overwhelmed by an excessive volume of correspondence. (In fact, proving even my cautionary note wrong, after a recent talk at the Stanford Graduate School of Business, a woman came up to me and said that her friend had created a video that she’d sent to Lady Gaga, who retweeted it and brought it massive exposure.)
Next, consider proactively sharing articles you create. That doesn’t mean spamming people with blast emails touting your latest post, but if a client or colleague asks a question or shares a story that inspires you to write, it’s a great compliment to follow up by sending them the piece. Alternatively, if someone mentions a business challenge they’re struggling with, it adds to your credibility (and is quite thoughtful) to offer to send them a post you’ve written on the matter. And if you’re writing about figures you admire, odds are they’d welcome a quick note from you and a link to the article. (I always make it a point to let talented colleagues like Chris Guillebeau, John Hagel, and Len Schlesinger and his crew know when I’m citing their work.)
Finally, pursue a “ladder strategy” for your content, a concept that author Michael Ellsberg has expounded on. Sure, some people will find your blog accidentally (perhaps through a web search for a particular term), and your friends or colleagues may become early readers. But to build a following over time, start reaching out to fellow bloggers and news outlets that already have a following, and offer to create guest posts. (In my book Reinventing You, I feature finance author Ramit Sethi, who used the technique successfully and has blogged about how to do it.) That will expose new audiences to your work, and perhaps drive them to check out the “home base” at your own blog. It also brands you, as people will associate you with the outlets you write for or the people who have essentially endorsed you by allowing you to guest post. As your following grows, you’re more likely to be discovered by (and impress) “the right people” with your ideas.
As Chris Brogan’s experience shows, it can take years for your readership to grow organically. It’s unlikely that you’ll be “discovered” right away by a top CEO or VC trawling the Internet. But even from Day One, you can begin to reach key players if you’re strategic about the individuals and ideas you cover, proactively share your content (instead of waiting for others to stumble across it), and seek new and bigger outlets to feature your work. Before long, you won’t need to be discovered; the right people will already know who you are.




October 8, 2013
Is Losing Talent Always Bad?
Conventional wisdom might say that the recent departure of Marc Jacobs from Louis Vuitton is terrible news for the company. But if you look a little more closely at the fashion industry you’ll find that turning over your talent isn’t always a bad thing.
Prada is a case in point. Between 2000 and 2010 Prada lost a lot of designers to competing fashion houses, yet its fashion collections were consistently rated as much more creative than the average.
How does that happen? In a recent study (co-authored with Frederic Godart and Kim Claes), I found that when a designer leaves a fashion house to work for a competitor, he or she tends to stay in touch with friends and former colleagues from the old job. These ties act as communication bridges through which former colleagues can learn what the departed designer is up to in the new job. And when several designers leave to work for different fashion houses, the colleagues staying behind build bridges to many companies, providing a wealth of creative input for their future collections.
The phenomenon is not confined to fashion. McKinsey consultants famously stay in touch with former colleagues who have left to work for other firms, most of which are potential clients. The same thing happens in Silicon Valley, where people change jobs across customers and competitors. To be sure, we are not talking about industrial espionage here: the positive effects of communication bridges on creativity come from friends catching up with friends, in very general terms, about what is going on in their professional lives.
Fashion houses that benefit the most from talent turnover also have long-serving creative directors who mentor and befriend the new hires. At Prada, this is Miuccia Prada, the company’s long-tenured creative director.
Prada (the company) gets an infusion of fresh ideas every time it hires a new designer. Prada (the designer) welcomes and helps train the newcomers. When a designer eventually leaves to work elsewhere after a fruitful stint at Prada, she remains on good terms with former colleagues, spreading the message throughout the industry that Prada is a great place to work and learn. These positive tendencies are reinforced by a culture of transparency and collaboration in the company, as described by CEO Patrizio Bertelli in an HBR article.
The messages for the non-fashion world are clear. Don’t part with departing employees on bad terms, and don’t forget about them; stay in touch, as they are your communication channels and ambassadors in the industry. Replace them with talent from different companies to preserve the diversity of ideas inside your firm. And make sure senior executives take the time to train and socialize new hires.
Now every time we see someone wearing Prada, let’s think not only about the fashion but also about the management lessons we can learn from this company.




It’s Time for Episode-Based Health Care Spending
There is widespread agreement that if the United States is to achieve sustainable levels of health care spending, it must make greater use of payment mechanisms that reward physicians, hospitals, and health systems for the results achieved. The vexing question is how best to make this transition.
Today, payers and providers are using a range of strategies to accomplish this goal, including patient-centered medical homes, value-based contracting, and accountable care organizations (ACOs). We applaud this trend. However, our research and experience have convinced us that the transition to outcomes-based payment will occur more easily if both payers and providers take an intermediate step and make greater use of retrospective episode-based payment (REBP).
REBP focuses on “episodes of care” (any clinical situations that have relatively predictable start and end points such as procedures, hospitalizations, acute outpatient care, and some treatments for cancer and behavioral health conditions). REBP identifies which provider is in the best position to affect the clinical outcomes and total costs associated with an episode of care; it then assesses (through retrospective analysis of claims data) the outcomes achieved and costs incurred during each episode over a specific period of time (e.g., quarterly). The identified providers are then rewarded or penalized based on their average performance across all the episodes.
The desire to jump straight to outcomes-based payment models focused on the total cost of care for an entire population has led many payers and providers to overlook, or give up on, episode-based payment. We believe it is worth reconsidering.
The Advantages
REBP offers a number of advantages. For example, because it uses the current fee-for-service claims system as its administrative platform, it does not require providers to make significant investments in new infrastructure or establish new contractual arrangements with other providers. And because it focuses on acute episodes, REBP acts as a necessary complement to payment and care-delivery models designed to improve prevention and chronic-care management. Furthermore, administering or participating in an REBP model can help both payers and providers develop many of the capabilities they will need for total-cost-of-care management. In short, REBP can serve as a bridge to more comprehensive total-cost-of-care approaches.
How Does REBP Work?
In the U.S. health system today, a dozen or more providers may be involved in an episode of care, and each provider typically bills separately. None of these providers is rewarded financially for helping ensure that the desired clinical outcome is delivered with the highest quality at lowest cost across the entire episode.
REBP is designed to change that. It is somewhat similar to and shares many of the same goals as the “prospective bundled payment” approach, which calls for making a single payment (or budget) to the accountable provider for all the services used to treat each specific episode for each specific patient. But key differences in design and administration make REBP more scalable in the current U.S. health system.
The six core steps required to implement REBP are listed in the exhibit “Steps Required to Implement REBP.”
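To make the retrospective arithmetic concrete, here is a minimal sketch in Python of the core REBP calculation: total each episode’s cost from claims data, average the costs by accountable provider, and settle against pre-agreed thresholds. The claim records, episode groupings, thresholds, and gain/risk-sharing rates below are all hypothetical illustrations, not any payer’s actual parameters; real programs layer risk adjustment, outlier exclusions, and quality gates on top of this basic logic.

```python
from collections import defaultdict

# Hypothetical claims: (episode_id, accountable_provider, allowed_amount).
# In practice these come from retrospective analysis of adjudicated claims.
claims = [
    ("ep1", "Hospital A", 9_000), ("ep1", "Hospital A", 2_500),
    ("ep2", "Hospital A", 8_000),
    ("ep3", "Hospital B", 14_000), ("ep3", "Hospital B", 4_000),
]

# Step 1: total the cost of every episode across all of its claims.
episode_cost = defaultdict(float)
episode_provider = {}
for episode_id, provider, amount in claims:
    episode_cost[episode_id] += amount
    episode_provider[episode_id] = provider

# Step 2: group episode costs by the accountable provider.
provider_costs = defaultdict(list)
for episode_id, cost in episode_cost.items():
    provider_costs[episode_provider[episode_id]].append(cost)

# Step 3: reward or penalize average performance against agreed thresholds.
# All four parameters are illustrative assumptions.
COMMENDABLE = 10_000  # average cost below this earns gain-sharing
ACCEPTABLE = 15_000   # average cost above this triggers risk-sharing
GAIN_SHARE = 0.5      # provider keeps half the savings below the threshold
RISK_SHARE = 0.5      # provider owes half the excess above the threshold

for provider, costs in provider_costs.items():
    avg = sum(costs) / len(costs)
    if avg < COMMENDABLE:
        bonus = GAIN_SHARE * (COMMENDABLE - avg) * len(costs)
        print(f"{provider}: avg ${avg:,.0f}/episode -> bonus ${bonus:,.0f}")
    elif avg > ACCEPTABLE:
        penalty = RISK_SHARE * (avg - ACCEPTABLE) * len(costs)
        print(f"{provider}: avg ${avg:,.0f}/episode -> owes ${penalty:,.0f}")
    else:
        print(f"{provider}: avg ${avg:,.0f}/episode -> no adjustment")
```

Because the settlement is computed after the fact from existing claims, nothing about claims adjudication itself has to change — which is the scalability advantage over prospective bundled payment described above.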
Why Should Payers Pursue REBP?
Our analysis of data from private insurers, Medicaid, and Medicare suggests that 50% to 70% of all health care spending could be included within episodes of care. REBP thus has the potential to establish end-to-end accountability for more than half of all health care spending.
REBP also gives payers a direct way to incentivize providers to reduce health care waste. We have consistently observed that some providers deliver the same or better clinical outcomes at dramatically lower costs than other providers in the same market. The exhibit “Average Cost Per Episode Varies Significantly Across Providers” illustrates variations in average total per-patient costs for three different episodes in three different states. Even after we excluded patients with certain complicated conditions and adjusted for patient severity, the average cost per episode within each market still varied across providers by 60% to over 300%. Further analysis showed that much of this variation could be explained by differences in practice patterns (e.g., decisions about device selection, diagnostics use, discharge planning, and hospital admission).
REBP also offers a quick path forward, because it requires only modest additional infrastructure. We have seen multiple payers define and implement the necessary infrastructure within six months of when they agreed on an episode’s definition. Turnkey analytic vendors are also beginning to emerge.
In addition, REBP gives payers considerable strategic flexibility. Many REBP parameters can be adapted to address local conditions, align with network and member-engagement approaches, and strengthen competitive advantage. Among these parameters are cost thresholds, stop-loss provisions, the degree of gain- and risk-sharing, whether and how to normalize unit prices, and whether to steer members to certain providers.
Finally, REBP gives payers a way to prepare for the future, when episode-based performance management is likely to be a required capability. Most providers do not have access to sufficient claims data to assess performance on their own. If providers are to accept partial or total cost-of-care accountability, payers will need to offer them a performance-management infrastructure to understand clinical outcomes and costs.
Why Should Providers Pursue REBP?
If contractual terms are fair, REBP can deliver meaningful value to acute-care providers in particular. For example, it has the potential to give them a net increase in margin, because many of the sources of savings are either variable costs to these providers (e.g., implantable devices, extra care required for surgical complications) or are associated with upstream or downstream providers (e.g., pharmaceuticals, physical therapy, skilled-nursing-facility care). REBP can also help acute-care providers reinforce and accelerate existing strategic priorities, such as improving how hospitals influence and partner with physicians, increasing the adoption of clinical pathways, and reducing input costs.
REBP empowers all accountable providers by reducing the need for payers to monitor clinical decision making (e.g., through preauthorization). It also positions them to assume a stronger role in influencing the performance of upstream and downstream providers.
Strong episode performance has the potential to strengthen a provider’s value proposition to patients, employers, and payers. It may also be grounds for negotiating a stronger network position.
Furthermore, REBP requires providers to make only small, if any, investments in new infrastructure — at least initially. And it will enable them to strengthen their ability to understand end-to-end performance, a capability any providers considering more holistic total-cost-of-care payment models will need.
What Changes Must Payers Make?
To implement REBP at scale, most payers will have to shift their focus from prospective to retrospective models in most markets. Doing so will enable payers to simplify their infrastructure and focus on analytic processes that are separate from claims adjudication. This infrastructure is less invasive, requires less investment, and offers faster time-to-market than do solutions that necessitate material changes to claims-adjudication processes.
Second, payers will need to develop greater technical sophistication to ensure fairness (e.g., through episode-specific risk adjustments) and provider acceptance. They will also have to develop or adopt new standards as they emerge. The Center for Medicare and Medicaid Innovation’s efforts to create standard episode definitions, including through the Bundled Payments for Care Improvement initiative, are a promising starting point.
Finally, if REBP is to succeed, payers will have to implement it at scale by promoting REBP, whenever possible, across all books of business and all network providers. Most payers should also strongly consider participating in multi-payer efforts to set standards to help overcome common barriers to implementation.



