Marina Gorbis's Blog

November 20, 2018

A New Way to Become More Open-Minded

Danae Diaz/Getty Images

Benjamin Franklin knew he was smart — smarter than most of his peers — but he was also intelligent enough to understand that he couldn’t be right about everything. That’s why he said that whenever he was about to make an argument, he would open with something along the lines of, “I could be wrong, but…” Saying this put people at ease and helped them to take disagreements less personally. But it also helped him to psychologically prime himself to be open to new ideas.


History shows that we tend to choose political and business leaders who are stoic, predictable, and unflinching, but research indicates that the leadership we need is characterized by the opposite: creativity and flexibility. We need people who can be like Franklin — that is, smart and strong-willed enough to persuade people to do great things, but flexible enough to think differently, admit when they’re wrong, and adapt to dynamic conditions. Changing our methods and minds is hard, but it’s important in an era when threats of disruption are always on the horizon. In popular culture, we might call this kind of cognitive flexibility “open-mindedness.” And with growing divisions in society, the survival of our businesses and communities may very well depend on our leaders having that flexibility — from Congress to the C-suite.


Unfortunately, for decades academics have argued in circles about the definition of open-mindedness, and what might make a person become less or more open-minded, in part because there’s been no reliable way to measure these things. Recently, however, psychologists have given us a better way to think about open-mindedness — and quantify it.


The breakthrough happened when researchers started playing with a concept from religion called “intellectual humility.” Philosophers had been studying why some people stubbornly cling to spiritual beliefs even when presented with evidence that they should abandon them, and why others will instead quickly adopt new beliefs. Intellectual humility, the philosophers said, is the virtue that sits between those two excesses; it’s the willingness to change, plus the wisdom to know when you shouldn’t.


A few years ago, scientists from various universities started porting this idea into the realm of everyday psychology. Then in 2016, professors from Pepperdine University broke the concept of intellectual humility down into four components and published an assessment to measure them:



Having respect for other viewpoints
Not being intellectually overconfident
Separating one’s ego from one’s intellect
Being willing to revise one’s own viewpoint

An intellectually humble person will score high on all of these counts. But by breaking it down like this, the Pepperdine professors came up with a clever way to help pinpoint what gets in the way when we’re not acting very open-minded. (I, for example, scored low on separating my ego from my intellect — ouch!)


Still, philosophers focused on these concepts think there is one more piece to the puzzle. “I’m fussy about this,” explains Jason Baehr of Loyola Marymount University. He defines open-mindedness as the characteristic of being “willing and within limits able to transcend a default cognitive standpoint in order to take up seriously the merits of a distinct cognitive standpoint.” His point is that you can be intellectually humble (open to changing your mind about things), but if you’re never curious enough to listen to other viewpoints, you aren’t really that open-minded.


There is, however, Dr. Baehr points out, a trait from the time-tested Big 5 Personality Assessment that helps fill in that gap. The trait is “openness to experience,” or a willingness to try new things and take in new information. If openness to experience means you’re willing to try pickle-flavored ice cream, intellectual humility means you’re willing to admit you like it, even if you initially thought you wouldn’t. A person who scores high on both is likely to listen to people, no matter who they are, and to have a kind of Ben Franklin-like cognitive flexibility after listening.


For my recent book, Dream Teams, I combined these two assessments — the Pepperdine Intellectual Humility test and the Big 5 Openness to Experience test — and used them in a series of studies of thousands of American workers, looking for correlations between open-mindedness and the way people live and work. You can take that assessment here. The results indicated that most people overestimate themselves: 95 percent of people rated themselves as more open-minded than average, which, of course, cannot be true. This suggests that most leaders don’t know how much of a blind spot intellectual humility is in their work.
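
To see why that 95 percent figure is self-refuting, it helps to spell out the arithmetic. Below is a minimal sketch in Python of how a composite open-mindedness score might be combined from two assessments and how self-ratings can be checked against reality; the scales, weights, and simulated data are hypothetical, not the instrument or data from the actual studies.

import random

random.seed(0)

# Simulated respondents: an intellectual humility score and an openness to
# experience score, each on a 1-5 scale, plus a self-rating of how open-minded
# they believe they are relative to others (a percentile). All values invented.
respondents = [
    {
        "humility": random.uniform(1, 5),
        "openness": random.uniform(1, 5),
        "self_percentile": random.uniform(40, 100),
    }
    for _ in range(1000)
]

for r in respondents:
    # Equal-weight composite; a real instrument would be validated carefully.
    r["open_mindedness"] = (r["humility"] + r["openness"]) / 2

share = sum(r["self_percentile"] > 50 for r in respondents) / len(respondents)
print(f"{share:.0%} rate themselves above average")
# Only half of any population can actually sit above the median.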


My studies showed that certain activities generally correlate with higher intellectual humility across the board. Traveling a lot — or, even better, living for extended periods in foreign cultures — tends to make us more willing to revise our viewpoints. After all, if we know that it is perfectly valid to live a different way than we do, it makes sense that our brains would be better at accepting new approaches to problems at work. This aligns with recent research on the neuroscience of how storytelling helps us build empathy for other people. (Read neuroeconomist Paul Zak’s HBR article on this fascinating subject here.) Fiction readers tend to score higher in intellectual humility, perhaps because their brains are a little bit better trained to seek out stories that vary from their own, and see characters’ experiences and opinions as potentially valid. Preliminary research is also showing us that practicing mindfulness meditation, learning about the ins and outs of your own ego using a framework like the Enneagram, and learning about Moral Foundations Theory through programs like Open Mind Platform can each help us operate with more intellectual humility.


There’s a lot more work to be done exploring ways to increase our intellectual humility — including research on how to definitively increase scores on each of the factors — but in the meantime, Ben Franklin demonstrated at least one hack we can all use right away: Because he wanted to learn and grow, he worked to deflate his own intellectual confidence. That trick of saying, “I could be wrong, but…” wasn’t just a way to get his conversational opponents to be less defensive; it was also a way of forcing himself to be open to changing his mind. After all, if someone countered his argument and won, he could still say, “See! I was right! I said, ‘I could be wrong,’ and I was!”





What If Banks Were the Main Protectors of Customers’ Private Data?

Mark Wilson/Getty Images

The ability to collect and exploit consumers’ personal data has long been a source of competitive advantage in the digital economy. It is their control and use of this data that has enabled the likes of Google, Amazon, Alibaba, and Facebook to dominate online markets.


But consumers are increasingly concerned about the vulnerability that comes with surrendering data. A growing number of cyberattacks — the 2017 hacking of the credit reporting agency Equifax being a case in point, not to mention the likely interference by Russian government-sponsored hackers in the 2016 U.S. presidential election — have triggered something of a “techlash.”


Even without these scandals, it is likely that sooner or later every netizen will suffer a bad data experience: having their credit card number stolen, their account hacked, or their personal details exposed; being embarrassed by an inappropriate ad while at work; or realizing that their favorite airline is charging them more than it charges others for the same flight.


The most practical consequence of the concerns generated by data hacking has been the imposition of more stringent privacy regulation, of which the most obvious example is the European Union’s new GDPR. We can certainly expect this trend to continue around the world. And digital natives are likely to become more rather than less sensitive to the value of their data.


The obvious consequence of these trends is that the big tech firms will find it increasingly difficult to legally use the personal data they collect. At the same time, that data can be a toxic asset: it is hard to keep safe and is coveted by many. A company that collects more data than it really needs is generating unnecessary risk, because each personal datum is the object of a potential leak or lawsuit. And at least some of the personal data that companies gather generates little or no value for them — data that is inaccurate, out of date, unlawful, or simply irrelevant.


In this environment, online merchants will have to find ways to do more with less data — whether through smarter application of analytics or through business models that enable them to offer their services without collecting sensitive data. But those changes do not really resolve the underlying challenge: how can consumers protect their digitized data? Prior to the digital age, that data was kept on paper, which meant it could be protected by physical means and was relatively difficult to share. Today, the IT skills needed to protect digitized data are beyond most consumers and even most of the traditional custodians of data.


This points to a business opportunity. But whose? One obvious possibility is that a few big tech companies — such as Apple, or maybe someone new — could become consumers’ data guardians. Amazon, for instance, could offer an option on Prime in which it manages users’ personal data for them, liaising with other companies and platforms but remaining in control of the data. There are problems with this approach. If the company is new, users might be unwilling to hand over their most sensitive information to an organization that has yet to prove its trustworthiness. That would be less of a risk for a household name like Amazon, but in that case users might rightly hesitate to give an already hugely powerful digital corporation even more power over them.


Another option consumers might consider is the approach being explored by Solid, a project led by Tim Berners-Lee, the inventor of the World Wide Web. Solid proposes that users store their personal data in virtual “pods” that work like secure USB drives, through which users can share their data with whomever they want. Users can store their pod with Solid — in which case we bump again into questions of trust and power — or keep it themselves. But a pod kept by its owner may not be as safe as it should be: a USB drive, physical or virtual, can easily get lost or stolen. It would be like keeping your money under your mattress. It also remains uncertain whether the project can garner enough support and resources to make it workable, widely available, and affordable.


This analogy brings me to perhaps the most likely scenario. Maybe the institutions best suited to manage digital data are banks. In a sense, banks are already data guardians. After all, most of the money in circulation is virtual — nothing but data. They also have a long history of being at the forefront of security methods, from the development of the vault to multi-factor authentication. Moreover, banks have experience in safeguarding privacy through their commitment to confidentiality. Finally, banks tend to have more local and personal relationships with their clients, which might make users feel safer than entrusting their data to an international corporation. And if you’re unhappy with one bank, you can always switch to another. Banks’ business model and experience give them a comparative advantage over other businesses in becoming our personal data guardians.


Of course, banks are far from perfect. They are notoriously conservative, which may make them slower to roll out necessary updates to the technology involved. And regulatory hurdles might make it hard for them to expand their services. But given that governments have an interest in making sure their citizens can keep their personal data safe, and that banks may need to innovate and transform themselves in order to outlive fintech competition, these obstacles do not seem insurmountable. Maybe someday in the not-so-distant future we will keep our data where we keep our money.





Replacing the Sales Funnel with the Sales Flywheel

clu/Getty Images

I’ve been using the sales funnel for 28 years, my whole career. This year, I retired the funnel — threw it a party, gave it a gold watch, and congratulated it on its move to a condo in Florida.


It was the right thing to do.


For one thing, in an era when trust in traditional sources has eroded — in government, media, and in companies and the marketing they employ — word-of-mouth from trusted peers wields greater clout than ever.


For another, the funnel fails to capture momentum. A boss of mine used to say, “The sun rises and sets on the quarter.” By the end of a quarter, she had wrung every ounce of energy out of marketing, and we started the next quarter from a standstill with no momentum and no leverage.


That’s no longer true. After years of inbound marketing, your company has assets: evergreen content; backlinks to your site; social media followings; and, of course, customers who advocate for your brand. For many of us, our marketing departments could take a vacation for a month, and new visitors and leads would continue to come in, and existing customers would continue to refer new business. That’s momentum.


The Flywheel

These days, instead of talking about the funnel, we talk about the flywheel. For us, the flywheel is a powerful metaphor. The flywheel was used by James Watt over 200 years ago in his steam engine, the invention that powered the Industrial Revolution. It is highly efficient at capturing, storing, and releasing energy.


Using a flywheel to describe our business allows me to focus on how we capture, store, and release our own energy, as measured in traffic and leads, free sign-ups, new customers, and the enthusiasm of existing customers. It’s got a sense of leverage and momentum. The metaphor also accounts for loss of energy, where lost users and customers work against our momentum and slow our growth.


I’ve become obsessed with two dynamics that make our flywheel spin fast: force and friction.


Force

The more force you apply to a flywheel, and the more places on it where you apply that force, the faster it spins.


When I started my career, the most profitable application of force was in sales. Back in the 1990s, sales reps had a lot of information, while customers had relatively little. Sales reps leveraged that information gap to create a lot of trust. It made a ton of sense to hire a lot of reps back then.


Around 2005, marketing became a bigger force driving growth. In many industries, the sales rep and the customer now had more or less the same information at the same time. Competitive advantage went to those marketers who created useful content to pull prospects in.


Today, it’s shifting again. Now, delighted customers are the biggest new driver of growth.


I’m a sales and marketing guy, so it makes sense that HubSpot’s early priorities reflected my instincts, with all our energy and force applied to sales and marketing, trying to close as many customers as possible. These days, we’ve shifted our center of gravity away from that and applied more force towards delighting our existing customers, knowing that’s the best way to find new customers.


I made a couple of mistakes along the way. First, I just said, “Hey, we’re going to be a ‘delight’ company!” The intention was right, but there was no operational impact.


Second, I assigned this to our customer service department. I said, “You’ve got to fix this problem; we’ve got to delight our customers.” Neither of those things worked.


What worked was getting the whole organization behind it — especially Sales and Marketing.


Take our commission plan, for example. In 2015, a sales rep earned commission on everything they closed. Now, we’ve made two important tweaks to it: a carrot and a stick. The stick was very unpopular. If a sales rep closed an account, and that account cancelled within eight months, the company would “claw back” that commission. Painful, but effective.


The carrot was easier, and also effective. The sales reps who do the best job of setting expectations, who have high retention rates and the happiest customers, receive a kicker: they get paid at a higher rate.
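
Here is a sketch of how such a plan might translate into arithmetic. The rates, the eight-month clawback window, and the kicker threshold are illustrative assumptions, not HubSpot’s actual numbers:

BASE_RATE = 0.10          # commission as a share of contract value (assumed)
KICKER_RATE = 0.12        # higher rate for reps with happy, retained customers
CLAWBACK_MONTHS = 8       # cancellations inside this window void the commission
KICKER_RETENTION = 0.90   # retention rate a rep needs to earn the kicker

def commission(contract_value, months_retained, rep_retention_rate):
    """Return the commission owed on one closed deal."""
    if months_retained < CLAWBACK_MONTHS:
        return 0.0  # the stick: commission is clawed back on early cancellation
    rate = KICKER_RATE if rep_retention_rate >= KICKER_RETENTION else BASE_RATE
    return contract_value * rate  # the carrot: a higher rate for retention

print(commission(10_000, months_retained=3, rep_retention_rate=0.95))   # 0.0
print(commission(10_000, months_retained=12, rep_retention_rate=0.95))  # 1200.0
print(commission(10_000, months_retained=12, rep_retention_rate=0.80))  # 1000.0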


That carrot and stick have changed how we think about the “force” part of our flywheel. Our sales reps are focused not only on closing customers, but on delighting customers.


We’ve done something similar with the quality of our leads. Prospects that are likely to become successful customers get a higher lead score than leads that are simply likely to close. We measure the success of our marketing by the volume of those most-likely-to-succeed leads.
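
One plausible way to operationalize that scoring, with invented probabilities, is to weight each lead by the joint chance that it closes and then succeeds:

def lead_score(p_close, p_succeed_if_closed):
    # A lead that closes but churns is worth little; use the joint probability.
    return p_close * p_succeed_if_closed

leads = [
    {"name": "Lead A", "p_close": 0.9, "p_succeed_if_closed": 0.3},
    {"name": "Lead B", "p_close": 0.6, "p_succeed_if_closed": 0.8},
]
for lead in leads:
    lead["score"] = lead_score(lead["p_close"], lead["p_succeed_if_closed"])

# Lead B outscores Lead A even though A is more likely to close.
for lead in sorted(leads, key=lambda l: l["score"], reverse=True):
    print(lead["name"], round(lead["score"], 2))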


Friction

The second thing James Watt would recommend is to eliminate friction in your flywheel.


I’m a true believer in low friction. I woke up this morning on my Purple mattress. I put on my Warby Parker eyeglasses, picked up my phone, and played Spotify. I made my way to my bathroom and shaved with my Dollar Shave Club razor. I reached into my closet and put on my new outfit from Trunk Club, and then I got in a Lyft and came to work.


These six companies have woven their way into my daily life. They are all fewer than 10 years old; they all sell relatively undifferentiated commodities; and they are all growing like weeds. How do they do it? What’s the secret handshake?


It’s friction — they’ve taken all the friction out of their flywheel. When I bought that Purple mattress, there was almost zero friction in the process. I did it in a few minutes online; they shipped it to my home; and if I decided to return it, the process promised to be simple and hassle-free.


Two years ago, I bought my previous mattress from a traditional seller — a so-called full-service store. But full service means handoffs between humans; it means haggling. Buyers have become much less patient, and less forgiving of friction.


All of these examples are B2C. If your business is B2C, the train is about to leave the station. You’ve got to get 90% of the friction out of your model.  If you’re B2B, the train is parked in the station, but it’s leaving soon.


One of my favorite business school professors used to say, “If you want to build a great company, your product has got to be ten times better than the competition.” Today, that advice feels out of date. If you want to build a great company in 2018, your customer experience has to be ten times lighter than the competition. It used to be what you sell that really matters, now it’s how you sell that really matters.


To eradicate friction, we have to turn some assumptions on their heads:


Customer interaction. In a “better product” market, 80% of the touches with your customers are handled by humans, your employees. Human touches entail friction. In a “lighter experience” market, 80% of your customer touches need to be self-service, and only 20% full service with humans.


IT investment. In a “better product” model, 80% of your IT resources are invested in making your front-line employees more efficient. In a “lighter experience” model, 80% of your IT resources will be invested in making your customers more efficient.


Employee skills. In the “better product” era, when you grew, you added humans, and you placed them in specialized roles. In addition to a sales rep, you created a business development role for pre-sales, you hired a variety of hunters and farmers, you assigned an account rep to manage ongoing business. You hired and trained “I-shaped” employees who could dive deep into a specific domain. Specialists are great at handling specific customer issues. But, when you have multiple specialist roles, it means your customer is getting handed off from one specialist to another. And, if your customer is getting handed off, they are experiencing friction.


In the “lighter experience” era, you minimize handoffs. You hire and train “T-shaped” employees who can dive very deep in one discipline but also have other expertise and are able to handle more complex customer interactions.


Unlike some changes in business philosophy, the flywheel is not an all-or-nothing proposition. Any tactical change that reduces friction, or any organizational alignment of forces that optimizes for customer delight, will have a measurable impact on customer experience. Early successes will breed increasing support for a full flywheel approach.


I’m glad we’ve ditched the funnel. The time has come for the flywheel.





November 19, 2018

Grading The New York Times’ News Coverage, and Quick Takes on Random Things

Youngme Moon and Felix Oberholzer-Gee decide to “grade” The New York Times’ news coverage, before sharing their quick takes on other random things. They also share their After Hours picks for the week.


Download this podcast


For interested listeners:



AG Sulzberger Interview with Kara Swisher (Recode Decode Podcast)

You can email your comments and ideas for future episodes to: harvardafterhours@gmail.com. You can follow Youngme and Mihir on Twitter at: @YoungmeMoon and @DesaiMihirA.


HBR Presents is a network of podcasts curated by HBR editors, bringing you the best business ideas from the leading minds in management. The views and opinions expressed are solely those of the authors and do not necessarily reflect the official policy or position of Harvard Business Review or its affiliates.





Sisterhood Is Power

From the Women at Work podcast:

Listen and subscribe to our podcast via Apple Podcasts | Google Podcasts | RSS


Download the discussion guide for this episode

Join our online community


Download this podcast


It takes time and care to develop trusting relationships with the women we work with, particularly women who are different from us in some way. But the effort of understanding each other’s experiences is worth it, personally and professionally: We’ll feel less alone in our individual struggles and better able to push for equity.


We talk with professors Tina Opie and Verónica Rabelo about the power of workplace sisterhood. We discuss steps, as well as common snags, to forming deep and lasting connections with our female colleagues.


Guests:


Tina R. Opie is an associate professor of management at Babson College.


Verónica Caridad Rabelo is an assistant professor of management in the College of Business at San Francisco State University.


Resources:


● “Survey: Tell Us About Your Workplace Relationships,” by Tina R. Opie and Beth A. Livingston

● “Women: Let’s Stop Allowing Race and Age to Divide Us,” by Ancella Livers and Trudy Bourgeois

● “How Managers Can Promote Healthy Discussions About Race,” by Kira Hudson Banks

● “How Managers Can Make Casual Networking Events More Inclusive,” by Ruchika Tulshyan


Sign up for the Women at Work newsletter.


Fill out our survey about workplace experiences.


Email us here: womenatwork@hbr.org


Our theme music is Matt Hill’s “City In Motion,” provided by Audio Network.





Why “Many-Model Thinkers” Make Better Decisions

“To be wise you must arrange your experiences on a lattice of models.”

— Charlie Munger


Organizations are awash in data — from geocoded transactional data to real-time website traffic to semantic quantifications of corporate annual reports. All these data and data sources only add value if put to use. And that typically means that the data is incorporated into a model. By a model, I mean a formal mathematical representation that can be applied to or calibrated to fit data.


Some organizations use models without knowing it. For example, a yield curve, which compares bonds with the same risk profile but different maturity dates, can be considered a model. A hiring rubric is also a kind of model. When you write down the features that make a job candidate worth hiring, you’re creating a model that takes data about the candidate and turns it into a recommendation about whether or not to hire that person. Other organizations develop sophisticated models. Some of those models are structural and meant to capture reality. Other models mine data using tools from machine learning and artificial intelligence.
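
The hiring rubric example is easy to make concrete. This sketch is purely illustrative; the features, weights, and threshold are invented, not drawn from any real rubric:

RUBRIC_WEIGHTS = {"experience": 0.4, "skills": 0.4, "references": 0.2}
HIRE_THRESHOLD = 3.5  # on a 1-5 scale

def recommend(candidate):
    """Turn per-feature scores (1-5) into a hire/pass recommendation."""
    total = sum(RUBRIC_WEIGHTS[feature] * score for feature, score in candidate.items())
    return ("hire" if total >= HIRE_THRESHOLD else "pass", round(total, 2))

print(recommend({"experience": 4, "skills": 5, "references": 3}))  # ('hire', 4.2)
print(recommend({"experience": 3, "skills": 3, "references": 2}))  # ('pass', 2.8)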


The most sophisticated organizations — from Alphabet to Berkshire Hathaway to the CIA — all use models. In fact, they do something even better: they use many models in combination.


Without models, making sense of data is hard. Data helps describe reality, albeit imperfectly. On its own, though, data can’t recommend one decision over another. If you notice that your best-performing teams are also your most diverse, that may be interesting. But to turn that data point into insight, you need to plug it into some model of the world — for instance, you may hypothesize that having a greater variety of perspectives on a team leads to better decision-making. Your hypothesis represents a model of the world.


Though single models can perform well, ensembles of models work even better. That is why the best thinkers, the most accurate predictors, and the most effective design teams use ensembles of models. They are what I call many-model thinkers.


In this article, I explain why many models are better than one and also describe three rules for how to construct your own powerful ensemble of models: spread attention broadly, boost predictions, and seek conflict.


The case for models


First, some background on models. A model formally represents some domain or process, often using variables and mathematical formulas. (In practice, many people construct more informal models in their heads, or in writing, but formalizing your models is often a helpful way of clarifying them and making them more useful.) For example, Point Nine Capital uses a linear model to sort potential startup opportunities based on variables representing the quality of the team and the technology. Leading universities, such as Princeton and Michigan, apply probabilistic models that represent applicants by grade point average, test scores, and other variables to determine their likelihood of graduating. Universities also use models to help students adopt successful behaviors. Those models use variables like changes in test scores over a semester. Disney used an agent-based model to design parks and attractions. That model created a computer rendition of the park, complete with visitors, and simulated their activity so that Disney could see how different decisions might affect how the park functioned. The Congressional Budget Office uses an economic model that includes income, unemployment, and health statistics to estimate the costs of changes to health care laws.


In these cases, the models organize the firehose of data. These models all help leaders explain phenomena and communicate information. They also impose logical coherence, and in doing so, aid in strategic decision making and forecasting. It should come as no surprise that models are more accurate as predictors than most people. In head-to-head competitions between people who use models and people who don’t, the former win, and typically do so by large margins.


Models win because they possess capabilities that humans lack. Models can embed and leverage more data. Models can be tested, calibrated, and compared. And models do not commit logical errors. Models do not suffer from cognitive biases. (They can, however, introduce or replicate human biases; that is one of the reasons for combining multiple models.)


Combining multiple models


While applying one model is good, using many models — an ensemble — is even better, particularly in complex problem domains. Here’s why: models simplify. So, no matter how much data a model embeds, it will always miss some relevant variable or leave out some interaction. Therefore, any model will be wrong.


With an ensemble of models, you can make up for the gaps in any one of the models. Constructing the best ensemble requires thought and effort. As it turns out, the most accurate ensembles do not consist of the highest-performing individual models. You should not, therefore, run a horse race among candidate models and choose the four top finishers. Instead, you want to combine diverse models.
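
A tiny numeric illustration, with contrived numbers, shows why: averaging helps only when the models’ errors point in different directions.

truth = 100

# Three strong but similar models share the same blind spot...
similar = [110, 111, 112]  # each one off by 10-12 in the same direction
# ...while a mix of individually weaker but diverse models errs both ways.
diverse = [88, 115, 97]

def ensemble_error(forecasts):
    average = sum(forecasts) / len(forecasts)
    return abs(average - truth)

print(ensemble_error(similar))  # 11.0 -> correlated errors do not cancel
print(ensemble_error(diverse))  # 0.0  -> offsetting errors cancel out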


For decades, Wall Street firms have used models to evaluate investment risk. Risk takes many forms. In addition to risk from financial market fluctuations, there are risks from geopolitics, climatic events, and social movements such as Occupy Wall Street, not to mention risks from cyber threats and other forms of terrorism. A standard risk model based on stock price correlations will not embed all of these dimensions. Hence, leading investment banks use ensembles of models to assess risks.


But what should that ensemble look like? Which models does one include, and which does one leave out?


The first guideline for building an ensemble is to look for models that focus attention on different parts of a problem or on different processes. By that I mean, your second model should include different variables. As mentioned above, models leave stuff out. Standard financial market models leave out fine-grained institutional details of how trades are executed. They abstract away from the ecology of beliefs and trading rules that generate price sequences. Therefore, a good second model would include those features.


The mathematician Doyne Farmer advocates agent-based models as a good second model. An agent-based model consists of rule based “agents” that represent people and organizations. The model is then run on a computer. In the case of financial risk, agent-based models can be designed to include much of that micro-level detail. An agent-based model of a housing market can represent each household, assigning it an income and a mortgage or rental payment. It can also include behavioral rules that describe conditions when the home’s owners will refinance and when they will declare bankruptcy. Those behavioral rules may be difficult to get right, and as a result, the agent-based model may not be that accurate — at least at first. But, Farmer and others would argue that over time, the models could become very accurate.


We care less about whether agent-based models would outperform standard models than about whether they will read signals the standard models miss. And they will. Standard models work on aggregates, such as the Case-Shiller indices, which measure changes in house prices. If the Case-Shiller index rises faster than income, a housing bubble may be likely. As useful as the index is, it is blind to distributional changes that hold means constant. If income increases go only to the top 1% while housing prices rise across the board, the index would look no different than if income increases were broad-based. Agent-based models would not be blind to the distributional changes. They would notice that people earning $40,000 must hold $600,000 mortgages. The agent-based model is not necessarily better. Its value comes from focusing attention where the standard model does not.
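
To make the contrast concrete, here is a toy agent-based sketch of the housing example. The income distribution and the mortgage rule are invented purely to show what a household-level model can see and an aggregate index cannot:

import random

random.seed(1)

# Each agent is a household with an income and a mortgage.
households = [{"income": random.lognormvariate(10.8, 0.6)} for _ in range(10_000)]
for h in households:
    # Invented rule: lower-income households carry proportionally larger
    # mortgages, the kind of distributional stress an aggregate misses.
    h["mortgage"] = h["income"] * (9 if h["income"] < 50_000 else 4)

# The aggregate ratio can look tame while a subgroup is deeply overextended.
aggregate = sum(h["mortgage"] for h in households) / sum(h["income"] for h in households)
stressed = sum(h["mortgage"] > 6 * h["income"] for h in households)
print(f"aggregate mortgage-to-income ratio: {aggregate:.1f}")
print(f"households with mortgage > 6x income: {stressed:,}")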


The second guideline borrows the concept of boosting, a technique from machine learning. Ensemble classification algorithms, such as random forests, consist of collections of simple decision trees. A decision tree classifying potential venture capital investments might say, “If the market is large, invest.” Random forests are a technique for combining multiple decision trees. And boosting improves the power of these algorithms by using data to search for new trees in a novel way: rather than looking for trees that predict with high accuracy in isolation, boosting looks for trees that perform well when the forest of current trees does not. In other words, look for a model that attacks the weaknesses of your current model.


Here’s one example. As mentioned, many venture capitalists use weighted attribute models to sift through the thousands of pitches that land at their doors. Common attributes include the team, the size of the market, the technological application, and timing. A VC firm might score each of these dimensions on a scale from 1 to 5 and then assign an aggregate score as follows:


Score = 10*Team + 8*Market size + 7*Technology + 4*Timing
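
In code, that model is nothing more than a weighted sum. A minimal sketch in Python, using the weights given above (the function and attribute names are mine):

WEIGHTS = {"team": 10, "market_size": 8, "technology": 7, "timing": 4}

def vc_score(pitch):
    """Score a pitch whose attributes are each rated on a 1-5 scale."""
    return sum(WEIGHTS[attribute] * pitch[attribute] for attribute in WEIGHTS)

pitch = {"team": 5, "market_size": 4, "technology": 3, "timing": 2}
print(vc_score(pitch))  # 10*5 + 8*4 + 7*3 + 4*2 = 111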


This might be the best model the VC can construct. The second-best model might use similar variables and similar weights. If so, it will suffer from the same flaws as the first model, which means that combining the two will probably not lead to substantially better decisions.


A boosting approach would take data from all past decisions and see where the first model failed. For instance, it may be that investment opportunities with scores of 5 out of 5 on team, market size, and technology do not pan out as expected. This could be because those markets are crowded. Each of the three attributes — team, market size, and workable technology — predicts well in isolation, but if one startup has all three, it is likely that others do as well, and a herd of horses tramples the hoped-for unicorn. The first model therefore would predict poorly in these cases. The idea of boosting is to go searching for models that do best specifically when your other models fail.
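
A bare-bones sketch of that search might look like the following. Real boosting algorithms reweight training examples rather than hard-filtering them, and every model and data point here is invented:

def pick_booster(first_model, candidates, labeled_cases):
    """Pick the candidate model that is most accurate where first_model fails."""
    failures = [(x, y) for x, y in labeled_cases if first_model(x) != y]
    if not failures:
        return None  # nothing to complement
    def accuracy_on_failures(model):
        return sum(model(x) == y for x, y in failures) / len(failures)
    return max(candidates, key=accuracy_on_failures)

def high_total_score(pitch):
    """First model: invest when team + market + technology scores are high."""
    return sum(pitch) >= 11

def crowded_market(pitch):
    """Candidate: perfect scores across the board suggest a crowded market."""
    return not all(score == 5 for score in pitch)

# (attribute scores, did the investment pan out?)
cases = [((5, 5, 5), False), ((5, 5, 5), False), ((4, 2, 5), True), ((3, 4, 4), True)]
best = pick_booster(high_total_score, [crowded_market], cases)
print(best.__name__)  # crowded_market: it wins exactly where the first model fails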


To give a second example, several firms I have visited have hired computer scientists to apply techniques from artificial intelligence to identify past hiring mistakes. This is boosting in its purest form. Rather than try to use AI to simply beat their current hiring model, they use AI to build a second model that complements their current hiring model. They look for where their current model fails and build new models to complement it.


In that way, boosting and attention share something in common: they both look to combine complementary models. But attention looks at what goes into the model — the types of variables it considers — whereas boosting focuses on what comes out — the cases where the first model struggles.


Boosting works best if you have lots of historical data on how your primary model performs. Sometimes, we don’t. In those cases, seek conflict. That is, look for models that disagree. When a team of people confronts a complex decision, it expects — in fact it wants — some disagreement. Unanimity would be a sign of group think. That’s true of models as well.


The only way the ensemble can improve on a single model is if the models differ. To borrow a quote from Richard Levins, the “truth lies at the intersection of independent lies.” It does not lie at the intersection of correlated lies. Put differently, just as you would not surround yourself with “yes men,” do not surround yourself with “yes models.”


Suppose that you run a pharmaceutical company and that you use a linear model to project sales of recently patented drugs. To build an ensemble, you might also construct a systems dynamics model as well as a contagion model. Say that the contagion model predicts similar long-term sales but a slower initial uptake, while the systems dynamics model leads to a much different forecast. If so, it creates an opportunity for strategic thinking. Why do the models differ? What can we learn from that, and how do we intervene?
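
A sketch of how one might surface that kind of disagreement automatically. Both forecasting models and the divergence threshold are invented for illustration:

import math

def linear_forecast(month, rate=100):
    """Steady sales growth: the same increment every month."""
    return rate * month

def contagion_forecast(month, peak=1200, speed=0.5):
    """Logistic adoption: slow initial uptake, then saturation near the peak."""
    return peak / (1 + math.exp(-speed * (month - 6)))

THRESHOLD = 0.25  # flag months where the forecasts differ by more than 25%

for month in range(1, 13):
    a, b = linear_forecast(month), contagion_forecast(month)
    gap = abs(a - b) / max(a, b)
    if gap > THRESHOLD:
        print(f"month {month}: linear={a:.0f}, contagion={b:.0f} -> investigate")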


In sum, models, like humans, make mistakes because they fail to pay attention to relevant variables or interactions. Many-model thinking overcomes the failures of attention of any one model. It will make you wise.





How Software Is Helping Big Companies Dominate

Andrew Brookes/Getty Images

Throughout the global economy, big companies are getting bigger. They’re more productive, more profitable, more innovative, and they pay better. The people lucky enough to work at these companies are doing relatively well. Those who work for the competition aren’t.


Policymakers have noticed. Antitrust and competition policy are seeing renewed interest, including recent hearings on the subject by the Federal Trade Commission. Headlines in publications ranging from The Nation to The Atlantic to Bloomberg warn of America’s “monopoly” problem with calls to break up big companies such as Google or Amazon or Facebook. “Imagine a day in the life of a typical American,” writes Derek Thompson in The Atlantic. “How long does it take for her to interact with a market that isn’t nearly monopolized?”


Antitrust deserves the attention it’s getting, and the tech platforms raise important questions. But the rise of big companies — and the resulting concentration of industries, profits, and wages — goes well beyond tech firms and is about far more than antitrust policy.


In fact, research suggests that big firms are dominating through their use of software. In 2011, venture capitalist Marc Andreessen declared that “software is eating the world.” Its appetizer seems to have been smaller companies.


What’s Driving Industry Concentration

Most industries in the U.S. have grown more concentrated in the past 20 years, meaning that the biggest firms in the industry are capturing a greater share of the market than they used to. But why?


Research by one of us (James) links this trend to software. Even outside the tech sector, the employment of more software developers is associated with a greater increase in industry concentration, and this relationship appears to be causal. Similarly, researchers at the OECD have found that markups — a measure of companies’ profits and market power — have increased more in digitally intensive industries. And academic research has found that rising industry concentration correlates with the patent intensity of an industry, suggesting “that the industries becoming more concentrated are those with faster technological progress.” For example, productivity has grown dramatically in the retail sector since 1990; inflation-adjusted sales per employee have grown by roughly 50%. Economic analysis finds that most of this productivity growth is accounted for by a few companies, such as Walmart, which used information technology to become much more productive. Greater productivity meant lower prices and faster growth, leading to increased industry dominance. Walmart went from a 3% share of the general merchandise retail market in 1982 to over 50% today.


All of this suggests that technology, and specifically software, is behind the growing dominance of big companies.


IT Does Matter

In 2003, then-HBR-editor Nick Carr wrote an article (and later a book) titled “IT Doesn’t Matter.” Carr took issue with the common assumption “that as IT’s potency and ubiquity have increased, so too has its strategic value.” That view was mistaken, he argued:


“What makes a resource truly strategic—what gives it the capacity to be the basis for a sustained competitive advantage—is not ubiquity but scarcity. You only gain an edge over rivals by having or doing something that they can’t have or do. By now, the core functions of IT—data storage, data processing, and data transport—have become available and affordable to all. Their very power and presence have begun to transform them from potentially strategic resources into commodity factors of production. They are becoming costs of doing business that must be paid by all but provide distinction to none.”


Carr distinguished between proprietary technologies and “infrastructural” ones. The former created competitive advantage, but the latter were more valuable when broadly shared, and so eventually became ubiquitous and ceased to be unique to any company. IT would create only temporary proprietary advantages, he predicted, citing Walmart as an example. Walmart is the country’s largest employer and largest company by revenue, and it reached that position through an operating model made possible by proprietary logistics software. But Carr believed that by the time of his writing in 2003, “the opportunities for gaining IT-based advantages are already dwindling” and that “Best practices are now quickly built into software or otherwise replicated.”


It didn’t turn out that way. Although rivals have tried to build their own comparable logistics software and vendors have tried to commoditize it, Walmart’s software acumen remains part of its competitive advantage — fueled now by a rich trove of data. While Walmart faces new challenges competing online, it has maintained its logistics advantage against many competitors such as Sears.


The “Full-Stack” Startup

This model, where proprietary software pairs with other strengths to form competitive advantage, is only becoming more common. Years ago, one of us (James) started a company that sold publishing software. The business model was to write the software and then sell licenses to publishers. That model still exists, including in online publishing where companies like Automattic, maker of the open source content management system WordPress, sell hosting and related services to publishers. One-off licenses have given way to monthly software-as-a-service subscriptions, but this model still fits with Carr’s original thesis: software companies make technology that other companies pay for, but from which they seldom derive unique advantage.


That’s not how Vox Media does it. Vox is a digital publishing company known, in part, for its proprietary content management system. Vox does license its software to some other companies (so far, mostly non-competitors), but it is itself a publisher. Its primary business model is to create content and sell ads. It pairs proprietary publishing software with quality editorial to create competitive advantage.


Venture capitalist Chris Dixon has called this approach the “full-stack startup.” “The old approach startups took was to sell or license their new technology to incumbents,” says Dixon. “The new, ‘full stack’ approach is to build a complete, end-to-end product or service that bypasses incumbents and other competitors.” Vox is one example of the full-stack model.


The switch from the software-vendor model to the full-stack model shows up in government statistics. Since 1998, the share of firm spending on software that goes to pre-packaged software (the vendor model) has been declining. Over 70% of firms’ software budgets now goes to code developed in-house or under custom contracts. And the amount firms spend on proprietary software is huge — $250 billion in 2016, nearly as much as they invested in physical capital net of depreciation.


How Big Companies Benefit

Clearly, proprietary software is providing some companies advantage and the full-stack model is dominating the software-vendor model. The result is that large firms are gaining market share. But to explain that, one needs to explain why some companies are so much better at developing software than others and why their innovations don’t seem to be diffusing to their smaller competitors the way Carr thought they inevitably would.


Economies of scale are certainly part of the answer. Software is expensive to build but relatively cheap to distribute; larger companies are better able to afford the up-front expense. But these “supply-side economies of scale” can’t be the only answer or else vendors, who can achieve large economies of scale by selling to the majority of players in the market, would dominate. Network effects, or “demand-side economies of scale,” are another likely culprit. But the fact that the link between software and industry concentration is pervasive outside of the tech industry — where companies are less likely to be harnessing billions of users — suggests network effects are only part of the story.


Part of the explanation for rising industry concentration, then, seems to hinge on the fact that software is more valuable for firms in combination with other industry-specific capabilities. These are often referred to as “intangible assets,” but it’s worth getting more specific than that.


Research suggests that the benefits of information technology depend in part on management. Well-managed firms get more from their IT investments, and big firms tend to be better managed. There are other “intangible” assets that differentiate leading firms, and which can be difficult or costly to replicate. A senior executive who has worked at a series of leading enterprise software firms recently told one of us (Walter) that a company’s ability to get more from an average developer depended on successfully setting up “the software to make software” — the tools, workflows, and defaults that allow a programmer to plug in to the company’s production system without having to learn an endless number of new skills.


Patents and copyright also make it harder for software innovations to spread to other companies, as do noncompete agreements that keep employees from easily switching jobs. But one of the biggest barriers to diffusion — and therefore one of the biggest sources of competitive advantage for the firms that excel at software — comes down to how companies are organized.


Architectural Innovation

In 1990, Rebecca Henderson, now a professor at Harvard Business School, published a paper that provides a theoretical basis for the success of full-stack startups. At the time, multiple thinkers were grappling with the question of why big, successful, cash-rich companies were sometimes unseated by new technologies. Incumbent companies aren’t necessarily bad at using new technologies, Henderson argued, based on her study of the photolithography industry. In fact, incumbents were great at using new technologies to improve individual components of their products. But when a new technology fundamentally changed the architecture of that product — the way everything fit together — the incumbent struggled.


Her point was that a company’s way of doing things is often deeply interconnected with the architecture of the products or services it creates. When the architecture changes, all the knowledge that was embedded in the organization becomes less useful, and the company’s way of doing things goes from advantage to disadvantage.


For example, Walmart’s competitive edge depended on having an organization and business model that took advantage of its logistical prowess by emphasizing Everyday Low Prices, large assortment, and rapid response to changes in tastes. Even though its larger competitors such as Sears spent heavily on IT, they could not compete effectively without making fundamental architectural changes. If all Walmart had done was apply IT to one component of the retail system — say, digitizing catalogs or bringing them online — Sears might have been in a better position to compete. But Walmart changed not only how supply chains, product decisions, and pricing worked, but how they related to each other. Sears’ entire existing way of doing things was suddenly a disadvantage.


As Dixon, the VC, clearly recognized, these architectural innovations can create openings for startups. “Before [Lyft and Uber] were started, there were multiple startups that tried to build software that would make the taxi and limo industry more efficient,” Dixon has noted. If Uber had merely created software for dispatching taxis, incumbents would have been well positioned to adopt it, according to Henderson’s theory. One “component” of the service would have been changed by technology (dispatching) but not the entire architecture of the service. But ridesharing startups like Uber and Lyft didn’t just make taxis more efficient; they fundamentally changed the way the different pieces of the system fit together.


Architectural innovation doesn’t necessarily result in startups displacing incumbents. It can also determine who prevails in a competition between larger, older firms. In November 2007, Forbes put the CEO of Nokia on its cover and asked, “Can Anyone Catch the Cell Phone King?” Apple had launched the iPhone just months before.


Why was Apple, a company with no prior experience in phones, able to overtake the cell phone king? Earlier this year, Harvard Business School professor Karim Lakhani put this question to a group of conference attendees. One of us (Walter) listened as the technology experts in the audience listed all the ways the iPhone was superior: touch screen, app store, web browser, and so on. Lakhani then provided the dates at which Nokia had offered those features: an app store in 2001, a touch screen in 2002, a web browser in 2006. Why, then, did Apple prevail?


Lakhani’s answer is that Apple had the right architecture to bring phones into the internet age. Apple and Nokia both had plenty of the intangible assets necessary to excel in the smartphone business, including software developers, hardware engineers, designers. But Apple’s structure and culture were already based around the combination of hardware and a software ecosystem to which third parties contributed. It already had experience building hardware, operating systems, and software development kits from its PC business. It had built a software platform to deliver content to mobile devices in the form of iTunes. Steve Jobs initially resisted letting developers build apps for the iPhone. But when he eventually gave in, the app store became the iPhone’s key advantage. And Apple was able to manage it because of its existing “architecture.”


Like any theory, architectural innovation can’t explain everything. If experience building operating systems and SDKs were so key, why didn’t Microsoft invent the winning smartphone? Apple’s particular acumen in product design clearly mattered, too. But architectural innovation helps explain why certain capabilities are so tough to replicate.


Spreading the Benefits of Software

The challenge for policymakers worried about industry concentration, markups, and the power of giant companies is to spread the benefits of the digital economy — of software — more broadly. Antitrust may be able to help in extreme cases, including in reining in the tech platforms and their ability to buy up competitors. But policymakers should also consider ways to help software and software capabilities diffuse throughout the economy. To some degree, economies of scale will simply increase the average size of firms, and that’s okay. But banning noncompetes would help employees spread their knowledge by moving between jobs. Reforming patents, which aren’t always necessary to protect software innovation and are abused by patent trolls to the detriment of nearly everyone, would help, too. Anything governments can do to encourage the use of open source software could help as well. For example, the French government mandates that public administrative bodies thoroughly review open source alternatives when revising or building new information technology, and that they use the savings realized to fund further open source development.


Encouraging startups is another promising avenue, as these firms are able to organize around software capabilities to take on incumbents. Doing so through public policy isn’t always easy, but government funding can help when done well, and at the state and city level policymakers can encourage the formation of technology clusters. These policies would pair well with more aggressive merger review, to ensure that promising startups are not all swallowed up by the incumbents they’re challenging.


For companies, the takeaway is more obvious. Even if you’re not in the software industry, there’s a good chance your success hinges on your ability not just to use but also to build software. Using vendors often still makes financial sense, of course. But consider what makes your company unique, and how software might further that advantage. Investing in proprietary solutions that complement your strengths might be a good idea, especially for medium and large companies and for growth startups.


A Cloud on the Horizon

There is some good news: research suggests that cloud computing is helping smaller, newer firms to compete. Also, some firms are unbundling their advanced capabilities. For example, Amazon now offers complete fulfillment services including two-day delivery to sellers, large and small, on its Marketplace. It may be that Carr was right in principle but just had the timing wrong. But we wouldn’t bet on it. Some aspects of software will be democratized, including perhaps some areas where companies now derive competitive advantage. But other opportunities will arise for companies to use software to their advantage. One in particular stands out: even when machine learning software is freely available, the datasets to make it valuable often remain proprietary, as do the models companies create based on them. Policy may be able to help level that playing field. But companies that don’t invest in software and data capabilities risk being left behind.





To Get More Done, Focus on Environment, Expectations, and Examples

JUSTIN TALLIS/Getty Images

In 2008, I was designing advertising products at Google. For the first time in my young career, I was going to lots of meetings, and my job had become as much about convincing, cajoling, and coordinating as it was about designing. My manager told me about a team that was working on the Google Help Forum. They needed a designer, and he thought the project was a good way for me to try my hand at a consumer-facing product. The project itself was unremarkable apart from one feature: my work had to be approved by Marissa Mayer.


Marissa was a VP personally responsible for reviewing and authorizing every change made to Google.com. She was a smart, passionate, forceful, and somewhat feared decision-maker who critiqued, gave direction, and (hopefully) approved proposals from various teams. During our meetings, I often envisioned an upside-down pyramid, where the time of many rested on the decisions of one. We were product managers, designers, and leaders. Each one of us would take Marissa’s decisions back to our team, make a plan, and get to work. Our team members were another layer in the pyramid. Their time also depended on her choices.


We all make choices, every day, about how to spend our time. Most of our decisions are small, but over time they add up, and eventually they become decisions about how we’re spending our lives — and our work lives. When managers are careless with their decisions, it creates big problems for their teams. But when they are deliberate and thoughtful, it can create opportunities and give their teams the time they need to do valuable work.


Below I’ve put together a list of tips to help leaders of all kinds be deliberate with their choices, based largely on my years advising startup founders on product, marketing, and management at Google Ventures — and my subsequent work studying and experimenting with personal time-management techniques for my book Make Time. They fall under three categories: the environment you create, the expectations that you have, and the example your choices and actions set.


Environment

1. Treat new tools as debt. Before you add a new product, process, or platform to your company, ask yourself if it’s worth it. There will always be new technologies and processes you can adopt — an app promising better communication, a service promising smarter collaboration. But these products don’t always deliver. And when you’re overeager about trying shiny new things, it can hurt your team more than it helps them. People may become bogged down incorporating a new tool into their workflow, or scattered while attempting to learn a new process. Of course these things can be useful if the timing is right and the strategy is solid, but they also come at a cost.


2. Block as a team. Blocking your calendar is a simple and defensible way to make time for the work that matters. You can supercharge this tactic by agreeing to block your calendar as a team. When everyone in a group or department has the same “do not schedule” blocks on their calendar, it’s much easier to spend that time focused on work.


3. Make your workplace a place for work. Ironically, most offices are not great for getting work done, and open floor plans deserve most of the blame. Moving walls may not be realistic, but you can change the default behavior of your team by instituting Library Rules. Jason Fried, co-founder and CEO of Basecamp (formerly 37signals) and co-author of Rework, has a brilliant suggestion: Swap one default (you can talk to anyone anytime) for a different default, one that everybody already knows (act like you’re in a library).


4. Keep it small. Large teams have more overhead than small ones. Complicated projects have more unknowns than simple ones. Long timelines encourage people to take on unnecessary work. This probably seems obvious, but my experience is that most leaders make things bigger than they need to be. Keep teams, projects, and timelines as small as possible.


Expectations

5. Reward the right behaviors. The 21st-century workplace is full of rewards for long hours and fast responses: compliments, promotions, and cultural badges of honor. If you want to get better, more valuable work from your team, think about which behaviors you reward — even if those rewards are small and unconscious. Instead of thanking someone who promptly replies to an after-hours email, encourage them to write a thoughtful response while at the office. Rewarding people who spend their time productively will encourage team members to practice that behavior, and discourage the notion that overwork is better work.


6. Have a contact contract. We have so many ways to keep in touch at work — writing emails, sending chats, scheduling meetings, hopping on calls. Which form of communication is the most appropriate, and when? You can help your team decide by having an open discussion about everyone’s preferences and then making guidelines that work for the majority. Think about timeliness, thoughtfulness, interruption, and synchronicity. The decisions you come to don’t have to be a literal contract, but they should create an understanding about when and how to communicate.


7. Don’t ask for updates. Nothing triggers anxiety like an email from the boss late in the day: “Hey, can you send me a quick update on Project Alpha?” This kind of message appears urgent — even if it’s not — and it will likely take time for your employee to respond. They may have to run numbers or ask collaborators for updates. A better way to keep tabs on projects is to ask your team for summaries. Explain to them that summaries come at the end of a project or mark a milestone, and include the results, the lessons learned, and what needs to happen next. It’s a semantic difference, but it’s significant. If you set clear deadlines, your team can anticipate when a summary is due and plan updates around the data you want to see.


8. Be mindful of what you say, because everyone’s listening. When leaders make careless comments or suggestions, they can unintentionally change the workflow of their teams. As an employee, I’ve seen this happen many times, but my favorite example comes from Fried and his co-author, David Heinemeier Hansson, in their book about productivity, It Doesn’t Have to Be Crazy at Work: “It takes great restraint as the leader not to keep lobbing ideas at everyone else. Every such idea is a pebble that’s going to cause ripples when it hits the surface. Throw enough pebbles in the pond and the overall picture becomes as clear as mud.” Leaders need to recognize the weight their words carry, and practice speaking with thoughtful intention.


9. Don’t expect consensus. Getting everyone to agree before moving forward with a decision can waste time if consensus is not realistic. In fact, a little conflict often inspires learning and innovation, especially on diverse, thoughtful teams. The key, then, is to collect input from everyone, consider your options, and then make a decision based on what you think is best given the information you have. Be transparent with your team about how you made the decision — what you considered, and why — and set time aside to answer questions. People should walk away with a clear understanding of your choice and how it affects their work. This will save you time later on.


Examples

10. Turn off the green dot. Your decisions about how you spend your time set the example for your employees. As a leader, you might want them to know you’re available when they need you — but if being logged in and responsive at all times becomes your default, it might become theirs too. Projecting this kind of presence (whether in person or in the form of a logged-in “green dot”) sends the message that it’s okay for people to interrupt you whenever they need you, or worse, that the company values the appearance of availability over the time and focus needed to do great work. The solution is to create boundaries. Be straightforward about your time, when you need to focus, and when you are free. A good option is to create “office hours” — periods when anyone can drop in or schedule time with you — and regular check-ins with direct reports. These meetings will allow you to give people your undivided attention when you’re available to do so.


11. Be thoughtful, not reactive. When leading new initiatives, take the time to thoughtfully write your ideas down and consider them. Try not to “think out loud” in meetings. Even if you are brainstorming with others, avoid making a decision on the spot. Give yourself the mental space you need to feel confident that the decision you make is the best path forward. This will save time down the road, and help your team avoid unnecessary roadblocks or last-minute changes. Ask: How can I make this — product, service, or company — better right now? What are the first steps?


12. Take real breaks. Leave the office early. Take a weekend getaway. Go on a long vacation. And when you do, tell your team you’ll be out of the office and offline. Designate people to make decisions while you’re out, or defer those decisions until you return. Real breaks can make you a better leader and a happier person, and they set the standard that people need, and deserve, time off.


If you’ve ever wished for better work, greater job satisfaction, or less stress for your team, you have the power to make those changes by rethinking the decisions you make about time. New behaviors have a funny way of becoming habits. What sounds crazy and new right now will seem normal and inevitable in a couple of years. Take these ideas as experiments you can run with, and start testing them tomorrow.




Published on November 19, 2018 07:00

What Kind of Happiness Do People Value Most?

Carol Yepes/Getty Images

Sure, everyone wants to be happy. But what kind of happiness do people want? Is it happiness experienced moment to moment? Or is it being able to look back and remember a time as happy? Nobel Prize winner Daniel Kahneman described this distinction as “being happy in your life” versus “being happy about your life.” Take a moment to ask yourself: which happiness are you seeking?


This might seem like a needless delineation; after all, a time experienced as happy is often also remembered as happy. An evening spent with good friends over good food and wine will be experienced and remembered happily. Similarly, an interesting project staffed with one’s favorite colleagues will be fun to work on and look back on.


But the two don’t always go hand in hand. A weekend spent relaxing in front of the TV will be experienced as happy in the moment, but that time won’t be memorable and may even usher in feelings of guilt in hindsight. A day at the zoo with one’s young children may involve many frustrating moments, but a singular moment of delight will make that day a happy memory. A week of late nights stuck at the office, while not fun exactly, will feel satisfying in hindsight if it results in a major achievement.


While happiness scholars have long grappled with which form of happiness should be measured and pursued, nobody has simply asked people which version of happiness they seek. But if we want to find ways to be happy, it may help to understand what type of happiness we truly want.


In a series of studies, recently published in The Journal of Positive Psychology, we directly asked thousands of people (ages 18 to 81) about their preference between experienced and remembered happiness. We found that people’s preferences differed according to the length of time they were considering — and according to their culture. For Westerners, the happiness most people said they wanted for the next day was different from the happiness they said they wanted for their lifetime, even though one’s days add up to one’s life. We found this interesting; if people make decisions by the hour, they may end up with a different version of happiness than what they say they want for their life.


In one study, we asked 1,145 Americans to choose between experienced happiness (“where you experience happiness on a moment-to-moment basis”) and remembered happiness (“where afterwards you will reflect back and feel happy”) for either a longer timeframe (i.e., their life overall or next year) or a shorter timeframe (i.e., their next day or hour). The majority of participants chose experienced happiness over remembered happiness when choosing for their life (79%) or their next year (65%). By contrast, participants split roughly evenly between the two when choosing for their next hour (49% chose experienced happiness) or their next day (48%). This pattern of results was not affected by individuals’ overall happiness, impulsivity, age, household income, marital status, or parental status.


After participants made their choices, we asked them to write a short paragraph explaining why. We found that those who favored experienced happiness mostly expressed a belief in carpe diem: a philosophy that one should seize the present moment because the future is uncertain and life is short. On the other hand, participants’ explanations for choosing remembered happiness ranged from a desire for a longer lasting happiness, to a nostalgic treasuring of memories, to the motivation to achieve in order to feel productive and proud.


So people became more philosophical when asked to consider longer time periods like their life overall, and they reported wanting more happiness experienced in the moment. But when they thought about the next day or hour, it was as though a Puritan work ethic emerged — more people seemed to be willing to forfeit those moments of happiness, to put the work in now to be able to look back later and feel happy. This willingness is necessary, of course, during certain periods of life. But defaulting to it too often may lead to missing out on experiencing happiness. Those unseized moments add up, and together they may go against what many believe constitutes a happy life.


We conducted a few more studies to test the robustness of our results. In one study, we gave people different definitions of remembered happiness to see if a particular portrayal was driving the result. In another, we varied when the hour under consideration would occur (“one hour today” vs. “one hour toward the end of your life”) to see whether imminence, and perhaps impatience, played a role in people’s preferences. In both cases, these treatments didn’t change the pattern we saw: when choosing for their life, most people chose experienced happiness over remembered happiness; but when choosing for an hour, half chose remembered happiness.


Last, we wanted to test whether the pattern we saw among all of our American participants generalized to other cultures. We presented the same choice between experienced and remembered happiness, for either their next hour or for their life, to approximately 400 people in other Western countries (England and the Netherlands) and 400 in Eastern countries (China and Japan).


Like Americans, when choosing for their life, the majority of Europeans (65%) chose experienced happiness over remembered happiness; but when choosing for their next hour, the Puritan work ethic appeared even more strongly, with a majority (62%) choosing remembered happiness over experienced happiness.


In contrast, Easterners’ preferred happiness persisted across timeframes. The majority of Easterners chose experienced happiness over remembered happiness whether choosing for their life (81%) or their next hour (84%). Why this consistency? We believe that participants in China and Japan were clearer in their preference for experienced happiness because of the long religious tradition in Eastern cultures of teaching mindfulness and appreciation of the present moment.


Our studies asked thousands of individuals which of two types of happiness — experienced or remembered — they preferred. We found that the answer depends on whether people are considering the short pieces of their life or their life overall, and where they’re from. Though the pursuit of happiness is so fundamental as to be called an inalienable right, the particular form of happiness individuals pursue is surprisingly malleable.


It’s important to note that while this research helps us understand people’s beliefs about which happiness is preferable, it does not prescribe which form of happiness would be better to pursue. But these results do reveal that Westerners who plan their lives by the day or the hour will likely end up with a different version of happiness than the one they themselves believe makes a happy life. We’re all busy, and our drive to achieve often leads us to turn down opportunities to feel happy in the moment. But if you believe you want a life of happiness experienced in the moment, think twice before trading those moments away.




Published on November 19, 2018 06:00

How Geisinger Health System Reduced Opioid Prescriptions

The devastating opioid epidemic in the U.S. is a crisis that was created, in part, by healthcare itself, as prescriptions for pain-relieving medications rapidly increased in the 2000s. Now, healthcare is at the forefront of trying to fix the problem. At Geisinger, the healthcare system where we work, which serves more than 1.5 million patients in Pennsylvania and New Jersey, we are taking a multifaceted approach and seeing a big impact. By combining data-driven assessments, targeted engagement of high prescribers, EHR-based interventions, and pharmacist support in care management, Geisinger has dramatically reduced opioid prescribing. In addition, programs promoting safe medication disposal have reduced the number of leftover opioids in medicine cabinets, helping to stem opioid abuse in the surrounding communities.


Data-driven assessment


Using our robust data archive system, which captures electronic health record data, medical and prescription claims, and other information, we create dashboards for leadership, operational teams, and clinicians that reveal population-level trends and identify patients and clinicians for targeted interventions. For example, in 2012 we launched a controlled-substance monitoring dashboard to better understand the use of controlled substances in our system, with a focus on improving pain management. This dashboard, updated in real time, displays many types of information at the population and patient level, including counts of patients prescribed opioids, patients co-prescribed other controlled substances, patient use of naloxone, visits to emergency departments, medication use agreements, location heat maps, and other data that help us identify gaps in care and opportunities for improvement.
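To make this concrete, here is a minimal sketch of the kind of population-level rollup such a dashboard computes. The input file, table layout, and column names are hypothetical illustrations, not Geisinger's actual schema or code:

```python
import pandas as pd

# Hypothetical input: one row per active prescription, with columns
# patient_id and drug_class ("opioid", "benzodiazepine", "naloxone", ...).
rx = pd.read_csv("active_prescriptions.csv")

on_opioids = set(rx.loc[rx["drug_class"] == "opioid", "patient_id"])
on_benzos = set(rx.loc[rx["drug_class"] == "benzodiazepine", "patient_id"])
on_naloxone = set(rx.loc[rx["drug_class"] == "naloxone", "patient_id"])

summary = {
    "patients_on_opioids": len(on_opioids),
    # Opioid + benzodiazepine co-prescribing is a well-known risk signal.
    "co_prescribed_opioid_benzo": len(on_opioids & on_benzos),
    "opioid_patients_with_naloxone": len(on_opioids & on_naloxone),
}
print(summary)
```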


Engaging high prescribers


Using the dashboard, we quickly saw that clinicians were prescribing opioids at vastly different rates. In addition to developing pain-management programs for our broad clinician population, we targeted a small group of high prescribers. Generally, high-prescribing clinicians were unaware of their rates. After alerting them to their pattern, we began regular feedback sessions in which the prescriber, the practice-site medical director, and a specially trained chronic-pain pharmacist reviewed individual patient cases and provided guidance on dose-reduction strategies, the risks associated with certain co-prescribed agents such as benzodiazepines, alternative treatment options, clinical support tools, and referral options. The combination of prescribing transparency, education, and one-on-one counseling has helped to dramatically reduce the number of high-volume prescribers and cut the prescribing of new opioids by 44% over the past three years.
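As an illustration of how high prescribers might be surfaced from raw prescription records, consider the sketch below. The column names and the percentile cutoff are assumptions for illustration, not Geisinger's actual criteria:

```python
import pandas as pd

# Hypothetical input: one row per prescription, with columns
# prescriber_id and drug_class.
rx = pd.read_csv("prescriptions.csv")

# Count opioid prescriptions per clinician.
opioid_counts = rx[rx["drug_class"] == "opioid"].groupby("prescriber_id").size()

# Flag clinicians above the 90th percentile, an illustrative cutoff.
# A real analysis would also adjust for panel size, specialty, and case mix.
threshold = opioid_counts.quantile(0.90)
high_prescribers = opioid_counts[opioid_counts > threshold].sort_values(ascending=False)
print(high_prescribers)
```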


EHR-based interventions


Concurrent with our work with high-volume prescribers, we deployed population-level approaches to modifying prescriber behavior. We transitioned from paper to electronic prescriptions for controlled substances, which reduces the likelihood of prescription tampering, and linked our state prescription drug monitoring program to prescribing within our EHR, making it easier for providers to review a patient’s profile for other controlled substances before completing a new opioid order. Additionally, our EHR limits the number of days’ worth of medication allowed for any new opioid prescription, reducing the number of opioid doses dispensed per prescription.
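In effect, the days-supply cap is a simple validation rule at order entry. A minimal sketch, assuming a hypothetical seven-day cap (the article does not specify Geisinger's actual limit):

```python
# Assumed cap of seven days, for illustration only.
MAX_DAYS_SUPPLY_NEW_OPIOID = 7

def validate_new_opioid_order(days_supply: int, is_new_prescription: bool) -> bool:
    """Return True if the order passes the days-supply rule."""
    if is_new_prescription and days_supply > MAX_DAYS_SUPPLY_NEW_OPIOID:
        return False  # blocked: prescriber must shorten the supply
    return True

assert validate_new_opioid_order(5, is_new_prescription=True)       # allowed
assert not validate_new_opioid_order(30, is_new_prescription=True)  # blocked
```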


Pharmacist-supported pain management


Since 2011, Geisinger pharmacists specializing in pain management have worked closely with patients and the rest of the healthcare team to reduce patients’ dependence on opioids while still managing their pain. In addition to optimizing the medication regimen, pharmacists recommend activities, physical therapy, and behavioral health interventions that can help patients cope with pain and reduce dependence on opioids.


Now deployed in 15 primary care and specialty sites across Geisinger, these pharmacists actively manage pain medications for over 1,500 patients. Within 12 months of enrollment in pharmacist care, patients’ morphine milligram equivalent (MME) dose per day, a measure of how much prescription pain medication a patient takes daily, falls by half on average, from 50 MME to 25 MME, with 33% of patients tapering off opioids completely.
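For readers unfamiliar with the measure, daily MME converts each opioid to its morphine equivalent: dose strength per unit, times units per day, times an opioid-specific conversion factor. A minimal sketch using published CDC conversion factors (illustrative only, not Geisinger's code; verify factors against current guidance before any clinical use):

```python
# Conversion factors follow published CDC values for common oral opioids.
CONVERSION_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "codeine": 0.15,
}

def daily_mme(drug: str, mg_per_unit: float, units_per_day: float) -> float:
    """Daily morphine milligram equivalents for a single opioid regimen."""
    return mg_per_unit * units_per_day * CONVERSION_FACTORS[drug]

# Example: oxycodone 5 mg, four tablets a day -> 5 * 4 * 1.5 = 30 MME/day.
print(daily_mme("oxycodone", 5, 4))  # 30.0
```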


Medication disposal


Pain medications frequently go unfinished and are the most common type of drug to result in leftovers. These may then be sold, shared, or left to sit in medicine cabinets where family members or others may find them. According to the Pennsylvania Youth Survey, which is administered every two years to students from 6th through 12th grade, 39% of young people reporting drug use acquire prescription drugs from a family member in their household. Geisinger has led a community effort to raise awareness of the problem, hosting community hearings, producing public service announcements, engaging students in schools, and developing a robust media engagement campaign. Geisinger also leads efforts by local organizations to facilitate proper disposal of medications. Since 2014, Geisinger has collected over 15,000 pounds of medications from the community, largely through “take-back” services at its hospitals, pharmacies, and other community locations; an estimated 10% of these medications are controlled substances.


Good outcomes


Because of these efforts, we have seen a 30% decline in total opioids prescribed over the past two years. Among patients with chronic non-cancer pain, we’ve seen reduced healthcare utilization, including fewer emergency department visits.


We are now exploring additional approaches to reducing opioid prescribing while effectively managing pain, including using behavioral “nudges” to encourage appropriate prescribing and engaging patients through mobile technology to better monitor and manage pain. We encourage every health system to adopt these and other best-practice strategies while sharing new approaches to combating the opioid epidemic.




Published on November 19, 2018 05:05
