Marina Gorbis's Blog, page 786

October 26, 2018

A Way to Detect Major Medical Complications Sooner

Nick Veasey/Getty Images

A study of surgical patients across 168 hospitals showed that 23% of patients experienced a major complication during their stay. We like to think of complications as atypical events, but the unfortunate truth is that they are quite common. While most medical complications are identified and treated in a timely manner, not all are recognized soon enough. And delayed intervention means fewer treatment options and poorer outcomes.


My mother, Florence Rothman, was one of these patients whose complications were recognized too late; she died in 2003 in a hospital of avoidable causes. Her deterioration went unnoticed, and my brother and I have spent the last 15 years working to help prevent that next avoidable death.


There is one question that a clinician does not want to have to answer: “Why didn’t we see this patient’s problem sooner?” To deliver better care, doctors and nurses need to fully understand the patient’s current status in order to anticipate potential problems. Yet many hospitals in the United States rely on vital signs alone as status indicators and do not capitalize on the full complement of available patient information, especially nursing assessments — each nurse’s careful evaluation of his or her patient’s condition, conveniently recorded in the electronic medical record (EMR). With these data, I believe it is possible to implement an “unblinking eye”: a 24/7 evaluation of patient status that leverages patient data more completely. In an age of such tremendous technological innovation, health care must step up and change outdated processes, integrating and embracing all patient data to identify deterioration sooner and help save lives.


By making better use of patient data and predictive models, we can identify patients who are “smoldering” (in the words of one nurse): those who may appear fine but have unseen, damaging processes occurring internally. Sepsis can be such a process. It can tragically afflict otherwise well patients and is often identified too late, which is why it is the focus of a major worldwide effort to reduce its death toll.


But making a prediction is not enough. For a prediction to affect patient outcomes, it must meet the criteria that I term the prediction trifecta: It must be correct, timely, and provide new information.


Many prediction models in use at hospitals today rely on vital signs to satisfy the first criterion, “correct,” in identifying an impending crisis. However, while identifying a patient who is deteriorating is easily done with vital signs alone, it is far more difficult for such a system to meet the second criterion, “timely,” and provide a warning while there are still options available to halt the deterioration.


Systems relying on vital signs rarely meet the third criterion, “new”: providing information that is not already known to the physicians and nurses. While vital signs are important and valuable, there are three intrinsic shortcomings in focusing on them for early warning:



Vital signs tend to be lagging indicators. The human body is built to maintain equilibrium in its basic, vital operating parameters, and it does so by sacrificing functionality. Appetite fades. Digestion shuts down. Fluid builds in the extremities. All of these effects can occur while the vitals remain unalarming, so by the time the vitals fail and decompensation is apparent, it is often too late for effective intervention.
Vitals are generally available to a predictive model only when a nurse or a technician enters the data. The nurse is therefore ahead of the model, and a model based on vitals rarely provides “new” information. Any predictive model based on vitals alone is therefore unlikely to be either timely or new.
Further confounding vital-sign-based models, normal variation in patients who are not in trouble tends to swamp the signal from the few who are, leading to high rates of false positives, alert fatigue, and clinician tune-out.

The goal of achieving this prediction trifecta is not to replicate what we see in frantic, hospital TV dramas with nurses and physicians racing to a patient’s bedside with a “code blue.” That patient has about a 17% chance of going home, if he or she is even revived.


Clinicians need predictions that are meaningful and that arrive early in the deterioration process. Continuing with the sepsis example: in the lag between inception and treatment, it is critical to administer a bolus of fluids and IV antibiotics. One estimate holds that mortality rises with every hour of delayed treatment. Mortality rates for early-detected sepsis are about 5%, but if the condition is allowed to progress, mortality approaches 50%.


Nursing Assessments: The Key to Meaningful Prediction


Fortunately, there is another source of physiological data recorded periodically for every patient in the hospital’s EMR system. Nursing assessments, the structured evaluation of a patient’s physiological systems, can identify a patient’s deterioration from sepsis or other conditions and complications before it’s evident in vital signs or laboratory data. Yet, many current prediction models do not include this information.


Nurses conduct what’s termed a “head-to-toe” assessment on each patient, every day, every shift, in every hospital. It includes, for example, cardiac, respiratory, gastrointestinal, neurological, skin, psychosocial, and musculoskeletal assessments. For each evaluation, a nurse interacts with the patient to conduct and document a structured, hands-on review. If all the underlying factors of that assessment are normal, then the nurse deems it passed or met; if one or more of the factors is viewed as abnormal, then the assessment will be failed, or not met. For example, your skin is an organ. The skin nursing assessment reviews changes in skin texture, continuity, and color. In sepsis cases, skin failure may be an early indicator.


Every nursing assessment requires human, clinical judgment and provides both insight into the patient’s current state and predictive power to help identify patients who are at an elevated risk of an adverse outcome. Effective predictive models must include these leading indicators.
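
As a purely illustrative sketch (this is not the Rothman Index, and the weights and field names below are hypothetical), here is how failed nursing assessments could be folded into a deterioration-risk score alongside vital-sign flags, so that a patient with reassuring vitals but failed assessments still surfaces for review:

    # Illustrative only: hypothetical weights and field names, not the Rothman Index.
    VITAL_FLAGS = {"tachycardia": 2, "hypotension": 3, "fever": 1, "low_spo2": 3}
    ASSESSMENT_FLAGS = {"cardiac": 3, "respiratory": 3, "neurological": 3,
                        "gastrointestinal": 2, "skin": 2, "psychosocial": 1,
                        "musculoskeletal": 1}

    def deterioration_risk(abnormal_vitals, failed_assessments):
        """Sum hypothetical weights for abnormal vitals and failed ('not met')
        nursing assessments; a higher score means higher estimated risk."""
        score = sum(VITAL_FLAGS.get(v, 0) for v in abnormal_vitals)
        score += sum(ASSESSMENT_FLAGS.get(a, 0) for a in failed_assessments)
        return score

    # A patient with unremarkable vitals but failed skin and GI assessments
    # already carries a nonzero score -- the "leading indicator" effect.
    print(deterioration_risk(abnormal_vitals=[],
                             failed_assessments=["skin", "gastrointestinal"]))  # 4

The point of the sketch is the structure, not the numbers: nursing assessments enter the score before the vitals ever move.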


In an attempt to prevent what happened to our mother from happening to others, my brother Steven and I developed a tool called the Rothman Index (RI), which incorporates nursing assessments and can help provide an “unblinking eye” to support clinicians. Using the RI alongside nursing protocols, Yale New Haven Health System has reduced in-hospital mortality by 20% to 30%, with particular benefit in reducing sepsis mortality. Meaningful prediction, hitting the trifecta, has also helped the organization see a reduction in cost per sepsis case.


The inclusion of nursing data in predictive models makes profound sense: The nurse understands the patient’s condition. If we capture that nursing gestalt, and especially if we can do it electronically, we are on our way to reducing that critical time lag between inception of a possibly life-threatening complication, and action.


All models must be tested not only for their value in providing prediction, but for their value in providing meaningful prediction. It’s a concept that was inspired by one life, but as a standard practice, it can put us on the path to saving countless others.




Published on October 26, 2018 09:00

Do People Trust Algorithms More Than Companies Realize?

Westend61/Getty Images

Many companies have jumped on the “big data” bandwagon.  They’re hiring data scientists, mining employee and customer data for insights, and creating algorithms to optimize their recommendations.  Yet, these same companies often assume that customers are wary of their algorithms — and they go to great lengths to hide or humanize them.


For example, Stitch Fix, the online shopping subscription service that combines human and algorithmic judgment, highlights the human touch of their service in their marketing.  The website explains that for each customer, a “stylist will curate 5 pieces [of clothing].”  It refers to its service as “your partner in personal style” and “your new personal stylist” and describes its recommendations as “personalized” and “handpicked.”  To top it off, a note from your stylist accompanies each shipment of clothes.  Nowhere on the website can you find the term “data-driven,” even though Stitch Fix is known for its data science approach and is often called the “Netflix of fashion.”


It seems that the more companies expect users to engage with their product or service, the more they anthropomorphize their algorithms.  Consider how companies give their virtual assistants human names like Siri and Alexa.  And how the creators of Jibo, “the world’s first social robot,” designed an unabashedly adorable piece of plastic that laughs, sings, has one cute blinking eye, and moves in a way that mimics dancing.


But is it good practice for companies to mask their algorithms in this way? Are marketing dollars well-spent creating names for Alexa and facial features for Jibo? Why are we so sure that people are put off by algorithms and their advice?  Our recent research questioned this assumption.


The power of algorithms

First, a bit of background.  Since the 1950s, researchers have documented the many types of predictions in which algorithms outperform humans.  Algorithms beat doctors and pathologists in predicting the survival of cancer patients, occurrence of heart attacks, and severity of diseases.  Algorithms predict recidivism of parolees better than parole boards.  And they predict whether a business will go bankrupt better than loan officers.


According to anecdotes in a classic book on the accuracy of algorithms, many of these earliest findings were met with skepticism.  Experts in the 1950s were reluctant to believe that a simple mathematical calculation could outperform their own professional judgment.  This skepticism persisted, and morphed into the received wisdom that people will not trust and use advice from an algorithm.  That’s one reason why so many articles today still advise business leaders on how to overcome aversion to algorithms.


Do we still see distrust of algorithms today?

In our recent research, we found that people do not dislike algorithms as much as prior scholarship might have us believe.  In fact, people show “algorithm appreciation” and rely more on the same advice when they think it comes from an algorithm than from a person.  Across six studies, we asked representative samples of 1,260 online participants in the U.S. to make a variety of predictions. For example, we asked some people to forecast the occurrence of business and geopolitical events (e.g., the probability of North America or the EU imposing sanctions on a country in response to cyber attacks); we asked others to predict the rank of songs on the Billboard Hot 100; and we had one group of participants play online matchmaker (they read a person’s dating profile, saw a photograph of her potential date, and predicted how much she would enjoy a date with him).


In all of our studies, participants were asked to make a numerical prediction, based on their best guess.  After their initial guess, they received advice and had the chance to revise their prediction.  For example, participants answered: “What is the probability that Tesla Motors will deliver more than 80,000 battery-powered electric vehicles (BEVs) to customers in the calendar year 2016?” by typing a percentage from 0 to 100%.


When participants received advice, it came in the form of another prediction, which was labeled as either another person’s or an algorithm’s.  We produced the numeric advice using simple math that combined multiple human judgments.  Doing so allowed us to truthfully present the same advice as either “human” or “algorithmic.”  We incentivized participants to revise their predictions — the closer their prediction was to the actual answer, the greater their chances of receiving a monetary bonus.


Then, we measured how much people changed their estimate, after receiving the advice.  For each participant, we captured a percentage from 0% to 100% to reflect how much they changed their estimate from their initial guess.  Specifically, 0% means they completely disregarded the advice and stuck to their original estimate, 50% means they changed their estimate halfway toward the advice, and 100% means they matched the advice completely.
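
The percentages described here correspond to the conventional “weight of advice” measure; a minimal sketch of that calculation, assuming the standard formula, looks like this:

    def weight_of_advice(initial, advice, final):
        """Fraction of the distance from the initial estimate to the advice
        that the final estimate covers: 0.0 = ignored the advice,
        0.5 = moved halfway toward it, 1.0 = matched the advice exactly."""
        if advice == initial:            # no room to move; undefined case
            return None
        woa = (final - initial) / (advice - initial)
        return max(0.0, min(1.0, woa))   # assumes values are capped at 0-100%

    # Example: initial guess 40%, advice 60%, revised answer 50% -> 0.5
    print(weight_of_advice(40, 60, 50))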


To our surprise, we found that people relied more on the same advice when they thought it came from an algorithm than from other people.  These results were consistent across our studies, regardless of the different kinds of numerical predictions.  We found this algorithm appreciation especially interesting as we did not provide much information about the algorithm.  We presented the algorithmic advice this way because algorithms regularly appear in daily life without a description (called ‘black box’ algorithms); most people aren’t privy to the inner workings of algorithms that predict things affecting them (like the weather or the economy).


We wondered whether our results were due to people’s increased familiarity with algorithms today.  If so, age might account for people’s openness to algorithmic advice.  Instead, we found that our participants’ age did not influence their willingness to rely on the algorithm.  In our studies, older people used the algorithmic advice just as much as younger people.  What did matter was how comfortable participants were with numbers, which we measured by asking them to take an 11-question numeracy test.  The more numerate our participants (i.e., the more math questions they answered correctly on the 11-item test), the more they listened to the algorithmic advice.


Next, we wanted to test whether the idea that people won’t trust algorithms is still relevant today — and whether contemporary researchers would still predict that people would dislike algorithms.  In an additional study, we invited 119 researchers who study human judgment to predict how much participants would listen to advice when it came from a person versus an algorithm.  We gave the researchers the same survey materials that our participants had seen for the matchmaker study.  These researchers, consistent with what many companies have assumed, predicted that people would show aversion to algorithms and would trust human advice more — the opposite of our actual findings.


We were also curious about whether the expertise of the decision-maker might influence algorithmic appreciation.  We recruited a separate sample of 70 national security professionals who work for the U.S. government.  These professionals are experts at forecasting, because they make predictions on a regular basis.  We asked them to predict different geopolitical and business events and had an additional sample of non-experts (301 online participants) do the same.  As in our other studies, both groups made a prediction, received advice labeled as either human or algorithmic, and then were given the chance to revise their prediction to make a final estimate.  They were informed that the more accurate their answers, the better their chances of winning a prize.


The non-experts acted like our earlier participants – they relied more on the same advice when they thought it came from an algorithm than a person for each of the forecasts.  The experts, however, discounted both the advice from the algorithm and the advice from people.  They seemed to trust their own expertise the most, and made minimal revisions to their original predictions.


We needed to wait about a year to score the accuracy of the predictions, based on whether the event had actually occurred or not. We found that the experts and non-experts made similarly accurate predictions when they received advice from people, because they equally discounted that advice.  But when they received advice from an algorithm, the experts made less accurate predictions than the non-experts, because the experts were unwilling to listen to the algorithmic advice. In other words, while our non-expert participants trusted algorithmic advice, the national security experts didn’t, and it cost them in terms of accuracy. It seemed that their expertise made them especially confident in their forecasting, leading them to more or less ignore the algorithm’s judgment.


Another study we ran corroborates this potential explanation.  We tested whether faith in one’s own knowledge might prevent people from appreciating algorithms. When participants had to choose between relying on an algorithm or relying on advice from another person, we again found that people preferred the algorithm.  However, when they had to choose whether to rely on their own judgment or the advice of an algorithm, the algorithm’s popularity declined. Although people are comfortable acknowledging the strengths of algorithmic over human judgment, their trust in algorithms seems to decrease when they compare it directly to their own judgment. In other words, people seem to appreciate algorithms more when they’re choosing between an algorithm’s judgment and someone else’s than when they’re choosing between an algorithm’s judgment and their own.


Other researchers have found that the context of the decision-making matters for how people respond to algorithms.  For instance, one paper found that when people see an algorithm make a mistake, they are less likely to trust it, which hurts their accuracy.  Other researchers found that people prefer to get joke recommendations from a close friend over an algorithm, even though the algorithm does a better job.  Another paper found that people are less likely to trust advice from an algorithm when it comes to moral decisions about self-driving cars and medicine.


Our studies suggest that people are often comfortable accepting guidance from algorithms, and sometimes even trust them more than other people.  That is not to say that customers don’t sometimes appreciate “the human touch” behind products and services; but it does suggest that it may not be necessary to invest in emphasizing the human element of a process wholly or partially driven by algorithms. In fact, the more elaborate the artifice, the more customers may feel deceived when they learn they were actually guided by an algorithm. Google Duplex, which calls businesses to schedule appointments and make reservations, generated instant backlash because it sounded “too” human and people felt deceived.


Transparency may pay off.  Maybe companies that present themselves as primarily driven by algorithms, like Netflix and Pandora, have the right idea.




Published on October 26, 2018 08:00

Research Shows Immigrants Help Businesses Grow. Here’s Why.

Fernando Trabanco Fotografía/Getty Images

It has been well documented that immigrants contribute disproportionately to entrepreneurship. This is true both in the United States, where they represent 27.5% of all entrepreneurs but only 13% of the population, and in many other countries around the world.


On average, immigrants contribute twice as much to U.S. entrepreneurship as native-born citizens do. But immigrants aren’t just creating more businesses; they’re creating more successful ones. A Harvard Business School study comparing immigrant-founded businesses to native-founded ones showed that immigrant-founded companies perform better in terms of employment growth over three- and six-year time horizons. The authors of the study, William R. Kerr and Sari Pekkala Kerr, conclude that immigrant-led companies grow at a faster rate and are more likely to survive long term than native-led companies are.


Why is this the case? Researchers are not completely sure, but as William Kerr has said, “The very act of someone moving around the world, often leaving family behind, might select those who are very determined or more tolerant of business risk.” It’s important to highlight that not all immigrants or non-immigrants are the same, and there is obviously a tremendous amount of variability between individuals. However, many of the qualities that would seem to make immigrants more likely to succeed in building their own businesses are reasons you should consider hiring them to help build yours.


A growth mindset

Success in today’s business environment requires having a “growth mindset.” A person with a growth mindset believes their talents are not fixed. They believe they can do more by working hard, coming up with good strategies, and taking input from others. Such people achieve more than individuals with a fixed mindset, who tend to think they were born with certain innate talents that are unlikely to change.


A concept closely related to the “growth mindset” is the “immigrant mindset.” People who are willing to uproot their lives in search of something better are the types of people who are determined to make change happen themselves. Migrating to a new country also takes a high level of confidence in one’s ability to change and a high tolerance for uncertainty. More important, immigrants believe in their ability to figure things out and adapt once they arrive.


Being unafraid of new challenges and proactively reaching for them is extremely important for long-term business survival. Those companies that do not continually innovate and adapt along with advances in technology and changes in society eventually see their products or services fade in importance. Meanwhile, competitors, or simply new and better ways of working, replace them. Growth demands that businesses view change as imperative, not optional. Immigrants, who are veterans of change, would appear to be likely to help businesses remain competitive and thrive.


Adaptability

It requires adaptation skills to survive, let alone to thrive, in a new place. When you’re in a brand new culture, and especially if you’re learning a new language, the need for change isn’t a one-off, but rather a continual daily requirement. This is why even immigrants who might have come from wealthy or privileged backgrounds in their home country tend to quickly lose any sense of entitlement. Adapting can be a painful and difficult process, one that takes place on an ongoing basis. It forces a reexamination of the familiar and requires a person to make changes to how they think and act.


Dharmesh Shah, founder and CTO at HubSpot and an immigrant to the United States, writes about many of the changes he made on an ongoing basis in order to fit in, from getting rid of his accent, to changing his appearance, and even temporarily changing his name to David.


Here too, immigrants may offer a benefit for employers. Businesses are increasingly finding that rapid adaptation is necessary for success in today’s competitive environment. Hiring immigrants may help you build the organizational muscle of adaptability that will enable your company to be more receptive to, and act upon, the continual change that is required of businesses today.


Diversity and inclusion

Immigrants usually improve a company’s ethnic and linguistic diversity, and they also bring a plethora of unique experiences, backgrounds, and knowledge to the workplace. And companies are paying attention to research finding that firms with more diverse people on staff have healthier financial performance, largely because non-homogenous teams tend to outperform teams with lots of similar people.


But hiring a more diverse workforce is only half the equation. Without giving people equal chances to participate and truly integrating them into all aspects of the business, teams won’t reach a state of high performance very quickly, and the unique aspects of individuals won’t be leveraged to the highest degree. This is where inclusivity comes in.


Immigrants know what it feels like to be an outsider. Throughout my career, I have noticed that the people on my teams who have either immigrated to a new country or spent extensive time living abroad are highly sensitized to the fact that others might not feel included. They tend to be more inclined to promote an inclusive way of working than employees without this experience. They are also more aware that others might contribute experiences different from their own, so they tend to be more willing to hear voices that might otherwise go unheard in a business environment. Because they have experienced first-hand what it is like to be different, they are also more likely to be in tune with the realities of discrimination, both the blatant kind and the more pervasive subtle kind. This, in turn, may make them eager to help prevent their colleagues from experiencing it.


Global readiness

One of the most frequently overlooked benefits that immigrants bring to a business is international experience. Knowledge of other cultures and languages might not seem critical for a business that isn’t yet selling outside its home country, but in order to keep growing, nearly every business hits a point at which it needs to expand beyond its borders. And today, with most businesses having an online presence, they are global from day one.


Most companies are not prepared to handle global business from day one. They orient their firm around the needs of their home market alone. And when they do go global, it’s usually a painful process filled with plentiful organizational learning and growing pains.


People who bring experience from a different country and cultural context may be more likely to prevent a company from having to deal with such pains, while accelerating the company’s organizational learning about how to become a global company. In my role at HubSpot, leading international expansion and strategy for the company, I’ve found that many of the employees who have immigration experience tend to think about potential international challenges much earlier. They’re not just thinking about the markets you’re in now and the customers you have today. They have a more global outlook on life itself, and they bring this perspective to their daily work. They design processes and do their work in a way that prevents global friction later on, as the business grows into new markets.


Integrate Cross-Border Experience into Your Business

Here are some practical ways to make sure that your company is recruiting adaptive people with a growth mindset and cross-cultural experiences:


Invest in mobility and immigration expertise. Candidates who have immigrated often require additional support to ensure compliance with laws and regulations, especially where visas and work requirements are concerned. Make sure your legal team is able to advise on the specifics in this area.


Add international or cross-cultural experience to your recruiting priorities. Clearly explain your priorities to your recruiting team. They can help add international experience as a desired quality in job descriptions, screening tools, and so on. You can also tell them to look for people who were born in your home country, but spent a good part of their lives living abroad or have other cross-cultural experience.


Flag people who know multiple languages. It’s not always easy to tell if someone came to your country from another just by looking at their resume, especially if they obtained higher education once they got here. Professional profiles, such as LinkedIn, enable you to filter by language to quickly find people with international experience. Also, consider adding language skills as a field in your existing HR systems, so that you can identify employees who already have this expertise even if their managers are unaware of it.


Keep an eye out for candidates with an adaptive mindset. You don’t have to be an immigrant to demonstrate many of the qualities that make immigrants successful in business. Give consideration to employees who don’t shy away from change and have a track record of choosing the foreign over the familiar. Look for people who have made major career pivots, have overcome unusual or significant challenges, or otherwise show signs of willingness to explore uncharted territory while adapting and thriving in the process.


Encourage employees to obtain international experience. If you have offices outside your home country, consider creating incentives for employees to spend more time in those offices. The expenses can add up, but nothing replaces the value of living and working in another country, however short the stay, in helping employees contribute more meaningfully to your business, especially if international business is a key part of fueling your overall global growth.




Published on October 26, 2018 07:00

To Get Your Team to Use Data, Demystify It

MEHAU KULYK/SCIENCE PHOTO LIBRARY/Getty Images

For some of your team members, the idea of using data to inform decision-making can feel intimidating. Maybe they don’t consider themselves to have strong analytical skills. Maybe they felt overwhelmed by their statistics course in college. Maybe they like to “go with their gut,” or simply dread the idea of wading through a ton of data. But it doesn’t have to be that way. If you can show your team that there are simple, straightforward ways to make a big impact with data, it will go a long way toward getting your employees to use data more often in their day-to-day decision-making.


Consider three examples. The first involves Billy Beane, front office executive for the Oakland Athletics and the subject of Michael Lewis’s book Moneyball, who transformed baseball using data. He didn’t do it using fancy new math, or even sophisticated statistical work. He did it by asking an important question: What kinds of players in the Major League draft typically go on to have the most successful professional careers? He used years of data to answer that question, and then drafted players with those attributes (e.g. those who were playing in college, drawing lots of walks, and so on).


Beane’s insight was not some kind of arcane statistical manipulation. It was much simpler. He recognized that he could predict who would succeed in the Major Leagues by studying who had succeeded in the Major Leagues. That’s just exploiting a pattern. In the same vein, the logic behind Moneyball can be applied to any business — there’s enormous potential to use data more powerfully without spending years studying statistics or using complicated algorithms. The essence of “big data” is much simpler: Ask an important question, find the data that might offer an answer, and figure out the pattern.
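
In that spirit, here is a minimal sketch, with made-up draft records and a single hypothetical attribute, of what “figure out the pattern” can mean in practice: group past outcomes by an attribute and compare the rates.

    # Hypothetical draft records: did the player attend college, and did he
    # eventually reach the major leagues?
    draftees = [
        {"college": True,  "reached_majors": True},
        {"college": True,  "reached_majors": True},
        {"college": True,  "reached_majors": False},
        {"college": False, "reached_majors": False},
        {"college": False, "reached_majors": True},
        {"college": False, "reached_majors": False},
    ]

    def success_rate(records, attribute, value):
        group = [r for r in records if r[attribute] == value]
        return sum(r["reached_majors"] for r in group) / len(group)

    print("College draftees:", success_rate(draftees, "college", True))       # ~0.67
    print("High school draftees:", success_rate(draftees, "college", False))  # ~0.33

With real data you would test many attributes over many years, but the operation itself is no more exotic than this.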


Example two is out of law enforcement. When police in Santa Cruz, California, claimed that they had “solved a crime before it happened,” it was not some futuristic, Orwellian crime fighting strategy. It was just a pattern. The Santa Cruz police used crime data to determine when and where crimes were happening most often. Then they sent more officers to these locations. One of these spots was a parking garage where there had been a large number of break-ins. Officers spotted two suspicious-looking women lurking near a car. One of the women was wanted on an outstanding warrant; the other was carrying drugs. Police arrested them both — ostensibly before they broke into the car.


Did the police really solve a crime before it happened? The question misses the point. The Santa Cruz police used data to spot crime patterns and then sent officers to the places where they would have the most impact. That’s not mathematical genius; it’s just clever use of data.
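
That “clever use of data” can be sketched in a few lines, with an invented incident log standing in for the real crime data: count past incidents by location and time block, and staff the top cells.

    from collections import Counter

    # Invented incident log: (location, hour of day) for past break-ins.
    incidents = [("parking garage A", 22), ("parking garage A", 23),
                 ("parking garage A", 21), ("riverfront lot", 2),
                 ("main st shops", 14), ("parking garage A", 22)]

    # Bucket hours into rough shifts and count incidents per (location, shift).
    def shift(hour):
        return "night" if hour >= 20 or hour < 4 else "day"

    hotspots = Counter((loc, shift(h)) for loc, h in incidents)
    for cell, count in hotspots.most_common(3):
        print(cell, count)   # ('parking garage A', 'night') tops the list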


Lastly, when the retailer Target wanted a tool for reaching pregnant shoppers (who tend to develop strong retail loyalties during pregnancy), analysts developed a “pregnancy prediction index.” This was neither as hard nor as intrusive as it would appear. Target already had the relevant data. The retailer has a baby gift registry for expectant mothers — women who had effectively told Target not only that they were pregnant, but when they were due. Analysts studied the shopping habits of these pregnant women  to discern what products they were more likely to buy than non-pregnant customers: baby wipes, unscented lotions, vitamins, and a handful of other products, some more obvious than others. The next step was just a logical leap: Women who begin buying these products are likely pregnant and can be targeted (pun intended) for pregnancy-related products and services. That is clever business, not statistical wizardry.
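
A rough sketch of that logic, using invented purchase rates rather than Target’s actual data: compute how much more often registry (known-pregnant) shoppers buy each product than shoppers overall, then score other customers by how many of those high-lift products appear in their baskets.

    # Invented purchase rates: fraction of shoppers buying each product.
    registry_rates = {"unscented lotion": 0.30, "baby wipes": 0.25,
                      "vitamins": 0.40, "soda": 0.20}
    overall_rates  = {"unscented lotion": 0.05, "baby wipes": 0.04,
                      "vitamins": 0.10, "soda": 0.22}

    # "Lift": how much more likely registry shoppers are to buy the product.
    lift = {p: registry_rates[p] / overall_rates[p] for p in registry_rates}
    predictive = {p for p, l in lift.items() if l >= 3}   # arbitrary threshold

    def pregnancy_signal(basket):
        """Count how many high-lift products appear in a shopper's basket."""
        return sum(1 for p in basket if p in predictive)

    print(predictive)                                           # lotion, wipes, vitamins
    print(pregnancy_signal(["baby wipes", "vitamins", "soda"]))  # 2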


Of course, Target faced significant blowback when its pregnancy prediction index figured out that a high school girl was pregnant before her father did. (A series of pregnancy-related coupons from Target prompted the dad to ask some pointed questions of his daughter, according to a New York Times story on Target’s data analytics.) This is a good time to remind your team that big data requires judgment, too. Some patterns are better left private.


Most of the time, however, customers benefit enormously from well-targeted information. We want recommendations for products we are likely to enjoy, discounts for services we use, and customer service that has been refined by constant feedback. And your employees have the power to deliver these benefits, even if they consider themselves “non-math types.”


That’s because the revolution in data analysis over the last 15 years has been made possible by three things: digital data, cheap computing power, and connectivity. Fifty years ago, baseball teams had loads of statistics on player performance — but they were written in pencil, in binders filed away in dank storerooms. The same was true of crime data, credit card receipts, and customer satisfaction surveys. We had the information — but there was no easy way to compile and analyze it. The patterns were there. We just couldn’t see them, at least not cheaply or easily.


Then along came the personal computer, digitalization, and the internet. Suddenly, we could suck all that information out of basement storerooms and moldy ledgers and see the patterns lurking there — free, and within seconds. Once data became more valuable, we began collecting more of it: with loyalty programs, on social media, from scanner data, and so on.


The gating factor now is imaginative questions, not proper computations. Anyone can learn to ask great questions. Take this one: What kinds of people turn out to be the best at sales? That’s just the Billy Beane question again, only for sales rather than for baseball. You can use data to identify the attributes that define top performers, and then hire people with those attributes.


To be clear, it’s hard to measure and quantify important skills like “listening”; the data must be collected over a long enough period to separate luck from skill; and so on. Still, the process of putting data against questions can burst myths and overcome stereotypes. This was one of Billy Beane’s first insights. His scouts were looking for talent based on “rules of thumb” that weren’t borne out by the data. For example, they were enamored of pitchers who could throw superfast, even as decades of data showed that accuracy matters more.


There are a few other caveats to keep in mind as well. First, big data tends to produce patterns, but it is not deterministic. Billy Beane is going to draft some duds; not every customer buying baby wipes and unscented lotion is pregnant.


Second, all data are inherently backward-looking. By definition, they come from the past. Because of this, data analytics will miss inflection points. Customers cannot provide meaningful feedback on a product they can’t imagine.


Third, sloppy thinking is just as dangerous with data as it is without — maybe even more so. Yes, customers who call a complaint line report low levels of satisfaction with the service they get; it is, after all, the complaint line. The right questions to ask are: 1) Are customers more satisfied (even if still angry) at the end of the call than they were at the beginning? 2) Which employees have the most success in improving customer satisfaction? 3) What techniques do those successful employees use?


It’s true that basic statistics are what give the patterns their power, and knowledge of basic statistics is an important skill to have. Still, I would rather teach a savvy marketing person how to do basic data analytics than try to get a statistician to think about improving the customer experience. Interesting answers are out there. People who care about those answers just need to go looking for them, maybe with a little bit of prodding. The easiest way to get all of your employees excited about using data is to demystify what is actually going on.




Published on October 26, 2018 06:00

The Role of a Manager Has to Change in 5 Key Ways

pchyburrs/Getty Images

“First, let’s fire all the managers,” said Gary Hamel almost seven years ago in Harvard Business Review. “Think of the countless hours that team leaders, department heads, and vice presidents devote to supervising the work of others.”


Today, we believe that the problem in most organizations isn’t simply that management is inefficient; it’s that the role and purpose of a “manager” haven’t kept pace with what’s needed.


For almost 100 years, management has been associated with the five basic functions outlined by management theorist Henri Fayol: planning, organizing, staffing, directing, and controlling.


These have become the default dimensions of a manager. But they relate to pursuing a fixed target in a stable landscape. Take away the stability of the landscape, and one needs to start thinking about the fluidity of the target. This is what’s happening today, and managers must move away from the friendly confines of these five tasks. To help organizations meet today’s challenges, managers must move from:


Directive to instructive: As robots driven by artificial intelligence (AI) take on more tasks, such as finishing construction work or helping legal professionals manage invoices more efficiently, there will be no need for a supervisor to direct people doing such work. This is already happening in many industries — workers are being replaced with robots, especially for work that is more manual than mental, more repetitive than creative.


What managers will need to do instead is think differently about the future in order to shape the impact AI will have on their industry. This means spending more time exploring the implications of AI, helping others extend their own frontiers of knowledge, and learning through experimentation to develop new practices.


Jack Ma, co-founder of the Alibaba Group in China, recently said, “Everything we teach should be different from machines. If we do not change the way we teach, 30 years from now we will be in trouble.” Ma is referring to education in the broadest sense, but his point is spot on. Learning, not knowledge, will power organizations into the future; and the central champion of learning should be the manager.


Restrictive to expansive: Too many managers micromanage. They don’t delegate or let direct reports make decisions, and they needlessly monitor other people’s work. This tendency restricts employees’ ability to develop their thinking and decision making — exactly what is needed to help organizations remain competitive.


Managers today need to draw out everyone’s best thinking. This means encouraging people to learn about competitors old and new, and to think about the ways in which the marketplace is unfolding.


Exclusive to inclusive: Too many managers believe they are smart enough to make all the decisions without the aid of anyone else. To them, the proverbial buck always stops at their desks. Yet, it has been our experience that when facing new situations, the best managers create leadership circles, or groups of peers from across the firm, to gain more perspective about problems and solutions.


Managers need to be bringing a diverse set of thinking styles to bear on the challenges they face. Truly breakaway thinking gets its spark from the playful experimentation of many people exchanging their views, integrating their experiences, and imagining different futures.


Repetitive to innovative: Managers often encourage predictability — they want things nailed down, systems in place, and existing performance measures high. That way, the operation is fully justifiable, one that runs the same way year in and year out. The problem with this mode is that it leads managers to focus only on what they know — on perpetuating the status quo — at the expense of what is possible.


Organizations need managers to think much more about innovating beyond the status quo – and not just in the face of challenges. Idris Mootee, CEO of Idea Couture Inc., could not have said it better: “When a company is expanding, when a manager starts saying ‘our firm is doing great’, or when a business is featured on the cover of a national magazine – that’s when it’s time to start thinking. When companies are under the gun and things are falling apart, it is not hard to find compelling reasons to change. Companies need to learn that their successes should not distract them from innovation. The best time to innovate is all the time.”


Problem solver to challenger: Solving problems is never a substitute for growing a business. Many managers have told us that their number one job is “putting out fires,” fixing the problems that have naturally arisen from operating the business. We don’t think that should be the only job of today’s manager. Rather, the role calls for finding better ways to operate the firm — by challenging people to discover new and better ways to grow, and by reimagining the best of what’s been done before. This requires practicing more reflection — to understand what challenges to pursue, and how one tends to think about and respond to those challenges.


Employer to entrepreneur: Many jobs devolve into trying to please one’s supervisor. The emphasis on customers, competitors, innovations, marketplace trends, and organizational performance morphs too easily into what the manager wants done today — and how he or she wants it done. Anyone who has worked for “a boss” probably knows the feeling.


The job of a manager must be permanently recast from employer to entrepreneur. Being entrepreneurial is a mode of thinking, one that can help us see things we normally overlook and do things we normally avoid. Thinking like an entrepreneur simply means expanding your perception and increasing your action — both of which are important for finding new gateways for development. And this would make organizations more future-facing — more vibrant, alert, and playful — and open to the perpetual novelty that such thinking brings.


We want managers to become truly human again: to be people who love to learn and love to teach, who liberate and innovate, who include others in the process of thinking imaginatively, and who challenge everyone around them to create a better business and a better world. This will ensure that organizations do more than simply update old ways of doing things with new technology, and find ways to do entirely new things going forward.




Published on October 26, 2018 05:05

October 25, 2018

Why Leaders Don’t Embrace the Skills They’ll Need for the Future

amriphoto/Getty Images

Innovative. Agile. Collaborative. Bold.


Over the past year, we’ve been struck by how many times we’ve heard C-suite leaders use these words, or very similar ones, to describe the strengths they believe are critical to transforming their businesses, and to competing effectively in a disruptive era.


What’s equally striking is how difficult organizations are finding it to embed these qualities and behaviors in their people. That’s because the primary obstacle is invisible: the internal resistance that all human beings experience, often unconsciously, when they’re asked to make a significant change. Cognitively, it shows up as mindset — fixed beliefs and assumptions about what will make us successful and what won’t. Emotionally, it usually takes the form of fear.


The complexity of the challenges that organizations face is running far out ahead of the complexity of the thinking required to address them. Consider the story of the consultant brought in by the CEO to help solve a specific problem: the company is too centralized in its decision making. The consultant has a solution: decentralize. Empower more people to make decisions. And so it is done, with great effort and at great expense. Two years pass, the company is still struggling, and a new CEO brings in a new consultant. We have a problem, the CEO explains. We’re too decentralized. You can guess the solution.


The primary challenge most large companies now face is disruption, the response to which requires a new strategy, new processes, and a new set of behaviors. But if employees have long been valued and rewarded for behaviors such as practicality, consistency, self-reliance, and prudence, why wouldn’t they find it uncomfortable to suddenly embrace behaviors such as innovation, agility, collaboration, and boldness?


When we feel uncomfortable or stressed, we tend to double down on what has worked for us before. Overusing any quality will eventually turn into a liability. Too much prudence congeals into timidity. Overemphasizing practicality stifles imagination. Consistency turns into predictability.


Most of us tend to view opposites in a binary way. If one is good, the other must be bad. Through this lens, the only alternative to prudence is recklessness. If you’re not being practical, you’re being unrealistic. Both invite failure. Also, if you value a quality such as prudence, it’s easy to confuse its positive opposite — boldness — with its negative opposite — recklessness, which is precisely what prudence is designed to protect against.


What we don’t see is that it makes more sense to balance practicality and innovation, boldness and prudence, collaborativeness and self-reliance, agility and consistency — without choosing sides between them.


But it’s difficult to balance these qualities. We see this play out over and over in two contrasting styles of leadership. The challenging leader constantly pushes his people to stretch and grow, but under stress, he can be overwhelming, and even brutal. The caring leader makes people feel safe and valued, but may resist pushing them beyond their current comfort zones, and doesn’t always hold them accountable. The challenging leader tends to confuse caring with coddling, while the caring leader may feel challenging people is tantamount to cruelty.


This same phenomenon operates not just individually, but also organizationally. We worked with a venture capital firm that took pride in differentiating itself from competitors by building its culture around collegiality, care, and consensus. Sure enough, all voices were heard around decisions, and employees treated one another with consideration and respect. The problem was that these qualities were so overused that they prompted paralysis in decision making and an aversion to providing honest feedback, leaving employees feeling uneasy about where they stood.


It isn’t possible to truly transform a business without simultaneously transforming its people. This requires understanding and exploring the complex factors, both cognitive and emotional, that drive their behavior. Attention to people’s inner lives is rare for most companies, but we’ve found several moves that help make it possible.



Embrace intermittent discomfort. Our shared human instinct is to avoid pain at any cost, but growth requires pushing past our current comfort zone. To strengthen a bicep, you have to lift weights repeatedly, nearly to muscle failure. That’s what signals the brain to build more muscle fiber. The same is true of challenging ourselves to become more resilient emotionally, and less rigid and habitual cognitively.
Focus first on building the muscle of self-observation, individually and collectively across the organization. Self-observation is the capacity to step back from our thoughts and emotions under duress. We refer to this as the “Golden Rule of Triggers”: whatever you feel compelled to do, don’t. Instead, observe your internal experience with curiosity and detachment, as you might the action in a movie or the behavior of strangers. Rather than reacting, take a deep breath, and then ask yourself, “How would I behave here at my best?”
Design small, time-limited tests of the assumptions you hold about the negative consequences you imagine if you build a specific new behavior. Does setting aside specific times to think creatively and reflectively truly prevent you from getting urgent work accomplished, or might it lead to new ideas, more efficiency, and better prioritization? Does going out of your way to be appreciative require that you give up your high standards? Conversely, does providing tough feedback in real time have to feel unkind, or can it be delivered honestly as encouragement to grow?

Einstein was right that “we can’t solve our problems from the same level of thinking that created them.” Human development is about progressively seeing more. Learning to embrace our own complexity is what makes it possible to manage more complexity.




Published on October 25, 2018 11:00

HR Leaders Need Stronger Data Skills

Peter Dazeley/Getty Images

An old saying sums up the data skills of most HR professionals: “The shoemaker’s children go barefoot.”


In today’s tightening labor market, HR leaders must work relentlessly to develop and recruit people who advance digital transformation across their organizations. Yet most have struggled to advance their own digital competencies. This neglect has hindered their ability to leverage data into talent strategies that can help transform their businesses.


We base this claim about HR’s digital skills gap on the results of our latest global leadership survey. Co-produced by our three organizations, the survey polled nearly 28,000 business leaders across industries about the state and trajectory of leadership. Among the findings: on average, HR leaders lag far behind other professionals in their ability to operate in a highly digital environment and use data to guide business decisions.


It comes as no surprise that this skills gap has spurred a credibility gap between HR professionals and their colleagues. Only 11% of business leaders trust HR to use data to anticipate and help them fill their talent needs. When we last fielded the same survey three years prior, 20% of business leaders felt that way — still a low number, but nearly twice what it is today.


Finding ways to improve HR’s digital acumen and data skills can challenge even the most well-resourced companies. HR leaders can start by upskilling their teams in areas that impact two critical business outcomes: building bench strength and tying HR metrics to financial success. To achieve both, companies can support their HR leaders in taking these steps:


Forge internal partnerships. At most companies, other departments use data and technology in ways that HR could apply to their own work. For example, HR can work with marketing for guidance on search engine optimization (SEO), a skill that can help HR improve its recruitment efforts. They can also consider partnering with colleagues proficient in finance technology for guidance about blockchain, a technology capable of transforming how HR stores and verifies private employee data. Such internal collaborations may not only help HR attain new skills, but also help to foster a data-driven culture across the organization.


Map talent analytics to business outcomes. HR should learn how to tie its data about people to performance and business outcomes. This process must begin with gathering data about the skills, capabilities, and behaviors of the existing leaders and workforce, often done through assessments. For example, a hospital seeking to improve patient safety might look to HR to discover that the highest rates of patient safety are tied to nurse units where supervisors showed specific behaviors, such as demonstrating empathy. By collecting data on employee skills and experience and tying it to business outcomes, HR can highlight key areas of risk and opportunity for the company.
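
Here is a minimal sketch, with invented unit-level numbers, of what “tying people data to outcomes” can look like in practice: correlate an assessment score (say, supervisors’ empathy ratings) with a safety outcome across nurse units.

    # Invented unit-level data: supervisor empathy score (1-5) and patient
    # safety incidents per 1,000 patient-days for each nurse unit.
    empathy   = [4.5, 3.2, 4.8, 2.9, 3.8, 4.1]
    incidents = [1.1, 2.6, 0.9, 3.0, 1.8, 1.5]

    def pearson_r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # A strongly negative correlation would support the hypothesis that
    # higher-empathy supervisors run safer units (correlation, not causation).
    print(round(pearson_r(empathy, incidents), 2))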


Develop data visualization skills. Simply collecting data and analyses won’t help HR leaders advance their efforts unless they know how to leverage that data to influence others. One study found that when presenters supplemented their stories with visuals, audience members had around a 40% greater likelihood of taking the desired course of action versus those who received non-visual presentations. As such, HR should learn how to create graphical presentations of data. HR needs to get more proficient with sophisticated software programs such as Power BI, Tableau, or R Studio, all of which give visual context to data.
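
The same point holds for lightweight scripting tools. As a sketch (assuming Python with matplotlib rather than one of the BI tools named above, and using an invented attrition table), turning an HR metric into a chart for a leadership deck takes only a few lines:

    import matplotlib.pyplot as plt

    # Invented example: annual attrition rate by department.
    departments = ["Sales", "Engineering", "Support", "Finance"]
    attrition   = [0.21, 0.09, 0.17, 0.06]

    plt.bar(departments, attrition, color="steelblue")
    plt.ylabel("Annual attrition rate")
    plt.title("Where are we losing people?")
    plt.tight_layout()
    plt.savefig("attrition_by_department.png")   # attach to the board deck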


Implement leadership planning models. Beyond using data to highlight current talent trends and gaps, HR should use it to fuel predictions about future talent needs, especially for leadership positions. HR professionals should employ leadership planning models to map a business’s long-term strategic plan to the leaders it will need to implement that plan. Leadership planning models enable HR to create data-driven projections of the number of leaders needed, the skills they will require, and where they will be located. On an ongoing basis, these models can compare the leadership talent the company has against what it needs. HR can then course-correct when necessary by revising or shifting its priorities among hiring, development, and performance-management systems.
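
A deliberately simplified sketch of such a projection, with made-up planning assumptions: estimate the leaders needed from planned headcount and span of control, shrink the current bench by expected attrition, and report the gap per year.

    # Made-up planning assumptions for illustration only.
    headcount_plan = {2019: 1000, 2020: 1200, 2021: 1450}   # planned employees
    span_of_control = 8          # employees per leader
    current_leaders = 120
    annual_leader_attrition = 0.10

    bench = current_leaders
    for year, headcount in sorted(headcount_plan.items()):
        bench = bench * (1 - annual_leader_attrition)    # expected departures
        needed = headcount / span_of_control             # leaders required
        gap = needed - bench
        print(f"{year}: need {needed:.0f}, projected bench {bench:.0f}, gap {gap:.0f}")

A real model would add development pipelines, location, and skill requirements, but the year-by-year gap view is the core output.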


Taking these four initial steps can yield big dividends. Our research shows that companies excelling in using data and analytics to drive their talent strategy are more than six times more likely to have a strong leadership bench. Moreover, those with the strongest digital leadership capabilities outperform their peers by 50% in a financial composite of earnings and revenue growth.


And when HR executives use their digital savviness to advance their companies, they often move up themselves as a result. We found that HR professionals who leverage advanced analytics are over six times more likely to have opportunities to climb the corporate ladder.


Today, unemployment stands at its lowest level in nearly five decades. As the economy continues growing and Baby Boomers retire in droves, the labor market will tighten further and increase the pressure on HR. These demographic and economic dynamics will push HR to be better, faster, and smarter about how it finds and develops the talent its organization will need to execute the business strategy. Investing in developing HR’s data and technology skills should be a top priority if companies want to win the war for talent.




Published on October 25, 2018 10:00

How a Pharma Company Applied Machine Learning to Patient Data

Liyao Xie/Getty Images

The growing availability of real-world data has generated tremendous excitement in health care. By some estimates, health data volumes are increasing by 48% annually, and the last decade has seen a boom in the collection and aggregation of this information. Among these data, electronic health records (EHRs) offer one of the biggest opportunities to produce novel insights and disrupt the current understanding of patient care.


But analyzing the EHR data requires tools that can process vast amounts of data in short order. Enter artificial intelligence and, more specifically, machine learning, which is already disrupting fields such as drug discovery and medical imaging but only just beginning to scratch the surface of the possible in health care.


Let’s look at the case of a pharmaceutical company we worked with. It applied machine learning to EHR and other data to study the characteristics or triggers that presage the need for patients with a type of non-Hodgkin’s lymphoma to transition to a later line of therapy. The company wanted to better understand the clinical progression of the disease and what treatment best suits patients at each stage of it. The company’s story highlights three guiding principles other pharma companies can use to successfully deploy advanced analytics in their own organizations.




Generating meaningful hypotheses (and organizational buy-in) requires engaging the right stakeholders. While the impulse might be to rush straight to the data and begin analysis, a critical preliminary step is to lay out the key business questions that must be answered and generate hypotheses. Building a comprehensive list of addressable hypotheses will allow the analytics team to determine which types of data will be necessary to test and prove (or disprove) the hypotheses.


It’s important to pull in the perspectives of key stakeholders on functional teams across the business to ensure hypotheses incorporate the right expertise and provide the highest value to the company. This also helps build buy-in and trust in analytics.


In this case, the pharma company brought in teams from its brand, medical, and business intelligence groups to generate hypotheses about the likely predictors of patients moving from one therapy to another and the triggers of those transitions. For example, in hypothesizing what drives fast or slow disease progression, the medical group contributed its clinical understanding of the disease, the brand team offered its detailed understanding of the company’s treatment offerings and how physicians use them, and the business intelligence team presented the analytical methods and datasets it had already used to shape the current understanding of treatment and disease courses.


The best data set might be a combination of data sets. It’s critical to identify a data set that is extensive and rich enough to properly train a machine learning algorithm. This is especially true in oncology, where a large number of variables — including age, gender, diagnosis history, medication and treatment history, laboratory values, and hospital encounters — collected on many patients over a sufficiently long historical stretch are needed for an effective analysis.


The pharma company’s analytics group realized that its internal data didn’t capture the variables likely to predict patient transitions in sufficient depth. The group therefore combined internal and external data, pairing an oncology-specific, integrated, structured EHR data set with claims data on which parts of the analysis were replicated and validated.


All the data were stitched together and fed into an automated-feature-discovery (AFD) machine learning engine that allowed the company to test millions of hypotheses within hours. The engine explored every possible variation of the patient data to see if any variables had a statistically significant correlation with the transition to a later line of therapy. The insights gleaned from subject-matter experts helped ensure that the AFD results were clinically relevant. For example, when results indicated that an elevated liver function marker correlated with disease progression, medical officers confirmed that, although it wasn’t a factor they’d previously considered, it was clinically possible.
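The article doesn’t describe the AFD engine’s internals. As a drastically simplified sketch of the underlying idea — screening many candidate variables for a statistically significant association with the transition outcome, then correcting for multiple testing — something like the following could be run on a structured patient table; all column names and data below are synthetic stand-ins, not the company’s actual dataset:

```python
# Simplified feature-screening sketch (not the actual AFD engine).
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Synthetic stand-in for the combined EHR + claims data (hypothetical columns).
rng = np.random.default_rng(0)
n = 500
patients = pd.DataFrame({
    "liver_function_marker": rng.normal(30, 10, n),
    "age": rng.normal(65, 8, n),
    "baseline_ldh": rng.normal(200, 40, n),
})
# Outcome: 1 = transitioned to a later line of therapy (synthetic signal on the liver marker).
patients["transitioned"] = (patients["liver_function_marker"] + rng.normal(0, 10, n) > 35).astype(int)

outcome = patients["transitioned"]
candidates = [c for c in patients.columns if c != "transitioned"]

pvals = []
for col in candidates:
    moved = patients.loc[outcome == 1, col]
    stayed = patients.loc[outcome == 0, col]
    _, p = stats.mannwhitneyu(moved, stayed, alternative="two-sided")
    pvals.append(p)

# Correct for testing many hypotheses at once (the real engine tests millions).
rejected, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for col, p, keep in zip(candidates, p_adj, rejected):
    print(f"{col}: adjusted p = {p:.3g}{' <- candidate predictor' if keep else ''}")
```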


Feedback loops (many times over) are the key to great results. An iterative test-and-learn process is critical to developing an accurate model. The pharma company’s analytics group tested more than 200 lab values, major chronic comorbidities, and elements of medical history. Machine learning helped identify and isolate the critical variable combinations that predict transitions. Models were validated and refined to avoid noise and reduce the number of variables.


After weeks of iteratively learning and validating, a model was successfully developed to predict progression from initial diagnosis to later lines of therapy. Specifically, machine learning was used to extract features and triggers from the patient’s treatment, lab, and medication history, and the validated features were used to score and rank patients by expected likelihood of transition.
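The article does not name the model class used. Continuing the synthetic example above, a minimal sketch of the scoring-and-ranking step with a generic classifier (gradient boosting here, purely as an illustration) might look like:

```python
# Illustrative scoring-and-ranking step; the model choice is an assumption, not the company's.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

features = patients[candidates]          # validated features from the screening sketch
labels = patients["transitioned"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0, stratify=labels
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
print("Hold-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Rank patients by expected likelihood of transition so the highest-risk are reviewed first.
patients["transition_score"] = model.predict_proba(features)[:, 1]
ranked = patients.sort_values("transition_score", ascending=False)
```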


The models uncovered many critical insights, including:



Abnormalities in select lab results, such as the elevated liver function marker, increased the likelihood of a patient transitioning to the next line of therapy by as much as 140% in some cases.
Patients on maintenance therapy were 20% less likely to transition to the next line of therapy.

***


With the right data, organizational processes, and clinical knowledge, machine learning and artificial intelligence can make a significant difference in pharma and health care today, despite some remaining limitations. It can, for example, be difficult to understand why some complex models come to their conclusions, and labeling the massive datasets required for the most data-hungry models can be painstakingly slow.


However, limitations like these are being addressed, with techniques like LIME (local interpretable model-agnostic explanations) helping to show model reasoning, and efforts underway to use machine learning itself to label datasets. As these limitations lift, the opportunities for pharma and health care will greatly expand. Companies that have already begun leveraging machine learning will have the established base of infrastructure and processes needed to take advantage of these opportunities.
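As a rough illustration of the interpretability point, the open-source lime package can explain an individual tabular prediction. Continuing the hypothetical model from the sketches above (and assuming lime is installed), the idea looks roughly like this:

```python
# Rough sketch: explaining one patient's risk score with LIME (requires `pip install lime`).
# Reuses the hypothetical model, features, and splits from the earlier sketches.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=X_train.to_numpy(),
    feature_names=candidates,
    class_names=["no transition", "transition"],
    mode="classification",
)

# Explain one patient's prediction: which features pushed the score up or down?
one_patient = X_test.iloc[[0]]
explanation = explainer.explain_instance(
    one_patient.to_numpy().ravel(),
    model.predict_proba,
    num_features=3,
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```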




Published on October 25, 2018 09:00

What’s Driving Superstar Companies, Industries, and Cities

Apexphotos/Getty Images

The debate about superstar firms and superstar effects has been intensifying, partly in response to the rapid growth of global US tech companies. However, scratch the surface and the superstar phenomenon may not be quite what it seems. Wider dynamics may be at play.


In our recent research at the McKinsey Global Institute, we examined the superstar phenomenon across firms, as well as sectors and cities. We define superstar to mean a firm, sector, or city that has a substantially greater share of income than peers and is pulling away from those peers over time.  Yes, we found that a superstar dynamic is occurring for firms, cities and, to a lesser extent, sectors. In this article, we will focus mostly on firms, but with some brief commentary on sectors and cities at the end.


We analyzed nearly 6,000 of the world’s largest public and private firms, each with annual revenues above $1 billion. Together they account for two-thirds of global corporate pretax earnings (EBITDA) and revenues. To analyze the superstar dynamics of firms, our metric was economic profit, a measure of a firm’s profit above and beyond opportunity cost. (To calculate it, we take the firm’s return on invested capital, deduct the cost of capital, and multiply the difference by the firm’s total invested capital.) We focus on economic profit rather than revenue size, market share, or productivity growth because those other metrics risk including firms that are simply large and may not create economic value.
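In other words, economic profit here is roughly (return on invested capital minus cost of capital) times invested capital. A toy calculation with made-up numbers:

```python
# Toy illustration of the economic-profit metric; all figures are hypothetical.
invested_capital = 50e9        # $50B total invested capital
return_on_capital = 0.12       # 12% return on invested capital
cost_of_capital = 0.08         # 8% cost of capital

economic_profit = (return_on_capital - cost_of_capital) * invested_capital
print(f"Economic profit: ${economic_profit / 1e9:.1f}B")   # -> $2.0B
```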


The top 10% of the firms we analyzed — the superstars by our metric — create 80% of all the economic value in our sample, meaning they account for 80% of the economic profits created by firms above a billion dollars of revenue. The top 1% accounts for 36% of all the economic value created by public and private corporations worldwide in this size range. The bottom 10% destroy roughly as much economic value as the superstar firms create. The distribution of economic value is also getting more skewed over time, and at both ends. Superstar firms create 1.6 times more economic profit on average today compared to 20 years ago. But this is also mirrored by firms in the bottom 10%, which account for 1.5 times more economic loss today than 20 years ago.


Contrary to popular perception, these superstar firms are not just Silicon Valley tech giants. They come from all regions and sectors and include global banks and manufacturing companies, long-standing Western consumer brands, and fast-growing U.S. and Chinese tech firms. In fact, both the sectoral and the geographic diversity of superstar firms are greater today than they were 20 years ago. The superstars tend to be more involved in global flows of trade and finance and more digitally mature, and they dominate the lists of the most valued companies, the most valued brands, the most desirable places to work, and the most innovative companies.


But uneasy should lie the head that wears the crown: Nearly half of superstar firms are displaced from the superstar top decile in every business cycle. Among the top 1% today, two-thirds of firms are new entrants that were not in the top 1% a decade ago. The high degree of churn among superstar firms cuts both ways: when superstar firms fall, 40% of them fall to the bottom decile with large economic losses; at the same time many firms have also risen from the bottom decile, in some cases all the way to the top. The rate of churn at the top has remained the same over the last 20 years.


A few key characteristics distinguish superstar firms from the rest, and other firms could perhaps learn from them. Superstars spend two to three times more on intangible capital such as R&D, have higher shares of foreign revenue, and rely more on acquisitions and inorganic growth than median firms do. The greater economic profit and loss at both ends of the distribution is driven by greater scale and invested capital, not by increasing returns to capital. Some bottom-decile firms share many of these characteristics, such as size and even investment levels, suggesting that size alone is not sufficient; what sets superstar firms apart is their ability to select bold investments and execute them well.


Superstar dynamics go beyond firms and can be observed among cities too, and to a lesser extent among sectors. We find that a handful of sectors account for 70% of value added and surplus across the G-20 group of major economies. These “superstar” sectors include financial services such as banking, insurance, and asset management, as well as professional services, internet and software, real estate, and pharmaceuticals and medical products. The disproportionate gains to these sectors stand in contrast to the previous 15-20 years, when gains in surplus and value added were more widely distributed across sectors of activity. Today’s superstar sectors tend to have higher R&D intensity, higher skill intensity, and lower capital and labor intensity than other sectors. The higher returns in superstar sectors accrue more to corporate surplus than to labor and flow to intangible capital such as software, patents, and brands.


For cities, we analyze nearly 3,000 of the world’s largest cities by population, which together account for 67% of global GDP. Using our metric of GDP and personal income per capita, we identify 50 superstar cities. They include Boston, Frankfurt, London, Manila, Mexico City, Mumbai, New York, Sao Paulo, Sydney, Tianjin, and Wuhan. These 50 cities account for 8% of global population, 21% of world GDP, 37% of urban high-income households, and 45% of headquarters of firms with more than $1 billion in annual revenue. The average GDP per capita in these cities is 45% higher than that of peers in the same region and income group, and this gap has grown over the past decade. The churn rate of superstar cities is half that of superstar firms, and when superstar cities do fall out of the group, they tend to be advanced-economy cities replaced by developing-economy cities.


The link between superstar firms, sectors, and cities is complex. Some superstar firms benefit from being in “superstar” sectors of activity, particularly those in which value-added gains go to gross operating surplus (an economic measure of the income earned by capital). Yet many superstar firms endure even as their sectors see declining shares of value added and surplus. “Superstar” sectors’ gains tend to be geographically concentrated, mostly in large cities, many of which are superstar cities. For instance, gains to internet, media, and software activities are captured by just 10% of U.S. counties, which account for 90% of GDP in that sector. These places see income grow faster than population, creating strong demand for high-skill, high-wage workers against limited supply: an escalating war for talent. Superstar firms and sectors also create strong wealth effects for investors, asset managers, and homeowners, and those wealth effects are likewise concentrated in superstar cities.


While more research needs to be done to understand the full implications of superstars in the global economy, we believe enough evidence exists to give corporate decision makers some food for thought. Superstar status remains contestable: it is easy to fall from the top, and possible to rise, even from the bottom all the way to the top. Size matters, but it is not enough; value creation matters more than size for its own sake. Productivity can help, but it is not enough to achieve superstardom. Being in the right sector and geography can help, but this too can be overcome. Acquisitions, bold investment in intangible assets, and attracting talent can ultimately make the difference.


This post has been updated to clarify that the statistics regarding top firms’ share of profits are as a percentage of profits by firms with a billion dollars or more in revenue.




Published on October 25, 2018 08:41

Research: How Cloud Computing Changed Venture Capital

Atomic Imagery/Getty Images

How has technology changed which deals venture capitalists (VCs) fund and how they fund them?


Venture capitalists essentially invest in startup ‘experiments’, and subsequently provide more funding to the experiments that work, so that they can run more experiments. This leads to many failures (roughly 55% of startups) and a few successes (6% return > 5x the total amount invested).  So an innovation that changes the cost of experiments changes the landscape for venture funding.


In recent research, we examined a particular innovation that had a broad impact on some types of startups but little to no effect on others. The introduction of cloud computing services in the mid-2000s allowed Internet and web-based startups to avoid large initial capital expenditures and instead “rent” hardware space and other services in small increments, scaling up as demand grew. Cloud computing made the early ‘experiments’ for these firms significantly cheaper.


We combined data from VentureSource, VentureEconomics, Correlation Ventures, CapitalIQ, LinkedIn, Crunchbase, firm websites, and LexisNexis to examine the funding of startups in the period around the introduction of Amazon Web Services (AWS) in both affected and unaffected sectors. (Thank you to Correlation Ventures for helping us with this study. Both Rhodes-Kropf and Ewens are advisors to and have a financial interest in Correlation Ventures.)


After the launch of AWS, startups founded in sectors that benefited from cloud computing raised much less capital in their first round of VC financing. On average, initial funding fell 20% relative to unaffected sectors. Interestingly, this fall in costs changed only the initial capital required by a startup: total capital raised by firms in affected sectors that survived three or more years was unchanged. Thus, cloud services significantly reduced the initial costs of trying an idea, but not the costs of scaling a successful business.
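For readers who want the intuition behind “fell 20% relative to unaffected sectors,” the comparison is in the spirit of a difference-in-differences estimate. The sketch below is not the paper’s actual specification; it uses synthetic data with a built-in effect purely to show the shape of the calculation:

```python
# Illustrative difference-in-differences sketch; all data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
affected = rng.integers(0, 2, n)     # 1 = sector benefits from cloud computing
post_aws = rng.integers(0, 2, n)     # 1 = first round raised after AWS launch
# Synthetic log round sizes with a built-in ~20% relative drop for affected, post-AWS firms.
log_round_size = 15 + 0.1 * affected - 0.2 * affected * post_aws + rng.normal(0, 0.5, n)

rounds = pd.DataFrame(
    {"log_round_size": log_round_size, "affected": affected, "post_aws": post_aws}
)
model = smf.ols("log_round_size ~ affected * post_aws", data=rounds).fit()
print(model.params["affected:post_aws"])   # ~ -0.2, i.e. roughly a 20% relative fall
```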


This fall in the cost of starting businesses dramatically impacted the way in which VCs manage their portfolios.  Some VCs shifted toward an approach colloquially referred to as “spray and pray” — VCs ‘sprayed’ money in more directions, and ‘prayed’ rather than governed. In sectors impacted by the technological shock of cloud computing, VCs tended to respond by providing less funding and limited governance to an increased number of startups. These VCs were also more likely to abandon their investments after the first round of funding. In fact, the number of initial investments made per year per VC in affected sectors nearly doubled from the pre-cloud to the post-cloud period relative to unaffected sectors, without a commensurate increase in follow-on investments. In addition, VCs making initial investments in affected sectors were less likely to take a board seat following the technological shock.


These effects arose both because some firms changed the way they invest and because of entry by new VC firms.  Even the most active firms investing both before and after the shift tended to invest smaller amounts in a larger number of deals in sectors affected by cloud computing, relative to unaffected sectors.


This falling cost of experimentation allowed a set of entrepreneurs who would not have been financed in the past to receive early-stage financing, leading to greater democratization of entry into high-tech entrepreneurship. In affected sectors, VCs increased their investments in startups run by younger, less experienced founding teams. These firms were subsequently more likely to fail. But those that earned a second round of funding – an indication of initial success – saw nearly 20% greater increases in value across rounds than equivalent startups in unaffected sectors, and their ultimate exits generated greater returns. In other words, these companies were more likely to fail, on average, but the ones that succeeded did so more dramatically.


An interesting implication of our results is that VCs seem to have provided less guidance during the earliest part of a startup’s lifecycle, when that guidance is arguably most needed. Younger, less experienced founders likely need the most mentorship and governance, yet VCs’ preference for making a larger number of smaller investments means that early-stage startups with younger founders get financed with only limited mentorship and governance. This finding helps explain the rise of new financial intermediaries such as accelerators, which have emerged in the last decade to provide scalable, lower-cost forms of mentorship to inexperienced founding teams. Accelerators are a natural response to the value-add gap created as VCs’ early-round investment behavior evolved toward a more passive “spray and pray” approach.


Our work shows how a technology shock alters the funding landscape, shifting which future ideas get funded and how. A new innovation may open up a whole new range of opportunities but also necessitate new ways of funding them. In the case of cloud services, some venture investors moved toward a “spray and pray” investment style, placing investments in more long-shot bets. Thus, our findings suggest that an initial technological innovation can increase the pace of future innovation by allowing even greater experimentation.


This continues today. Innovation is decreasing the cost of experiments in areas as diverse as biotech, hardware, and agriculture, creating the need for new sources of small amounts of capital, such as angel investors. At the same time, the falling cost of experimentation explains the anecdotal view that some VCs are waiting longer to invest and expecting startups to have accomplished more. In a market with many more small experiments, some investors will choose to wait and see the outcomes before investing more. Innovation that affects the cost of experiments thus creates a financing market with many smaller investors in less well-funded startups and larger investors who naturally invest after startups achieve commercial traction.




Published on October 25, 2018 08:00
