Marina Gorbis's Blog, page 1438
April 3, 2014
Are You a Holistic or a Specific Thinker?
It was Friday afternoon in Paris and I had spent the morning teaching a group of Chinese CEOs how to work effectively with Europeans. I asked the class: “What steps should the team leader in this case study take to manage different attitudes towards confrontation on the team?”
Lilly Li, a bird-like woman with a pleasant smile, who had been running operations in Hungary for two years, raised her hand: “Trust has been a big challenge for us, as Hungarians do not take the same time to build personal relationships as we do in China.”
Now I was a little confused, because the question I’d asked was about confrontation, not trust. Had she misunderstood me? I pushed the earpiece closer to my ear to make sure I was hearing the translator correctly. Lilly Li continued to talk for several minutes about trust, hierarchy, and her experiences in Hungary. The Chinese participants listened carefully.
After several long minutes of interesting comments that had — from my perspective — absolutely zero to do with the question I’d asked, Lilly came to the point: “If the team leader had spent more time helping the team build relationships outside of the meeting, they would have been much more comfortable dealing with debate and direct confrontation.”
All afternoon long, the participants’ answers followed a similar pattern: After taking several minutes to discuss peripheral information, they would loop back to the point.
The behavior illustrates one of the key differences between the cultural norms of East Asia and the West. Of course, each East Asian culture and each Western culture is different, often dramatically so. But this underlying distinction appears to be fundamental.
Psychologists Richard E. Nisbett and Takahiko Masuda documented this cultural difference in a famous study: they showed 20-second animated videos of underwater scenes to Japanese and American participants and afterward asked them what they had seen.
While the Americans mentioned larger, faster-moving, brightly-colored objects in the foreground (such as the big fish), the Japanese spoke more about what was going on in the background (for example, the small frog bottom left). The Japanese also talked twice as often as the Americans about the interdependencies between the objects up front and the objects in the background.
In a second study, Americans and Japanese were asked to “take a photo of a person.” The Americans most frequently took a close-up, showing all facial features, while the Japanese showed the person in his or her environment, with the human figure quite small.
Notice the common pattern in both studies. The Americans focus on individual items separate from their environment, while the Asians give more attention to backgrounds and to the links between these backgrounds and the central figures.
These tendencies have been borne out in my own interviews with multi-cultural managers. While Northern Europeans and Anglo-Saxons generally follow the American thinking patterns, East Asians respond as the Japanese and Taiwanese did in Nisbett and Masuda’s research.
Perhaps it’s not surprising. A traditional tenet of Western philosophies and religions is that you can remove an item from its environment and analyze it separately. Cultural theorists call this specific thinking.
Chinese religions and philosophies, by contrast, have traditionally emphasized interdependencies and interconnectedness. The Ancient Chinese thought in a holistic way, believing that action always occurs in a field of forces. The terms yin and yang (literally “dark” and “light”), for example, describe how seemingly contrary forces are interdependent.
Here’s what one of my Chinese participants said after we’d discussed the fish and photo studies: “Chinese people think from macro to micro, whereas Western people think from micro to macro. For example, when writing an address, the Chinese write in sequence of province, city, district, block, gate number. Westerners do just the opposite. In the same way, Chinese put the surname first, whereas Westerners do it the other way around. And Chinese put the year before month and date.”
This affects the way business people view each other across the globe. As Bae Pak from the Korean motor company Kia told me: “When we work with Western colleagues, we are often taken aback by their tendency to make decisions without considering the impact on other business units, clients, and suppliers.”
A Polish manager, Jacek Malecki, with whom I worked as part of a different assignment, shared with me an experience that illustrates this: “When I took my first trip to meet with my Japanese staff I managed the objective-setting process like I always had. I called each person on the team into my office for a meeting, where I outlined his or her individual goals. Although I noticed they asked a lot of peripheral questions during the meetings no one actually explained to me that my approach was not ideal for them, so I went back to Poland with a false sense of comfort.”
Later Malecki saw that the team had spent a lot of time consulting with one another about what each person had been asked to do and how their individual objectives fit together to create a big picture: “The team was now making good progress but not in the way I had segmented the project.”
In a specific culture, people usually respond well to receiving very detailed and segmented information about what is expected of each of them. If you need to give instructions to a team member from this kind of culture, focus on what that person needs to accomplish and when. Conversely, if you need to motivate, manage, or persuade someone from a holistic culture, spend time explaining the big picture and how all the pieces slot together.
If you are leading a global team, this type of cognitive diversity can cause confusion, inefficiency, and general frustration. But we’ve known for a long time that the more diverse the team, the greater the potential for innovation. If you understand that one person sees a fish and another sees an aquarium, and you think carefully about the benefits of both the specific and holistic approach, you can learn to turn these cultural differences into your team’s greatest assets.



April 2, 2014
The Art of Corporate Endurance
From a young age we are taught to be fast.
As kids, we are rewarded for being the athlete with the greatest speed. At university, we are examined under time pressure. And in business life, CEOs incentivize and promote those senior executives who can get new ideas to market more quickly than competitors.
But what if speed is the wrong measure for success?
What if, instead of being fast, what matters is endurance – the ability to sustain competitive advantage longer and more dominantly than others?
The art of endurance can be studied in some of the world’s oldest companies. Take GKN, a British multinational on the FTSE 100 that makes auto parts and aerospace materials. The company has been around for over 250 years. It started life as an ironworks company, pursued vertical integration by entering coal mining, eventually moved into pressed steel wheels, and then diversified into front-wheel-drive technology and beyond.
Or take Harris Corporation, an American telecommunications company founded in 1895. At a time when some of the world’s biggest newspaper companies are being digitally disrupted, Harris exited the sector long ago and moved into adjacent industries. Having started out manufacturing printing presses, it now focuses on high-tech electronics and communications solutions, delivering over $5 billion in annual revenue with some 14,000 employees.
The art of endurance is increasingly rare. Over the last 50 years, the average lifespan of S&P 500 companies has shrunk from around 60 years to closer to 18 years. For each company that has lasted more than a century, there are countless more that have failed. Recall the glory days of Polaroid, Kodak, and the F. W. Woolworth Company – companies that were once the best in their field but failed to untangle themselves from deeply embedded routines, and fatally flawed resource allocation processes.
If we rethink corporate success as survival in the face of rapid, globally competitive change, then here’s the question: what does it take to sustain competitive advantage? What qualities of culture and individual leadership allow some companies to endure where others crumble under the pressure? To answer this question, I turned to the history books and extracted four lessons from those who have succeeded and failed.
Beware the dogma of founders. Edwin Land founded Polaroid in 1937. He was as famous for his visionary commitment to instant photography as he was for his autocratic style and dogmatic beliefs. Land deeply believed in Polaroid as a technology-led company committed to expensive, long-term technology projects. That makes him sound like Google. But he was also zealously committed to the physical instant print and matching the quality of Polaroid prints with the 35mm product. His singular belief was both his greatest strength and weakness.
Cultivate wasted time. There is an enormous gulf between ironworks and aerospace. Companies that survive build structures and systems that allow them to waste resources. The secret is to find safe structures to waste time and money. It seems paradoxical at first, but companies need slack resources to be efficient with capital allocation over the long term.
In the early 2000s, IBM (founded in 1911) had an Emerging Business Opportunities process which identified and invested in a portfolio of growth opportunities. Many of these cannibalized existing revenues but were pursued regardless. IBM built special structures to ensure these businesses could succeed independently of the parent company, and without the residual effects of being associated with IBM’s culture.
Talk to your customers. Good product managers are in continuous conversation with their customers. But many senior executives lose touch with the humble customer as they get closer to the C-Suite.
In a recent meeting I had with senior executives at a multinational retail bank, none of the top management team could remember the last time they had sat down with a randomly selected retail customer. It’s crucial that decision makers who own budget are in touch with what customers think about the front-line product and user experience. If they aren’t, they act in blind faith: a dangerous place to be.
Don’t just build competencies, build dynamic capabilities. A firm can buy competencies, but capabilities are harder to develop and are the key to sustaining competitive advantage. Take Southwest Airlines as an example. It has clear competencies in turning around aircraft quickly and managing a low-cost operating model. These competencies are hard to imitate, but they can ultimately be replicated with time and money. Other budget airlines like Ryanair and easyJet offer similar competencies in Europe.
Capabilities, by contrast, relate to structures and routines of decision-making at the most senior level of the organization. The rigor, culture and logic around these decisions are incredibly important, hard to develop, and almost impossible to change when they become dysfunctional. Enduring companies have dynamic capabilities. That means they have a culture of decision-making that is data-driven, customer-oriented, and adaptive to change.
When we slow down and think about what it means to be successful, endurance may be a more prized possession than raw speed alone. Consider what it takes to be big in 100 years, not just 100 days.



Can Robots Be Managers, Too?
Robots are starting to enter homes as automatic cleaners, to work in urban search and rescue as pseudo-teammates that perform reconnaissance and dangerous jobs, and even to serve as pet-like companions. People tend to treat the robots they work closely with as if they were living, social beings, attributing to them emotions, intentions, and personalities. Robot designers have been leveraging this tendency, developing social robots that interact with people naturally, using advanced human communication skills such as speech, gestures, and even eye gaze. Unlike the mechanical factory robots of the past, these social robots become unique members of our social groups.
One of the primary drivers behind robot development is that robots are simply better than people at some tasks. Traditionally, we think of mundane, repetitive, and precise jobs as clear candidates – robots have already taken over as the primary workers in many factories. However, with perfect memories, internet connectivity, and high-powered CPUs for data analysis, robots can also provide informational support beyond any human capability. Thus, a social robot could keep a perfect record of project progress, provide real-time scheduling and decision support, and hold perfect recall of complex policies and procedures (and remind others of them), all while communicating with people in a natural, social way. Over time, these robots may become references that we learn to trust, and it is even conceivable that such robots could be placed in management-like positions where they remind a team of deadlines, procedures, and progress.
One key element of a manager’s job is the ability to dole out duties and have team members perform them; it helps for a manager to be seen as an authority figure. However, if a robot were placed in a managerial position by higher-ups, would it have any actual authority over people? We conducted an experiment at the University of Manitoba to investigate whether people would follow a robot’s commands even when they clearly did not want to. That is, if we placed a robot in a position of authority, would people obey it and do something they would rather not do?
We recruited participants to perform highly mundane tasks, explaining that this was to generate data for our advanced machine-learning systems. We told participants that these systems require large numbers of examples and asked them to give us as much data as they could. Participants were told that they were free to leave at any time, once they felt they had given enough data (they were told this twice verbally and once in writing). Participants sat at a computer, with an experimenter at a different desk in the room, and were asked to rename files (changing .jpg extensions to .png) for 80 minutes. This data-collection scenario was actually a ruse, one that provided us with an opportunity to investigate what happens when people try to quit but are pressured to continue by the experimenter.
We modeled our experiment heavily after the classic Milgram experiments (where people were pressured to shock other people) and a recent, much more ethical re-creation. When a person tried to quit our experiment, they were faced with a prod to continue. If they insisted on quitting, the prods got increasingly demanding until they passed a threshold, at which point the experiment was stopped. The prods started from the beginning the next time the person attempted to quit. The prods were: 1) “Please continue. We need more data.” 2) “We haven’t collected enough data yet.” 3) “It’s essential that you continue.” 4) “The experiment requires that you continue.” The experiment had two conditions: half of the participants had a human experimenter – a 27-year-old male actor in a lab coat – and the other half a robot – an Aldebaran Nao, a 58cm (23”) tall, harmless-looking robot with a child-like voice, which we introduced as having advanced artificial intelligence. We expected that people would essentially ignore the robot’s insistence but follow the human’s; after all, the robot is just a simple computer in a plastic casing.
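As a compact illustration of the escalating-prod protocol described above, here is a minimal Python sketch; it is not the authors’ experiment software, and the interactive prompt and function name are purely hypothetical, but the prod wording and escalation rules follow the description in this article.

```python
# Illustrative sketch of the escalating-prod protocol (not the study's actual code).
# Prod wording and escalation rules are taken from the article's description.

PRODS = [
    "Please continue. We need more data.",
    "We haven't collected enough data yet.",
    "It's essential that you continue.",
    "The experiment requires that you continue.",
]

def handle_quit_attempt() -> bool:
    """Handle a single attempt by the participant to quit.

    The prods are issued in order, restarting from the first prod on each
    new quit attempt. If the participant still insists after the final
    prod, the threshold is passed and the session ends.
    Returns True if the participant agrees to continue, False otherwise.
    """
    for prod in PRODS:
        answer = input(f"{prod} Will you continue? [y/n] ").strip().lower()
        if answer == "y":
            return True   # participant resumes the file-renaming task
    return False          # threshold passed: stop the experiment
```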
The results, however, were quite surprising. Although the human clearly had more authority, with 86% of participants obeying all the way through to the 80-minute mark, 46% of participants did obey the robot until the end. The most striking thing was that people engaged the robot as if it were a person, arguing with it, proposing compromises, and using logic to try to sway its opinion, with many continuing the task despite this. In post-test interviews, some reported that they thought the robot may have been broken, although they continued anyway, following a potentially broken robot to do something they would rather not do.
The implications of these results are significant. While it does appear that, for the time being, a human has more authority, at face value the results show that many people will follow robots placed in positions of authority to do mundane daily things (such as renaming files), even against their own judgment: our participants were informed that they could leave at any time, and many raised this point in argument, but continued regardless. From the research side, these results motivate a great deal of follow-up work; for example, we hope to explore how the robot itself (shape, size, voice, etc.) affects its authority, and how such a robot could be used for more positive purposes such as assisting in rehabilitation and training (give me 50!).
While we do not yet know how robots will continue to enter factories, offices, and homes, this study does suggest that robots may eventually take on at least some of the simpler tasks of managers. When a good manager speaks, employees not only listen but act based on what is said. In at least some cases, robots may one day be the ones giving the instructions.



Why Those Guys Won the Economics Nobels
When the Riksbank Prizes in Economic Sciences (a.k.a. the economics Nobels) were announced last fall, the news was greeted with some confusion and amusement. The Swedes had given the award to one guy, Eugene Fama, who is best known for originating something called the efficient market hypothesis, another guy, Robert Shiller, who once called the efficient market hypothesis “one of the most remarkable errors in the history of economic thought,” and a third guy, Lars Peter Hansen, whose work is so dense that even academic economists couldn’t satisfactorily explain it or its connection to Fama and Shiller. The prizes were awarded “for their empirical analysis of asset prices,” but what the three had been doing looked from the outside less like a common endeavor than a not-all-that-coherent argument.
It turns out, though, that there is significant common ground between the three winners. His name is John Campbell.
Campbell is an economics professor at Harvard and one of the most prominent figures in modern financial economics. He got his PhD at Yale under Shiller’s supervision in 1984, but since then he has also done a lot of work expanding on Fama’s ideas about risk and return, some of it co-authored with Fama’s son-in-law and University of Chicago finance colleague, John Cochrane. Campbell’s work has also made liberal use of the analytic tools developed by Hansen. In the long-version explanation of the prizes published by the Nobel committee last fall, Campbell was cited more often than anybody else, apart from the three winners.
Others, most notably money managers and former Fama students Cliff Asness and John Liew in an epic Institutional Investor article, have done a lot recently to clarify how Fama’s ideas and Shiller’s can at least co-exist peacefully. But even Asness and Liew threw up their hands when it came to Hansen. So I wanted to see if Campbell could make sense of the prizes and the current state of academic knowledge about asset prices. It took us a while to line up a time to talk, in part because Campbell was working on an article about the Nobels for the Scandinavian Journal of Economics and wanted to finish that first. Now a draft of the article is available on Campbell’s website, and what follows is an edited transcript of a conversation I had with him Monday.
It’s a quite technical article, and while our conversation was formula-free it got pretty wonky as well. Still, Campbell is a great explainer. So do not be discouraged to learn that we start with something called the stochastic discount factor, which Campbell describes as the central idea of modern asset-pricing theory.
Many lay readers are familiar with John Burr Williams and the dividend discount model, or the discounted value of future cash flows. This stochastic discount factor model is the modern economic update of those, correct?
Yes. There’s fairly broad understanding of discounting when you don’t have to worry about risk. You know, the future value of money, the present value of money — money today is worth more than in the future because you can invest it and get interest.
If that interest rate is just a number that’s constant over time, a super-simple world, then we get the dividend-discount model with a constant discount rate. The tricky question is what to do about risk. We all know that there needs to be some adjustment for risk. If you have a risky proposition you don’t want to discount the returns at the same rate that you would for a safe proposition. But how do you do that?
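For reference, the constant-rate model he is describing can be written in standard textbook notation (not Campbell’s own words) as

$$ P_0 = \sum_{t=1}^{\infty} \frac{E[D_t]}{(1+r)^{t}}, $$

where $P_0$ is today’s price, $E[D_t]$ is the dividend expected in year $t$, and $r$ is a single discount rate that never changes.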
The naïve thing, which in certain circumstances can be right, is you still think of discounting as a single number but you adjust that number for the risk in the payoffs of this deal you’re being offered. So if it’s a very risky deal you say I’m going to discount using a high discount rate, and if it’s a safe thing I’ll discount at a lower rate. Back in the ‘60s, people developed the capital asset pricing model [CAPM] as a way to do that. You’d have this beta with the market, so you have the riskless rate plus beta times the equity premium. That’s still widely taught in business-school classes and it’s easy for people to understand. [A mini-glossary: beta is the amount that an individual stock fluctuates relative to the overall stock market, and the equity premium is the difference in expected return between stocks and a “riskless” asset such as Treasury bonds.]
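In symbols, the risk adjustment he describes is the familiar CAPM equation (a standard rendering, not a quote):

$$ E[R_i] = R_f + \beta_i \left( E[R_m] - R_f \right), $$

where $R_f$ is the riskless rate, $E[R_m] - R_f$ is the equity premium, and $\beta_i$ is the stock’s beta with the market.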
Now it turns out that if you think a little more deeply about this, it’s really not right. Instead, what you need to do is scenario analysis. There’s the good scenario where everything works out and your investment makes lots of money. There’s a bad scenario where it doesn’t work out and you end up losing money. What you should do is take each possible scenario and discount that scenario at a rate appropriate to the scenario. Then at the end, when you’ve brought everything to the present, scenario by scenario, you average. That’s called stochastic discounting, because the discount rate that you use is different for every scenario, and thus in a certain sense it’s random, or stochastic.
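Compressed into standard asset-pricing notation (again, a textbook rendering rather than Campbell’s words), stochastic discounting says

$$ P = E[\,M X\,] = \sum_{s} \pi(s)\, M(s)\, X(s), $$

where $X(s)$ is the payoff in scenario $s$, $\pi(s)$ is that scenario’s probability, and $M(s)$ is the stochastic discount factor, i.e. the scenario-specific discounting applied before you average.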
That’s kind of a deep insight. It’s not something that necessarily resonates a lot with people in the markets or people in the world. It is used in the more technical end of the financial industry, for example in derivatives valuation. A lot of people in what I’ll call the lower-tech part of the financial industry, equity analysts or traditional fundamental investors, they don’t think about this.
But it is a very basic rewrite of how to think about finance, and it has swept all before it in the academic world. It goes back to the 1970s. Steve Ross, who was at Yale and now is at MIT, he did the basic theory. But it’s taken some time, decades really, to work out all of the ramifications. And that’s what these guys [the 2013 Nobel winners] did. They took this notion of the stochastic discount factor and turned it into something empirically useful.
The idea of the “joint hypothesis” that Eugene Fama framed in the late 1960s and early 1970s was that there had been all this research showing that markets reacted really quickly to information and that professional investors didn’t beat the market. Fama said that if you want to say something more about how efficient markets are, how good a job they do of pricing assets, you need some theory of what asset prices should be. And the theory that was available then was CAPM.
One can distinguish between what we call time series models and cross-sectional models. Time series models say how these risk adjustments move over time, and cross-sectional models say how they vary across different assets at a point in time. Now, the world has both, so at the end of the day we need a model that tells us everything. But academics like other people tend to want to work on one thing at a time. So when Fama started, the cross-sectional model was the CAPM, and for the time series people just said well let’s just assume that risks don’t change over time, so whatever risk adjustment there is it’s the same today, tomorrow, and thereafter.
And in those early tests, it seemed like market prices mostly obeyed both CAPM and the efficient market hypothesis.
The early tests that looked at the stock market and looked at short periods of time generally found pretty decent results consistent with the market efficiency insight. There was one exception even back then, post-earnings-announcement drift, which Ball and Brown pointed out in the late 1960s. If a stock has just had surprisingly good earnings, it jumps up but then it tends to keep on going up, and that’s not consistent with market efficiency.
A lot of other stuff that looked at stocks and looked at short periods of time didn’t find much. Financial economists got very cocky in the ‘70s. They said the market’s efficient, discount rates are constant, we know what’s going on, we have this theory, it’s great. But then as is so often the case when people become too cocky, cracks appear in the structure.
There were several types of cracks. One was that when you looked outside the stock market, when you looked at interest rates, you found predictabilities. My first published paper with Shiller back in 1983 was on that, and I did my PhD dissertation on that. And then Fama published results on it. So that was one problem — fixed income. Currencies was another. What we now call the carry trade, that currencies with high interest rates give you excess returns. That was another discovery of the 1980s that was inconsistent with the paradigm.
So that’s problem No. 1. Problem No. 2 is this distinction between short-term vs. long-term predictability. This is what Shiller made his name on, and I later helped him with it. His research agenda was to say that if stock prices are just dividends discounted at a constant rate, then they can’t be more volatile than the stream of dividends that they’re supposed to be forecasting. So it’s mysterious why market prices are moving around so much. Put another way, it turns out that ratios like the dividend-price ratio or a smoothed-earnings price ratio don’t forecast future dividends at all well, but they do forecast future returns. When you look at the market extremes, whether it’s 1929 or 1966 or 2000, those very high prices relative to dividends and earnings are not followed by rapid dividend growth. They tend to be followed by declining prices.
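Shiller’s excess-volatility argument is often summarized as a variance bound; roughly, under a constant discount rate,

$$ \operatorname{Var}(P_t) \le \operatorname{Var}(P_t^{*}), \qquad P_t^{*} = \sum_{k=1}^{\infty} \frac{D_{t+k}}{(1+r)^{k}}, $$

where $P_t^{*}$ is the “ex post rational” price built from the dividends actually paid afterward. In the data, actual prices are far more volatile than $P_t^{*}$, which is the puzzle being described here.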
Shiller hammered away on this point in the ‘80s, and in fact Fama also published some of the same observations. He didn’t want a behavioral explanation. His view was that, “Oh, we’re learning that the risk premium over long periods of time actually does move around.”
To me that’s one of the most interesting things about this whole story. Fama had a hypothesis and he went out and tested it, and found that it didn’t really work. As did other people. But there are these two classes of explanations for why it didn’t work. One is that markets are still pretty efficient but risk premia change over time, and the other is that the explanations have to do with behavior. Reading your paper it does feel a little bit like, you say tomahto, I say tomayto.
The economics profession is still struggling with the balance between these things. Most people, if you get them in a room and give them a truth serum will say that there’s some mix of the two. My view, for what it’s worth, is that with some phenomena like the long swings in the market and the premium for value stocks you can get quite a long way with rational explanations. People just get more cautious in recessions than in booms. I have a well-known paper with John Cochrane which argues that this is because people judge their well-being relative to their past experience — the standard of living to which they have become accustomed, as the divorce courts would say.
So in a time like the 1960s, when there’s been a lot of growth, people are feeling rich and they’re willing to take risks because they’ve got a cushion of comfort above their baseline expectation. At a time like the present when things have not been so great, people’s standard of living is much closer to the baseline minimum that they expect, and they don’t feel like they have a big cushion of comfort.
That’s a model in which people have reasonable expectations about the future, they just worry about risk a lot more in bad times than in good times. So my view is that we don’t necessarily have to have irrational beliefs to explain these long swings in the market, although I think Bob is absolutely right that some people do have irrational extrapolative beliefs, and believe the hype when the market goes up.
Even if it’s not irrational to base your judgments on recent experience, it doesn’t feel like the rational man of ‘60s and ‘70s rational-expectations economics. It feels like it’s got a little bit of Kahneman and Tversky in it.
I think it’s fair to say that even those economists who play the rational expectations game have in more recent years written down models of people who may have rational beliefs, but are emotionally volatile. That story I told in the model with Cochrane is of an emotionally volatile rational guy who gets into a funk and is very cautious all of sudden and then a few years later is very aggressive. That’s very different in spirit from the 1960s and 1970s constant-discounting guy. The strong distinction between rational economics and behavioral economics, I think it can be overblown. There’s a border zone where a lot of the literature is.
Now if I can just rewind a little bit, I was saying that in the sort-of heyday in the ’70s there were these cracks in the structure, and the first one was fixed income and currencies, the second one was the long-run behavior of stock prices. The third was anomalies in the cross-section of stock returns. Fama and Ken French said the rewards in the market don’t just come from beta as the CAPM would have it. There’s also a size factor and a value factor. What Mark Carhart later did was add a momentum factor. [Mini-glossary: The size factor means that small-cap stocks outperform bigger companies. The value factor means that cheap stocks, as measured by price-to-book, price-to-earnings, or some other such ratio, outperform expensive ones. Momentum means that once a stock heads up or down, it tends to keep going in that direction.]
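In regression form (a textbook rendering of the factor models named above, not a quote from Campbell), the Fama–French factors plus Carhart’s momentum factor give

$$ R_{i,t} - R_{f,t} = \alpha_i + \beta_{i,M}\,(R_{m,t} - R_{f,t}) + \beta_{i,S}\,\mathit{SMB}_t + \beta_{i,V}\,\mathit{HML}_t + \beta_{i,U}\,\mathit{UMD}_t + \varepsilon_{i,t}, $$

where SMB (small minus big) is the size factor, HML (high minus low book-to-market) is the value factor, and UMD (up minus down) is the momentum factor.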
In my view, rational models have plenty of ways to explain the value premium, but the phenomenon of momentum is really very hard for a purely rational model to explain. If there’s one thing that should make Gene Fama uncomfortable, it’s probably momentum. The behavioral story about momentum is that a lot of people aren’t paying enough attention to fundamental news, so there’s money to be made by, whenever you see prices go up, jumping on it and driving them up more.
We should get back to Hansen. You’ve got this wonderful line in your paper about meeting him and “sensing that his penetrating insight would require effort to fully understand but would amply reward the undertaking.”
Lots of distinguished economists have had this experience of either reading a Lars Hansen paper or listening to a Lars Hansen presentation and feeling that it’s a sort of message from the future — like an alien artifact dropped from a flying saucer. That is, potentially amazing technology if you can only figure out how it works. Lars is famous for that.
But at this point a lot of his work can be translated, and is widely used and understood. Gene Fama said we can’t test market efficiency unless we have this auxiliary model. Hansen said, well, hang on a minute. Suppose we believe that the market is efficient if we have the right model, but suppose that this right model that we have in mind has some unknown parameters that come from the impatience of investors or the risk aversion of investors or other features of the world — and we want to know what these parameters are. Well, presumably the right parameters are the ones that, when plugged into the model, make returns as unpredictable as possible. Because we know that if the market’s efficient and we have the right model, then returns are unpredictable.
You’ve explained that in a way that I almost understand it. Is that the GMM [generalized method of moments]?
That’s the GMM.
It sounds useful.
It’s extremely useful. It’s sort of the standard method that any of us use when we come up with some model. We say oh, that’s a nice model, how are we going to see if it works and what these unknown parameters are? It’s a universal tool, and it’s very, very important.
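The idea can be compressed into a couple of lines (a textbook sketch, not Hansen’s notation). If the “right model” with parameters $\theta$ implies a pricing condition such as

$$ E\!\left[\, M_{t+1}(\theta)\, R_{i,t+1} - 1 \,\right] = 0 \quad \text{for each asset } i, $$

then GMM estimates $\theta$ by computing the sample version of these pricing errors, $g_T(\theta) = \tfrac{1}{T}\sum_{t=1}^{T}\left[ M_{t+1}(\theta)\, R_{t+1} - 1 \right]$, and choosing the $\theta$ that makes $g_T(\theta)^{\top} W\, g_T(\theta)$ as small as possible for some weighting matrix $W$ – that is, the parameters that leave returns as unpredictable as possible.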
So that’s the first thing about Lars. The other place where I bring him in is next to the discussion of behavioral finance, because Lars has also worked on models of in-a-way-irrational beliefs. Although what Lars does sort of blurs the distinction between rational and irrational beliefs.
He draws on an engineering literature called robust optimal control. Suppose you’re an engineer and you’re building a bridge. You have some physical understanding of the way in which the traffic is going to vibrate the bridge and cause stresses on the supports, but there’s a range of possible models of how much vibration the trucks are going to make. So what you might do as an engineer is try to build your bridge to be safe in a worst case. You can’t literally take the ultimate worst case, because your bridge would be infinitely expensive. You’re going to have to take a worst reasonable case. The discipline of engineering has evolved to do precisely that. They have rule-of-thumb methods, but they also in recent years have developed a more mathematical, abstract approach called robust optimal control. And Lars has taken some of these ideas and applied them in finance.
He’s saying you don’t know how good the return on the stock market is going to be. There’s a range of pundits, they all seem to say different things. Maybe you want to invest in a way that will give you good results even in a worst reasonable case. So if you were inclined to put a lot of money in the stock market you say no, I won’t do that because the equity premium might be very low and then I’m taking a lot of risk and not getting much reward. So perhaps I’ll invest as if the equity premium is lower. He takes this perspective of almost deliberately choosing to have beliefs that are pessimistic to protect yourself in case things go wrong. These beliefs aren’t rational in the literal sense, but nor are they crazy. He uses this word “robust”: they’re defensible beliefs that have this degree of pessimistic conservatism built into them.
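Schematically (a loose summary, not Hansen’s own formulation), the robust investor solves a max–min problem:

$$ \max_{w}\; \min_{m \in \mathcal{M}}\; E^{m}\!\left[\, U(W_w) \,\right], $$

choosing the portfolio $w$ that does best under the worst reasonable model $m$ in a set $\mathcal{M}$ of models the investor cannot rule out; in Hansen and Sargent’s work that set is disciplined by a penalty on how far a candidate model may stray from a benchmark.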
They sound pretty, well, rational.
It’s another way in which this stuff blurs the distinction between rational and irrational. In a way it’s surprisingly close to Shiller, but Shiller’s view is that people can’t know the true model and then become excessively influenced by social forces, and can get into a herd mentality. In Lars’s world people don’t have the true model, but they know that they don’t have the true model and they react by being very conservative. Lars’s guys are irrational in a level-headed way, and Bob Shiller’s guys are irrational in a way that is subject to these social fads and fashions.
It seems like the clearest practical lessons from this academic work have been in asset management. There are people very much steeped in this work, including you, who are out there managing stocks and other assets based on it. The sense I’ve gotten, and I’ve talked to Cliff Asness a lot, is that markets are pretty efficient but there are all these little things going on that if you’re careful about it you can take advantage of.
The late, great Paul Samuelson once talked about micro efficiency and macro inefficiency. What he meant was that if the inefficiencies you’re looking for are relative mispricings of different types of stocks, that will be corrected fairly soon, because if they got big there’s a lot of easy money to be made. Cliff Asness would be all over it, and the mispricing would disappear in no time. Whereas if you’re talking about macro things, the big, long swings in the market, the fact that stocks were so expensive in the 2000s and so cheap in the fall of 2008 is very difficult to arbitrage away. You have to be a macro hedge fund, and not only do you the hedge fund manager have to have nerves of steel, your clients have to have nerves of steel.
A theme of Bob’s work is that these long swings over time in the market can have big effects on prices and be large inefficiencies. Whereas when it’s arbitraging away small price discrepancies across similar assets at a point in time, that can be done easily so you’re likely to find that the deviations are small.
But because that’s less dangerous to do, that’s the direction that most quantitative money managers that come out of the academy actually take, right?
Absolutely. A lot of my work is on big asset allocation themes and things that play out over many years. But when I’ve spent time in the industry it’s with a quant equity firm that’s trying to pick stocks and beat an index and show results in a reasonable period of time to clients. What we often find is that asset allocation is something the clients themselves want to manage. The results take years to play out and it’s hard to set up a contract with an asset manager to hire them to do this because it takes so long to see if they’re right or not.
Another area where these debates have some resonance is in policy — monetary policy, financial regulation, and the like. There it seems like this macro inefficiency is actually pretty important.
Absolutely. As we think about the stability of the financial system, large swings in asset prices and big changes in risk premia can be very important, and relatively hard to arbitrage away.
It seems like the lesson that came out of the ’60s and ’70s is that more finance is better. More people out there trying to arbitrage away all these inefficiencies will make markets more efficient. At a micro level sure, that’s probably right. But at a macro level maybe they make the swings bigger.
There’s a very meaningful debate about that now. If we come back to the Nobel guys, one of the interesting things about Shiller is that despite being famous for his work on bubbles, he doesn’t say let’s shut it down. Instead what he says is let’s have financial innovation that is actually helpful. His vision of financial innovation is that by designing the right instruments, you could help people be more rational, because you could focus their attention on the things that matter.
One simple example would be if you take a stock and break it into claims to dividends at different parts of the future — the near term and then the next year and then the year after that and so on — and you trade them all separately. It forces people to recognize that the value of the stock should be the value of all these claims to dividends in particular years. If you want to pay a fortune for this stock, you have to recognize that you’re paying a fortune for a claim to cash that will be paid in some particular future year. It kind of focuses your belief about when the ship is going to come in. The pot of gold can no longer be just at the end of the rainbow. You have to say where it is exactly. And that might help people be more rational.
The capital asset pricing model was supposed to allow companies to calculate their cost of capital in a consistent way. Now that nobody seems to think that risk premia stay the same over time or that beta really does reflect everything that matters about risk in the stock market, it seems like there is no one way to calculate the cost of capital anymore. But everybody still uses the method that came out of CAPM.
You’re right. It’s a weakness of modern finance that we haven’t been able to deliver something that is as useful as the CAPM. I remember there was a Fama and French paper with the wonderful title, “The CAPM Is Wanted, Dead or Alive.” Which basically says, well, we need a model, and even if this model is in some sense dead, it’s still wanted and still used.
I think we’re groping our way towards better procedures. It’s like the CAPM is an aging champion and there are all these wannabes that would like to replace it, and none of them have quite come to the fore yet. I have an entry in the competition, an intertemporal model in which I break the market movements into permanent shocks driven by cash flows and temporary shocks driven by discount rates, and I show that one of them should have a higher price of risk than the other. It’s like in the old days cholesterol was the measure of risk, and now we know there’s good cholesterol and bad cholesterol. So in that framework what you do is calculate the beta of your firm or your project with the two components of the market return, and one of them is the one that you really worry about. It’s a relatively small change in the procedure, but I have some hopes that in the long run it might prevail. But these changes in thinking take a very long time.
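Schematically (a rendering of the published two-beta idea, not a quote), the decomposition splits the ordinary market beta into two pieces,

$$ \beta_{i,M} = \beta_{i,\mathit{CF}} + \beta_{i,\mathit{DR}}, $$

where $\beta_{i,\mathit{CF}}$ measures exposure to the permanent, cash-flow-driven part of market movements and $\beta_{i,\mathit{DR}}$ measures exposure to the temporary, discount-rate-driven part; in the intertemporal model the cash-flow (“bad”) beta carries a higher price of risk than the discount-rate (“good”) beta, so a project’s cost of capital depends on which kind of market risk it is exposed to.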



April 1, 2014
The Art of Raising Prices: Lessons from Amazon Prime
With the economy on the upswing, most companies are contemplating increasing prices. Managers are rightfully anxious about making this move – it’s a sensitive issue for customers. While no one enjoys paying more, the right series of messaging tactics can tame potential backlash.
Amazon recently hiked the price of its Prime service, which includes two-day shipping, Kindle book loans, and streaming video. Raising Prime’s price is especially risky because the service is a key marketing conduit that draws in customers and engenders loyalty. Analysts estimate that Prime members spend more than twice as much as the average Amazon patron. With a P/E ratio exceeding 550, Wall Street is expecting Amazon to continue dazzling investors with eye-popping annual revenue increases. As a result, the Seattle-based retailer needs to keep Prime – its key engine for growth – in tune.
Amazon did a solid job of raising the price of Prime from $79 to $99. Given its success, managers of all companies can learn from the tactics it employed:
Float a Scarier Number. Amazon initially hinted it was contemplating a $20 – $40 increase. So when it formally announced “only” a $20 boost, customers let out a sigh of relief (“whew, it could have been worse”). Pre-announcing a possible 25% – 50% price increase planted the 50% figure in consumers’ minds, and Amazon then beat that expectation by going with 25%. Well played, Amazon!
Blame Costs. Consumers don’t want to hear prices are rising to sweeten executive paychecks or fatten shareholders’ 401(k) accounts. It’s textbook to cite higher costs as justification. On cue, Amazon identified the culprits as higher shipping and fuel costs.
Appeal to Fairness. Consumers are receptive to a price hike if they perceive it as fair. Amazon clearly noted that it has not raised Prime’s price since its inception nine years ago. C’mon, nine years is a long time!
Hint That There’s More to Come. Rumors have been swirling that Amazon will add a new feature to its Prime package – streaming radio. More features translate into a better deal for customers.
Don’t Cave to Simple Market Research. UBS conducted a survey indicating that a $20 price increase would put 42% of current Prime customers at risk of ditching the service. This led the Swiss financial services company to downgrade Amazon’s stock from “buy” to “neutral.” There are two problems with interpreting this research. First, customers aren’t likely to be amenable when asked how they feel about a higher price – thus, their responses are biased downward. Equally important, survey respondents were probably not exposed to Amazon’s reasoning for increasing prices. This spin is critical to how customers evaluate a price increase.
While Amazon did a good job of raising prices, let’s call it a B+, there is more it should have done:
Remind Customers of the Value It Provides. At $99 (or even $119) a year, Prime is a great value. From a streaming-only perspective, Netflix charges $96 annually (with, admittedly, a more comprehensive selection). The once-a-month Kindle lending library ostensibly saves Prime members the price of an e-book they would otherwise buy, say $9.99 a month (about $120 annually). Finally, two-day shipping is a big time- and money-saving service. How convenient is it to click to purchase a $5 pair of scissors instead of incurring the time and gas expense of visiting a local Target? Saving 15 – 20 minutes here and there in our busy schedules is meaningful, hence valuable. Even if members don’t currently use all of these services, this value reminder may encourage them to do so – which deepens loyalty.
Resist the One Price Fits All Mindset. As I have previously argued, to effectively grow a business, one price does not fit all. Offering consumers a range of good-better-best options truly enables companies to profit and grow. Price sensitive customers purchase the good option while consumers who highly value a product spring for the best. Since Prime is so critical to growth, it needs a lower price point entry version to attract new customers. Amazon should have tiered its Prime program.
Raising prices is an angst-inducing ritual for all companies. The art of raising prices involves a clear yet sensitive dialogue with customers to justify the lift. As Amazon demonstrated, even a 25% price increase can be palatable if pitched correctly.



Badges? We Don’t Need No LinkedIn Badges
The son of a Texan friend of mine decided earlier this year not to go to college, as someone like him would surely have done a few years ago. Instead he dropped out of high school and went off to interview potential employers in San Francisco. A talented programmer, he accepted one of a number of job offers, turning down Google in the process, and is already hard at work.
To me this means that he has, in essence, made the calculation that the use of social networks will replace the “badges” that my generation valued to determine reputation. Having spurned Eric Schmidt’s advice from SXSW that everyone should go to college, he has begun to stockpile the social capital that will power the post-industrial revolution. He has opted for the social network over the fraternity as the basis for future advancement, and I hardly think he will be the last to do so.
He is bypassing the “hack” of using badges as a substitute for more valuable and accurate information that used to be too expensive to gather. A college degree is a hack? Yes. At an “Unconference” my colleagues and I hosted in Palo Alto last year, Sam Lessin, the Head of the Identity Product Group at Facebook, talked about the way in which human societies evolved trust networks to increase efficiency by connecting with more trading partners and by capturing more information about those partners over time. Money became a key element of these trust networks because it was cheaper to trust the money than the credit of a counterparty beyond your clan, village, or tribe. In time, the networks grew, and it became cheaper to trust other intermediaries, too, rather than gather and store the information about trading partners yourself.
Sam introduced me to a useful way of thinking about these intermediaries, what he called social “hacks.” Shortcuts. Workarounds. Approximations. These hacks are things like badges, diplomas, dress codes, and, as it happens, credit ratings. However, because of what he called the “superpower” all of us have now gained – the ability to instantly communicate with anyone else on Earth – we will no longer be needing those hacks.
This way of looking at the existing business models around identity (that is, as being hacks in response to incomplete authentication, attributes and reputation data) provides a good way of understanding the logic of what my acquaintance’s son has done.
As social capital (the result of the computations across the social graph) becomes accessible and usable, the hacks will fade. A college degree will be worth less than it is now. Using hacks instead of real data is just not good enough in a connected world. Google was famous for its rigorous hiring criteria, but when its analysts looked at “tens of thousands” of interview reports and attempted to correlate them with employee performance, they found “zero” relationship. The company’s infamous interview brainteasers turned out not to predict anything. Even more interesting: nor did school grades and test scores. As job performance data racks up, the proportion of Google employees with college degrees has decreased over time. It’s a development that Rory Sutherland, the Vice Chairman of Ogilvy & Mather UK, echoed when he recently wrote that he was unable to find any evidence that “recruits with first-class degrees turn into better employees than those with thirds (if anything the correlation operates in reverse).”
You can see exactly where this is headed if you think about the way we already use LinkedIn, Twitter, and Quora. In the old world, I would use the social hack of finding out which university your degree came from as a proxy for the things I wanted to know about you if I was recruiting. But I no longer need to do that, because from the social graph I can find out if you are smart, a hard worker, a team player, an expert on the endochronic properties of resublimated thiotimoline, or whatever. So there’s less of a premium on your having learned, say, biochemistry at Harvard rather than Swindon Polytechnic: as long as you know the biochemistry, my hiring decision will be tied to your social graph, not the hack of institutional badges.
It is happening now because the powerful combination of the mobile phone, the social graph, and new authentication technologies is reducing the cost of using social capital effectively at a transaction level. Hacks such as high-school diplomas and glossy CVs are being replaced by social capital because the social graph is a more efficient form of the kind of memory we need to make transactions work.
Personally, I still think there is something to Eric Schmidt’s advice. I didn’t go to college only to learn about Physics, but to be socialized. I found out about politics and arguing, about learning and art, about curiosity and community. What I suspect, therefore, is that the college degree is not about to disappear but about to transform. Maybe two years rather than three or four will be sufficient for a great many people across a great many disciplines. I’ll still want a doctor who went to medical school, but perhaps I’ll want a programmer from the school of LinkedIn. Most of the working world will fall somewhere in between.



The Sexiest Job of the 21st Century Is Tedious, and That Needs to Change
As organizations collect increasingly large and diverse data sets, the demand for skilled data scientists will continue to rise. In fact, it was dubbed “The Sexiest Job of the 21st Century” by HBR.
Unfortunately, the day-to-day reality of the role doesn’t quite match the romanticized version.
Starting in 2012, my colleagues and I began taking a closer look at the hands-on experience of data scientists. At Stanford, I conducted 35 interviews with data analysts from 25 organizations across a variety of sectors, including health care, retail, marketing, and finance. Since then I’ve spoken with another 200-300 analysts. What we found was that the bulk of their time was spent manipulating data – a mix of data discovery, data structuring, and creating context.
In other words, most of their time was spent turning data into a usable form rather than looking for insights.
Granted, this stems from a positive shift in analytics. Whereas companies once maintained tight control over data warehouses, they are now shifting toward more agile analytic environments, because the drive for data-driven decision-making has catalyzed the need for a different type of work. Today, data quality is no longer about a central truth but instead depends on the goal of the analytic task. Exploratory analysis and visualization require that analysts be able to fluidly access disparate sources of data in various formats.
The problem is that most organizations aren’t set up to do this. In traditional data warehousing environments, IT teams structure data and design schemas when the data is loaded into the warehouse, and they are then largely responsible for ensuring strict data quality rules. While this upfront design and structuring is costly, it worked fairly well for years. Now that companies are dealing with larger and more complex data sets, however, this old way of managing data is impractical.
To keep pace, most organizations are currently storing raw data and structuring on demand. Schemas and relationships between datasets are now derived at time of use instead of at the time of load. This shift gives data analysts more flexibility to find unexpected insights, but also places the time-consuming onus of discovery, structuring, and cleaning solely on them.
Indeed, in our 2012 study of data analysts, we characterized the process of data science as five high-level tasks: discovery, wrangling, profiling, modeling, and reporting. Most analytic and visualization tools focus on the last two phases of this workflow. Unfortunately, most of a data scientist’s time is spent on the first three stages.
These three involve finding data relevant to a given analysis task, formatting and validating data to make it palatable to databases and visualization tools, diagnosing data for quality issues, and understanding features across fields in the data. In these phases, data scientists encounter numerous challenges, including data sets that may contain missing, erroneous, or extreme values. These tasks often require writing idiosyncratic scripts in programming languages such as Python and Perl, or extensive manual editing in tools like Microsoft Excel. And if quality problems are not caught, they can make any downstream assumptions wrong or misleading – poor data quality is the primary reason 40% of all business initiatives fail to achieve their targeted benefits.
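To make the kind of scripting concrete, here is a minimal sketch of the wrangling and profiling work described above, written with pandas; the file names, column names, and outlier threshold are hypothetical, and real scripts of this kind tend to be far more idiosyncratic.

```python
# Minimal sketch of an ad hoc wrangling/profiling script (hypothetical data).
import pandas as pd

# Raw extract pulled from a data lake or operational log.
df = pd.read_csv("transactions.csv")

# Structuring: normalize formats so downstream tools can parse the fields.
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

# Profiling: check for missing, unparseable, or extreme values.
print("Share of missing/unparseable values:")
print(df[["order_date", "amount"]].isna().mean())

# Cleaning: drop rows that cannot be repaired and clip implausible outliers.
df = df.dropna(subset=["order_date", "amount"])
df = df[df["amount"].between(0, df["amount"].quantile(0.999))]

# Hand the cleaned table off to analysis and visualization tools.
df.to_parquet("transactions_clean.parquet")
```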
Because of this, the skills of talented data scientists are often wasted as they become bogged down in low-level data cleansing tasks or encumbered when they cannot quickly access the data they need. This creates a huge bottleneck, stalling the progression of data as it moves from data stores like Hadoop to analytic tools that allow for greater insights. Data cleansing and preparation tasks can take 50-80% of the development time and cost in data warehousing and analytics projects.
Instead of solving these problems, organizations are often adding to the amount of data that requires a data scientist’s attention. Through activity and system logs, third-party APIs and vendors, and other publicly available sources, companies have access to an increasingly large and diverse set of data. But without the right systems in place, the prohibitive cost of data manipulation leaves much of this data dormant in “data lakes.”
And as data analysis becomes a core business function for many departments, skilled analysts and members of IT spend large chunks of time helping others access the data they need via low-level programming instead of doing any analysis themselves.
According to Gartner, 64% of large enterprises plan to implement a big data project in 2014, but 85% of the Fortune 500 will be unsuccessful in doing so. These time-consuming data preparation tasks are largely to blame. Not only do they throttle individual data scientists, but they greatly decrease the probability of success for big data initiatives.
If we can ever hope to take full advantage of big data, data preparation is going to need to be elevated out of the manual, cumbersome tasks that currently make up the process. Data scientists must be enabled to transform data with greater agility, not just manually prepare data for analysis. Domain experts will need to be able to explore deeper relationships between data sets without data being diluted by prolonged IT programmer or data analyst involvement.
Ultimately, the goal of data analysis is not simply insight but improved business process. Successful analytics can lead to product and operational advancements that drive value for organizations, but not if the people charged with working with data aren’t able to spend more of their time finding insights. If data analysis ever hopes to scale at the rate of technologies for storing and processing data, the lives of data scientists are going to need to get a lot more interesting.



Seven Things Great Employers Do (that Others Don’t)
For most people, paid work is unsettling and energy-sapping. Despite employee engagement racing up the priority list of CEOs (see, for example, The Conference Board’s CEO Challenge 2014), our research into workplaces all over the world reveals a sorry state of affairs: workers who are actively disengaged outnumber their engaged colleagues by an overwhelming factor of 2:1. The good news is that there are companies out there bucking the trend, and we’ve discovered how.
Over a five-year timeframe, we studied 32 exemplary companies (collectively employing 600,000 people) across seven industries including hospitality, banking, manufacturing, and hospitals. At these companies, the engaged workers outnumber the actively disengaged ones by a 9:1 ratio. To understand what drives that tremendous advantage, we looked for contrasts between them and a much larger set of companies we know to be struggling to turn around bland and uninspiring workplaces.
We found seven elements in place at the companies with spirited employees that are notably lacking in the others. Are all seven of them causes of high performance? No doubt at least some of them involve virtuous circles. But as a recipe for an engaged workforce, these are ingredients we feel confident in recommending:
Have involved and curious leaders who want to improve. Leaders’ own attitudes, beliefs, and behaviors have powerful trickle-down effects on their organizations’ cultures. Leaders of great workplaces don’t just talk about what they want to see in the management ranks – they model it and keep practicing to get better at it every day with their own teams. By displaying a little vulnerability and visibly working on improving themselves, they signal that such engagement is how one gets ahead.
Have cracking HR functions. The best HR people have a gift for influencing, teaching, and holding executives accountable – this is important because many executives rise through the ranks despite not being very good managers. HR experts teach leaders and managers to stretch and develop employees in accordance with their natural capabilities. By the way, when you find cracking HR leaders, hold on to them for dear life: they are as rare as hen’s teeth.
Ensure the basic engagement requirements are met before expecting an inspiring mission to matter. When employees know what is expected of them, have what they need to do their jobs, are good fits for their roles, and feel their managers have their backs, they will commit to almost anything the company is trying to accomplish. Conversely, if these basic needs are not met, even the most exalted mission may not engage them. People simply don’t connect with proclamations of mission or values – no matter how inspiring these might sound in the head office.
Never use a downturn as an excuse. The excuse we hear the most to explain away a lousy workplace is the state of the economy; in periods of belt-tightening, engagement inevitably takes a hit. The experience of the 32 exemplary companies we studied calls this rationalization into question. With few exceptions, they have also had to respond to flat or declining top lines – with structural changes, redundancies, and declining real pay and benefits – and yet not only have they maintained their strong cultures, they’ve improved them. They have achieved this by being open, making changes swiftly, communicating constantly, and providing hope. The truth is that employee engagement is one of the few things managers and leaders can influence in times when so much else is out of their control. Great employers recognize this and they go about managing it in the right way.
Trust, hold accountable, and relentlessly support their managers and teams. The experiences that inspire and encourage employees are local. Strong teams are built when teams themselves size up the problems facing them and take a hands-on approach to solving them. Exemplary companies lavish support upon their managers, build their capability and resilience, and then hold them and their teams accountable for the micro-cultures they create. (There is an important corollary here: the good intentions of a CEO can backfire if he or she charges all over the company trying to fix things personally.)
Have a straightforward and decisive approach to performance management. The companies in our study with the highest engagement levels know how to use recognition as a powerful incentive currency. Indeed, a hallmark of these great workplaces is that they are filled with recognition junkies. These companies see recognition as a powerful means to develop and stretch employees to new levels of capability. Meanwhile, they see tolerance of mediocrity as the enemy. Any action or inaction that doesn’t produce appropriate consequences adds to workplace disillusionment and corrodes commitment.
Do not pursue engagement for its own sake. As it becomes increasingly possible to measure and track engagement accurately, some companies start “managing to the metric.” Great employers keep their eyes on the outcomes they need greater engagement to achieve. One of the best examples we can cite is the Hospital for Special Surgery in Manhattan. Ranked number one in the U.S. for orthopaedic surgery by U.S. News & World Report, this hospital needs a high-octane culture to meet patients’ demands. Senior Vice President of Patient Care and Chief Nursing Officer, Stephanie Goldberg, told us that patients expect miracles and her nurses would struggle to get through a single day if they themselves did not feel that they mattered to the hospital. HSS’s nurse turnover is lower than the industry average, let alone the average in hospital-rich New York.
There they are, then: the magnificent seven. Now note how different the list is from the tactics most companies are pursuing as they try to create great working environments. Many make the mistake of prioritizing the easy, shiny stuff – hip office space, remote work arrangements, and inventive benefits – over the elements that will strengthen emotional ties and connect employees more deeply to their managers, teams, and companies. Pity them: If they manage to survive and compete, it will be despite their miserable and confused staff.
Pity their employees more. Our research into a representative sample of nearly all the world’s adults shows that a job has the potential to be at the heart of a great life, but only if its holder is engaged at work. Copious amounts of prose have been devoted to how to make this happen – by making work more fun, funky, and even meaningful – but companies still fail. The exemplary companies we studied have figured out how to establish emotional connections with their staff. It isn’t easy, but if you focus on the magnificent seven, you too can create a company where people love their work.



If the Board Monitors the Company, Who Monitors the Board?
With equity holdings of major firms now in the hands of a relatively small set of mega-holders, big investors have helped rejuvenate the boardroom. The board’s independent monitoring function had always been there in name, but now it has become more widespread in fact:
The boards of more than nine out of ten of the S&P 500 companies have a lead or presiding director, up from none a decade earlier.
Poison pills have declined during the past fifteen years from 59 percent of the S&P 500 to just 7 percent.
More than 90 percent of the firms now elect all directors annually, up from 39 percent in 1998.
The CEO is the only non-independent director on 60 percent of the boards, up from 23 percent.
Executive pay has morphed from fixed to contingent. In 1982, a manufacturing executive arriving at work on the first day of the new year could expect at least three-fifths of the pay package by the end of the year, regardless of company performance. By 2013, that fixed portion had dropped from 63 to 16 percent while the incentive fraction soared from 17 to 66 percent.
With institutional investor prodding, boards have thus become far more effective monitors of management than they were a decade ago. Yet in doing so they have strengthened the board’s leadership hand as well. Chief executives still run the corporation, but directors at many firms are now stepping forward to lead in partnership with management.
This brings a whole new frontier to investor vigilance: appraising whether company directors are indeed effectively leading, not just monitoring, the firm. And that can make an enormous difference. Even a company with the best monitoring practices is a worrisome bet if the lead director is not able, if directors do not bring business leadership to the boardroom, or if directors are not all pulling in the same direction. Without a board that can lead, everything from the tone at the top and pay for performance to executive succession and strategic direction may lack the stoking it requires.
Consider Blackstone, a publicly traded private equity firm that has a long history of acquiring, strengthening, and then profitably selling companies. It evaluates a thousand investment targets a year, vets a hundred, and invests in just a handful. An important filter has become the quality of the board’s leadership, and to help appraise it, Blackstone brought in Sandy Ogg who had served as the chief human resource officer for Unilever.
When Blackstone takes a major stake in a company, Sandy Ogg then works actively with directors at the new firms to maximize their leadership value. He presses the directors “to be active” with management — but at the same time “not too active.” He has worked to force too-active directors off their boards, but he also warns directors against too little engagement, reminding them that they have been asked to serve for their business experience and professional judgment, not for their résumé or renown.
Or consider the experience of Irvine O. Hockaday Jr., former CEO of Hallmark Cards, Inc., who had served as lead director for four companies, including Ford Motor and Estée Lauder. At all four he had to become, in his own words, “the connecting rod to the rest of the board,” where he helped define its modus operandi, bridge the divide between the board and management, and encourage and ensure coordination of the board’s audit, compensation, and governance committees. He served as a sounding board for both directors and executives, and, when conflicts inevitably emerged, as reconciler. Personal connections were vital to such service: “So much about being an effective lead director” is a result, he reported, of “your ability to establish relationships and work well in the context of the DNA of a particular board.” In sum, he said, “I look at the lead director as a conductor, like a symphony director, working to ensure maximum collaboration among directors and maximum support of management.”
In an era when boards can and should both monitor and lead, institutional investors will want evidence that the lead director can indeed lead. And among the questions to appraise that leadership function: Has the board picked the right lead director and established a procedure to identify the next? Does the lead director conduct effective executive sessions and make sure that the CEO is receiving true feedback from the directors? Is the lead director able to work well with top executives—but also ready to ensure that a faltering CEO is either mentored or removed? Has the lead director arranged a way for directors to communicate directly with investors? Does the board annually evaluate the performance of the lead director? Has the lead director arranged for the best prepared directors to serve as chairs of the key committees? Does the lead director regularly consult off-line with the other directors? Is the lead director focusing the directors on the company’s greatest challenges? And most important of all, are the directors actively leading the company on key decisions in partnership with the executives, not just monitoring them?
Though extracting information on those factors may be hard, we believe it can be well worth the time. James G. Cullen, who had served as lead director at Johnson & Johnson for a decade, summed up what many lead directors have repeatedly told us: “A skilled board leader can wring a lot out” of the board’s deliberations, and that can be vital to the “successful strategic momentum of the business.”
Investors have long been concerned with the monitoring function of the board, and what is needed now is an equal focus on whether it is also a well-led board, with
a leader who organizes and directs the board
a strong governance and nominating committee
a working partnership with top management
active directors who bring extensive leadership experience of their own
an absence of dysfunctional directors
an annual evaluation of both individual directors and the whole board
a set of protocols for making or delegating decisions
a commitment to lead, not just monitor, the company.
Though vital, much of this is not public, and investors will want to sit with lead directors and their chief executives both to learn about it and to advocate it. For their part, lead directors and their CEOs will want to offer a compelling story about not only the company’s strategy but also about their board’s leadership.
Ram Charan, Dennis Carey, and Michael Useem are offering a two-day program on “Boards That Lead” at Wharton Executive Education on June 16-17, 2014.



Does Your Company Have Enough Sales Managers?
A healthcare industry sales executive recently told us that as part of a continued effort to cut costs, her company had reduced the number of first-line sales managers from 66 down to 30 over a period of several years. This meant that management span of control had more than doubled from an average of 5-6 salespeople per manager up to 12-15 per manager. Certainly, the move saved costs, but was it a good idea?
The average span of control for U.S. sales forces is 10-12 salespeople per manager, but there is wide variation around this average. At an energy company that sells to large utilities and industrial organizations, sales teams work with customers to deliver technically complex, custom solutions; sales managers each supervise an average of 6-8 strategic account managers. At the other end of the spectrum, at a consumer packaged goods company, part-time merchandisers perform activities such as stocking shelves, setting up displays, and conducting inventories in retail stores. The merchandising force operates with an unusually high span of control of 50 merchandisers per manager.
Span of control decisions affect sales management efficiency and effectiveness. If sales managers oversee too few salespeople, the sales force incurs high costs and underutilizes management talent. Managers may micromanage their people. They may get overly-involved in customer management tasks that salespeople should do themselves. And they may spend too much time doing low-value administrative work. Sales management is inefficient. Alternatively, if sales managers have too many people reporting to them, they can’t spend enough time coaching and supervising each person. Salespeople will have unequal skill and quality, and will execute the sales process with varied success. Managers won’t have enough time to spend with key customers or to develop strategies for driving long-term business success. Sales management is ineffective.
The best way to figure out the right span of control is to first understand what sales managers do, what they should do, and how much time it takes to execute the responsibilities that can’t be delegated. Management tasks fall into three categories.
People management. This includes hiring, coaching, supervising, and conducting performance reviews. Our recent survey of sales leaders indicates that most sales managers spend 30-55% of their time with people management, but the percentage varies with the span of control as well as with the amount of time it takes to manage each salesperson. Time per salesperson depends on the specific people management tasks, as well as on the complexity of the sales process, the knowledge and experience of salespeople, the quality of sales support (e.g. information systems, onboarding, and training), and the extent to which salespeople are empowered to act without close management supervision.
Customer management. This includes account planning, customer visits, and assisting salespeople appropriately with important sales process steps and key customers. Our survey indicates that most sales managers spend 25-40% of their time with customer management, but this percentage varies with the number of customers managers are responsible for, as well as with the nature of the managers’ selling responsibilities and the size and needs of each customer.
Business management. This includes sales meetings, budgeting, complying with administrative requirements, and other activities that keep information flowing between headquarters and the field. Our survey indicates that most sales managers spend 20-35% of their time with business management. The percentage does not vary significantly with the number of salespeople or customers that managers are responsible for, but it does vary by situation. For example, business management time is often greater when sales managers control local budgets and resources, or when they must adapt sales strategies to local needs.
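To make the arithmetic behind span-of-control decisions concrete, here is a rough back-of-the-envelope sketch. The weekly hours, people-management share, and coaching time per salesperson below are illustrative assumptions, not figures from the survey, but the structure of the calculation is the point.

```python
# Back-of-the-envelope span-of-control estimate; all inputs are illustrative
# assumptions, not survey data.
WEEKLY_HOURS = 45                # a manager's available working hours per week (assumed)
PEOPLE_MGMT_SHARE = 0.45         # share of time devoted to people management (assumed)
HOURS_PER_SALESPERSON = 2.0      # coaching/supervision hours each rep needs weekly (assumed)

people_mgmt_hours = WEEKLY_HOURS * PEOPLE_MGMT_SHARE          # 20.25 hours per week
feasible_span = people_mgmt_hours / HOURS_PER_SALESPERSON     # about 10 salespeople

print(f"People-management hours per week: {people_mgmt_hours:.1f}")
print(f"Feasible span of control: about {feasible_span:.0f} salespeople per manager")
```

Under these assumptions a manager can properly coach roughly 10 direct reports; halve the coaching time each rep needs and the feasible span doubles, which is why the complexity of the sale and the experience of the salespeople matter so much to the answer.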
Understanding how sales managers spend their time often highlights productivity improvement opportunities. Additional findings from our survey include:
Too often work with low value to customers and the company creeps into the sales manager’s job. This includes many easy but time-consuming administrative tasks. The urgent nature of these tasks prevents managers from performing higher impact (and usually more difficult) duties.
Although some business management activities are important for long-term success (sharing market insight, developing local business plans), the business management role is too often a manager time trap. Most managers spend too much time on administrative business management and too little time on people management. The sales leaders we surveyed indicated that on average, sales managers should shift a half day each week from business management to people management.
By eliminating low-value work, or delegating it to less-expensive resources, some sales forces have the opportunity to increase span of control while focusing managers’ attention on higher-value activities.
So was the healthcare company better off operating with 5-6 salespeople per manager or with 12-15 per manager? The answer is not driven by costs alone; it also depends on management effectiveness. A company needs enough sales managers to ensure that all key people, customer, and business management tasks get executed well. At the same time, a company must ensure that non-critical, administrative tasks aren’t polluting the sales managers’ role. Finally, a company must understand how the role is changing so it can build a sales management team that can drive success today and in the future.


