Marina Gorbis's Blog, page 1416

May 22, 2014

Three Quick Ways to Improve Your Strategy-Making

The standard strategy processes at most companies share three common characteristics: 1) you wait until the annual strategy review to revisit your strategy; 2) you put together a SWOT analysis as input to the start of the strategy process; and 3) you start the strategy process with a long and arduous exercise to wordsmith a mission/vision statement or organizational aspiration.


These activities are, no doubt, reassuring and familiar. They are also almost completely useless. Let’s take a look at each in turn:


The annual strategy cycle


Last time I checked, competitors don’t wait for your annual strategy cycle to attack, customers don’t wait for your annual strategy cycle to shift their preferences, and new technology doesn’t wait for your annual strategy cycle to leapfrog yours.


Strategy can’t wait for bureaucratic, non-market timing. When you make your strategy choices, you need to specify what aspects of the competitive marketplace — consumer preferences, competitor behavior, your own capabilities — have to remain true for the strategy to be a good one. Then, you need to monitor those religiously.


If those facts about the marketplace don’t change, then revisiting strategy is unnecessary and unhelpful. But as soon as any one of them ceases to hold true, the strategy needs to be revisited and revised. Waiting for a pre-ordained time to do that only benefits your competitors.


The up-front SWOT analysis


Perhaps the single most common way to kick off a strategy process is with a SWOT analysis. However, there is simply no such thing as a generic strength, weakness, opportunity, or threat.


A strength is a strength only in the context of a particular where-to-play and how-to-win (WTP/HTW) choice, as is the case for any weakness, opportunity and threat. So attempting to analyze these features in advance of a potential WTP/HTW choice is a fool’s game. This is why SWOT analyses tend to be long, involved, and costly, but not compelling or valuable. Think of the last time you got a blinding insight on the business in question from an up-front SWOT analysis. I bet one doesn’t come to mind quickly. The up-front SWOT exercise tends to be an inch deep and a mile wide.


The time to do analyses of the sort that typically turn up in SWOT analyses is after you have reverse-engineered a WTP/HTW possibility. That will enable you to direct the analyses with precision at the real barriers to making a strategy choice — the exploration will then be a mile deep and an inch wide.


Writing a vision or mission statement


Typically right after the SWOT exercise, the strategy team turns its attention to producing a vision or mission statement. This often devolves into a long and arduous process during which the team members argue sincerely about specific word choices in order to produce the “perfect” statement.


Unfortunately, you can’t nail down your vision/mission statement (or what I refer to as your Winning Aspiration) without having made your where-to-play/how-to-win (WTP/HTW) choice. Spending time wordsmithing a vision/mission statement before making a WTP/HTW choice is a colossal waste of time.


That doesn’t mean an aspiration is unhelpful. So, take a quick first pass at a statement before diving into WTP/HTW. But don’t spend more than an hour on it — and then keep revisiting it during and after the making of the WTP/HTW, capabilities, and management systems choices.


If your strategy process is anchored in these three activities, you can expect to see a big improvement if you simply put the cart back behind the horse and lead with some decisions about where to play and how to win.


Strategy making is not about unearthing and implementing a causal chain from the first principles of market conditions and existing capabilities towards the one right market position. It’s about making choices and taking gambles to get where you want to be. Make your strategy process reflect that fact.




Published on May 22, 2014 05:00

May 21, 2014

Will You Be Able to Repay That Student Loan?

This month, thousands of college seniors are tossing their mortarboards in the air – and getting ready to start paying off their student loans.


But will they be able to? A recent National Bureau of Economic Research working paper by Lance J. Lochner and Alexander Monge-Naranjo takes a closer look at the problem, going beyond simple default rates and looking at repayment patterns, and the total amount owed, more closely. They researched graduates who were not currently making any payments 10 years after finishing school, either because those borrowers were in default or because they had received a forbearance or deferment on their loans. (Deferments and forbearances are more common in the early post-college years, and considered more serious 10 years out.)


One big determinant: how much money you make after you graduate. The researchers found that a $10,000 increase in post-school salary translates into a 1.2% increase in the amount repaid.


It also matters where you went to school. Graduates from four-year colleges tend to repay more of their debts (see the point above about making more money). Two-year colleges and for-profit colleges turn out the most defaulters (and more drop-outs), even though their debts are lower. (Critics of for-profit schools blame the schools for this; the schools themselves say they are simply serving a more financially precarious population, in essence shifting the blame to their students.) Students attending historically black institutions tended to graduate with less-than-average debt, although the researchers warned that the sample size here was too small to draw specific conclusions.


Finally, it also matters how much you borrowed. For every additional $1,000 borrowed, the likelihood of nonpayment rises by 0.4 percentage points. Put differently, to offset every additional $1,000 you borrow, you need to earn an additional $10,000 in income or your risk of nonpayment will rise.
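As a rough back-of-envelope sketch of how those two point estimates net against each other (the linear extrapolation beyond small changes is my own simplification, not the paper's model):

```python
# Back-of-envelope sketch of the two point estimates cited above:
#   +$1,000 borrowed   -> nonpayment risk up 0.4 percentage points
#   +$10,000 earnings  -> offsets one $1,000 increment of borrowing
# Linear extrapolation is an illustrative simplification, not the
# paper's actual model.

def nonpayment_risk_change(extra_borrowed, extra_income):
    """Net change in nonpayment risk, in percentage points."""
    risk_up = (extra_borrowed / 1_000) * 0.4
    risk_down = (extra_income / 10_000) * 0.4
    return risk_up - risk_down

# Borrowing $5,000 more with no change in income:
print(nonpayment_risk_change(5_000, 0))       # 2.0 (points higher)
# The same $5,000, offset by $50,000 more in income:
print(nonpayment_risk_change(5_000, 50_000))  # 0.0
```

In other words, under this simple linear reading, each extra $10,000 of income buys back the risk added by one extra $1,000 of borrowing.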


All of these factors are, to some degree, within borrowers’ control – which career path you choose after school, which school you enroll in, and whether you choose a very expensive school or a cheaper option are all up to you, even if which schools accept you, how much financial aid you’re offered, and who ultimately hires you are all outside of your direct control. But Lochner and Monge-Naranjo also found a range of factors entirely outside of student borrowers’ control, some of which mattered more than the above. For instance:


Whether your mother went to college. In a regression analysis that controlled for race, SAT score, and parental income, the researchers found that students whose moms didn’t go to college ended up borrowing about $1,500 more, and owed more on those loans 10 years out. However, they note that these borrowers do not have significantly higher default or nonpayment rates than borrowers whose mothers did go to college.


Whether you are a woman or a man. The authors note that women’s “significantly lower post-school earnings” translate into higher nonpayment rates. Women owe more on their loans 10 years after graduating. While men and women have “nearly identical” default rates, according to the paper, “women have defaulted on 80% more debt than have men.” And yet it’s very important to note that once you control for the amount of money men and women make, this gap shrinks and becomes statistically insignificant – confirming that it’s the differential in pay, not some other factor, that leaves women owing more.


Whether you are white, black, Hispanic, or Asian. “On average,” they write, “black borrowers still owe 51% of their student loans 10 years after college, while white borrowers owe only 16%. Hispanics and Asians owe 22% and 24%, respectively.”  These are among the most significant findings in the paper, and they’re worth quoting in full:


Among the individual and family background characteristics, only race is consistently important for all measures of repayment/nonpayment. Ten years after graduation, black borrowers owe 22% more on their loans, are 6 (9) percent more likely to be in default (nonpayment), have defaulted on 11% more loans, and are in nonpayment on roughly 16% more of their undergraduate debt compared with white borrowers. These striking differences are largely unaffected by controls for choice of college major, institution, or even student debt levels and post-school earnings. By contrast, the repayment and nonpayment patterns of Hispanics are very similar to those of whites. Asians show high default/nonpayment rates (similar to blacks) but their shares of debt still owed or debt in default/nonpayment are not significantly different from those of whites. This suggests that many Asians who enter default/nonpayment do so after repaying much of their student loan debt.


Importantly, the researchers did control for different college majors, different SAT scores, and different post-school earnings for each racial group. They conclude: “While blacks have significantly higher nonpayment rates than whites, the gaps are not explained by differences in post-school earnings – nor are they explained by choice of major, type of institution, or student debt levels.”


What does explain them? Lochner and Monge-Naranjo don’t have satisfying answers. They speculate that it all comes back to how much money mom and dad have. If your parents can help you out – with both cold, hard cash, and sound financial advice — you’re a lot less likely to end up in nonpayment. The researchers found that every $10,000 increase in parental earnings equated to about $250 less in student loans for their children. And an earlier study by Lochner and colleagues of Canadian students with low post-school earnings found that financial support from their parents was instrumental in keeping students out of default. But one thing that’s not in the data is how much wealth parents have beyond their earnings, which could have important racial implications – previous studies have shown that even when blacks and whites make the same salary, black families still hold less wealth.


With student loan debt at crisis levels, Lochner and Monge-Naranjo’s findings add important nuances. This is information that government leaders and lenders need to pay attention to as the debate over regulation heats up – and that students need before they make possibly the biggest financial decision of their lifetimes.




Published on May 21, 2014 08:00

3 Problems Talking Can’t Solve

Constructive conversations are a vital part of any leader’s job description. But the importance of conversation and communication as a leadership skill often goes unexamined. Extensive evidence now shows that there is a time and place for conversation — and that any leader or aspiring leader would likely benefit from a more serious look at the pitfalls of some types of dialogue. Critically, the nuances that lie within and around conversations are often as important as the conversations themselves.


Let’s look at three situations in which conversations, per se, may not be the answer to effective leadership:


“All talk and no action”: Conversations may create the illusion that something is being done, or that one is progressing, when all that is happening is communication without the necessary action. This is something we have all experienced and struggled with at work — often unaware that there is research that helps explain what leaders can do. As early as 2004, Margaret Archer, a well-respected researcher, explored how one’s actions are shaped by one’s “reflexive” nature. She defined three types of reflexives: “communicative reflexives” require others to complete their internal conversations in order to perform successfully; “autonomous reflexives” do the opposite — they shut themselves off from others to complete their internal conversations and are much more strategic; and “meta reflexives” use strongly held values to guide their reflexivity. Archer concluded that those who depend on others to complete their conversations tend to remain where they are in the organizational hierarchy, those who rely on their own internal conversations move upward, and meta reflexives move laterally within organizations.


While no one style is likely to be exclusively present in any person, this study points to the value of moving in and out of conversation. Conversation draws our attention outward and creates a feeling of movement, but it is often ineffective unless one takes the time to translate that experience, through inner conversation, into insight — the kind that moves one upward within an organization, or helps one encourage others to advance as well.


Heightened emotional sensitivity: Although conversations that progress exclusively through mutual understanding and emotional connection can be helpful when forming teams, they can be very destructive, particularly in negotiations. We may be in trouble if all we do is “feel” what another person is saying in order to understand them. A prior series of studies has demonstrated that this type of emotional sensitivity can, in fact, be detrimental to the outcome of a discussion at the bargaining table. Instead, another form of empathy, cognitive empathy, may be more useful in discovering hidden agreements within the negotiation. This kind of conversation requires using one’s head as well as one’s heart when negotiating. Often, we are content simply to demonstrate that we understand how another person feels, but this is not enough. It pays, in the context of a negotiation, to actually view things from the other person’s point of view so that one can escape one’s own biases.


“Delusional” consensus:  Conversations are also often held in order to achieve consensus, but consensus on its own does not imply effective leadership. Humans as a group are prone to multiple illusions, distortions, and psychological traps, and reaching consensus about these may lead to mass delusion rather than effective leadership. One need not even go as far as Hitler or apartheid to make this point. We are all prone to falling into psychological traps, and if we all fall into the same cognitive biases, we might have consensus but be completely wrong.


So what can we do if we are subject to these situations? First, Archer’s research would suggest that all conversation should be followed by internal reflection (see “autonomous reflexives” above) to avoid stagnation. Feed conversational data to yourself consciously and allow your own brain to process it deeply. Second, the studies on cognitive empathy suggest that we should not get carried away by sharing emotions during negotiations, but should also share points of view. And the fact that we may all fall into psychological traps suggests that decisions need to transcend consensus and may need to be implemented as “best guess” alternatives.


These studies suggest that being an effective leader requires being conversational as well as internal; using your head and your heart; and at times, acting against the consensus of the group. The next time you’re drawn into a conversation, see if you find yourself falling into any of these traps.


 




Published on May 21, 2014 07:00

The Cable Guys Need to Come up with a Better Argument

A couple weeks ago, at the annual industry hoedown known as The Cable Show, National Cable and Telecommunications Association CEO Michael Powell argued against those who think broadband internet access should be regulated as a public utility:


The intuitive appeal of this argument is understandable, but the potholes visible through your windshield, the shiver you feel in a cold house after a snowstorm knocks out the power, and the water main breaks along your commute should restrain one from embracing the illusory virtues of public-utility regulation.


Powell then went on to contrast these grim images with the shiny world of the internet:


Because the internet is not regulated as a public utility it grows and thrives, watered by private capital and a light regulatory touch.


It’s not too hard to find nits to pick here. Cable internet service goes out in snowstorms, too, and if the latest American Customer Satisfaction Index is to be believed, Americans are much happier with their energy providers (who score a respectable 76 on the index) than their internet providers (a dismal 63, absolute worst of the 43 industries tracked). Potholes are indeed a pain, although unless Powell is proposing to turn every last cul-de-sac in America into a toll road, I’m not clear what his private solution would be. As for the water-main breaks, a lot of them are caused by guys digging up roads to lay cable.


Still, Powell is right that broadband networks are still growing and improving, while the U.S. electricity, transportation, and water networks mostly aren’t. The most obvious explanation for this is that broadband providers are serving a new and burgeoning market. The power, road, and water networks had their boom times, too — and while they are now older, with limited growth prospects, this would also be true if they were unregulated and entirely private. So the question is really whether the current, mostly hands-off regulatory regime for broadband has led to faster growth and more investment than if Powell, as chairman of the FCC, hadn’t decided in 2002 to classify cable broadband as a lightly regulated “information service” instead of a common-carrier “telecommunications service.”


The best answer to this I can come up with is that while I really don’t know the answer, Powell and the cable industry make an awfully weak case for yes. There is economic evidence that some kinds of regulation depress infrastructure investment. If the government is going to restrict how much money you can make, you’ll probably invest less. But since the mid-1990s, telecommunications regulation in the U.S. has been aimed mainly at encouraging competition, not telling providers how much they can charge. For several years starting in 1998, for example, the FCC made the then-dominant providers of broadband service, the phone companies, open their DSL lines to competing internet service providers. The reasoning was that, without regulatory pressure, monopoly owners of communications networks tend to charge too-high prices that depress use of their networks and end up crimping innovation and economic growth. Opening up parts of their infrastructure to upstart competitors was a way to counteract that tendency.


Most other developed countries have continued to force broadband providers to open the “last mile” into consumers’ homes to competitors. Since Powell’s reign at the FCC, the U.S. has instead banked on head-to-head competition between the cable companies and the telcos, each with their own last-mile wires. But cable broadband is markedly superior to DSL, and Verizon’s and AT&T’s plans to wire the nation’s homes with even-faster fiber optic cable haven’t amounted to all that much, at least not yet. So the cable industry has become the dominant provider of wired broadband, and in most of the country competition is muted. Some argue, as Brendan Greeley did in Bloomberg Businessweek a few months ago, that the U.S. approach has been shown to be a mistake because the broadband rollout here has lagged that of countries that do have last-mile rules. I’m not entirely convinced by this evidence — international comparisons are difficult, and the race isn’t over yet. But it’s a lot more compelling than the cable industry’s case that we in the U.S. are living in the best of all possible broadband worlds.


The evidence trotted out by the NCTA (the cable lobby) consists mainly of the impressively large quantities of dollars that cable companies have been spending on infrastructure. But as Matthew Yglesias showed last week, that spending appears to have been declining in recent years. In response, the NCTA trotted out a new chart going all the way back to 1990, but even that showed flat spending at best over the past few years.


This piqued my interest, and I thought it might be instructive to adjust the numbers for inflation and annotate the resulting chart with a few regulatory and competitive landmarks, to see if any patterns jumped out. I asked the NCTA for the numbers underlying the chart, and was told I should get them from the source: media and communications research firm SNL Kagan. SNL Kagan told me they couldn’t give out “that many data points,” and seemed a little peeved that the NCTA had. So I settled for eyeballing the NCTA’s chart for the numbers, and adjusting them using the GDP deflator for nonresidential fixed investment. Here’s what I came up with:
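The adjustment itself is straightforward. As a sketch with made-up figures (the deflator values and spending numbers below are placeholders, not the eyeballed SNL Kagan data):

```python
# Convert nominal annual spending into real (inflation-adjusted)
# dollars using a price deflator, rebased to a chosen reference year.
# All figures here are illustrative placeholders, not the actual
# SNL Kagan / NCTA series.

deflator = {1996: 78.0, 2000: 85.0, 2013: 108.0}        # hypothetical index values
nominal_spending = {1996: 6.0, 2000: 16.0, 2013: 13.0}  # $ billions, hypothetical

def real_spending(nominal, deflators, base_year):
    """Restate each year's nominal amount in base-year dollars."""
    base = deflators[base_year]
    return {yr: amt * base / deflators[yr] for yr, amt in nominal.items()}

adjusted = real_spending(nominal_spending, deflator, base_year=2013)
# e.g. 1996 spending restated in 2013 dollars:
print(round(adjusted[1996], 2))
```

The base year is just a presentation choice; it scales the whole series without changing its shape, which is what matters when you’re looking for patterns around regulatory events.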


[Chart: NCTA cable industry infrastructure spending, 1990–2013, adjusted for inflation and annotated with regulatory and competitive landmarks]


The main story this chart tells is of a big rise in cable infrastructure spending in the 1990s, then a massive binge during the Internet bubble, then a hangover, and for the past decade a slight upward trajectory with some swings that seem related to the ups and downs of the overall economy.


This doesn’t reflect poorly on the cable guys at all. They’re spending a lot more money than they were in the 1990s, and seem to have gotten past the boom-bust ways of the early Internet to a steady investment plan. But neither does the chart back up any particular argument about the impact of regulation on capital spending. A 1992 cable law that the industry said would depress investment was followed by … a big increase in capital spending. That increase probably had a lot to do with the arrival on the scene in 1994 of satellite television purveyor DirecTV, the first direct competitor most cable companies had faced. The FCC’s 1998 ruling that broadband internet was a common-carrier “telecommunications service” did not visibly discourage spending by cable companies trying to get in on the broadband party. And then, when the FCC decided in 2002 that cable broadband wouldn’t be regulated like that, there was no discernible spending boom — at least not for quite a few years.


When I looked at a single company, industry leader Comcast, the story was one of a long, steady decline in cable investments as a percentage of cable systems revenue over the past decade and a half (I calculated it that way to reduce distortions from the acquisition of NBC Universal in 2011):


[Chart: Comcast cable capital expenditures as a percentage of cable systems revenue, declining over the past decade and a half]


What is Comcast doing with its money instead? It’s been making big acquisitions and giving lots of cash back to shareholders. In 2002, Comcast didn’t spend a penny on share buybacks or dividends; in 2012, buybacks and dividends added up to $4.6 billion, only a little bit less than the $4.9 billion in cable capital expenditures. Last year the company dialed back a bit on buybacks, with just under $4 billion returned to shareholders, but the trend is clearly upward.


This is not a bad thing — it’s what big, maturing companies do. But if Michael Powell wants to keep arguing that his broadband deregulation has unleashed a huge wave of investment and innovation in the U.S., he needs to come up with some better evidence than this.




Published on May 21, 2014 06:00

Where the Jobs Are: Fixing Those Crumbling Roads

With America’s roads, bridges, seaports, and water systems in disrepair and skilled workers heading toward retirement, jobs maintaining the nation’s infrastructure will be wide open in the next decade. The Brookings Institution says nearly a quarter of infrastructure workers will need to be replaced, but beyond that, employment in infrastructure jobs is projected to increase 9.1% from 2012 to 2022. That means an additional 242,000 material movers, 193,000 truck drivers, and 115,000 electricians, Brookings says.




Published on May 21, 2014 05:30

How Boards Can Innovate

Governing boards might seem like the last place for innovation. They are, after all, the company’s steadfast guidance system, charged with keeping an even keel in rough waters. Corporate directors are the flywheel, the keeper of the flame, the preserver of tradition.


All that is true, or at least should be, but companies are also forever having to reinvent themselves — IBM, Nucor, and Wipro bear only the faintest resemblance to their founding forms — and boards ought to be at the forefront of those transformations, not rearguard or resistant. New products are, of course, the province of R&D teams or research partners. But new strategies and structures are squarely in the board’s domain, and we have seen any number of governing boards innovating with, not just monitoring, management.


If boards are viewed as partners with management, not just overseers, innovative ideas are as likely to come from their dozen or so directors — all highly experienced and certainly dedicated to the firm’s prosperity — as from any dozen employees of the company. Some boards have taken the principle further by forming their own innovation committee. The directors of Procter & Gamble, for instance, have established an Innovation and Technology committee; the board of specialty-chemical maker Clariant has done the same; and Pfizer has created a Science and Technology committee.


The value of a board’s active engagement in innovation can well be seen at Diebold, a $3 billion company whose 16,000 employees make ATMs and a host of related products. Founded in 1876, the company has survived far longer than most major manufacturers because of a readiness to embrace new technologies — virtually none of its products today bear any resemblance to those of 100 years ago — and its directors hope to ensure that the company incorporates new technologies to survive another 100 years.


To that end, Diebold recruited a new CEO in 2013, Andy W. Mattes, who had previously led major divisions at Hewlett-Packard, Siemens, and other technology-laden companies. And then, in conducting its annual self-evaluation, the board found that a number of its directors had recommended that a board committee be created to work explicitly with the new CEO on technology and innovation — not to manage it, but to partner with management on it.  With the concurrence of the new CEO, the directors created a Technology Strategy and Innovation committee with a full-blown charter requiring its directors to “provide management with a sounding-board,” serve as a “source of external perspective,” evaluate “management proposals for strategic technology investments,” and work with management on its “overall technology and innovation strategy.”


The chair of the new three-person committee, Richard L. Crandall — the managing partner of private-equity firm Aspen Partners, who also runs a roundtable for software CEOs and is a former CEO himself — was mindful of the lurking risk that directors might stray into the weeds and step on management prerogatives. He accordingly worked out an explicit understanding among the CEO and his committee members on where the directors should and should not go. “I watch like a hawk,” he said, “to ensure we do not go too far.”


Diebold’s innovation committee members are on call for everything from brainstorming to networking. When Diebold executives began looking for new technologies it might buy, Crandall and his two colleagues — rooted in tech start-up and venture capital communities — helped the CEO and his staff connect with those who would know or own the emergent technologies that could allow Diebold to strengthen its current lines and buy into the right adjacent lines.


Innovations at the top extend even to how the board itself operates, and Blackstone Group — one of the leading investment groups in the world — has been pressing the case. Sandy Ogg, an operating partner in Blackstone’s Private Equity Group, had previously served as a senior vice president for leadership and learning at Motorola and chief human resource officer at Unilever. Having thought a lot about what makes for effective company leadership, whether in the executive ranks or around the board table, Ogg wants to know if the directors of an investment prospect for Blackstone bring a profile that is complementary to their CEO’s, “filling holes that need to be plugged.” He wants to know how prospective directors will react if a CEO tells the directors to get lost. And at companies where Blackstone has invested, Ogg presses directors to “do the work” and not just be a “business tourist.” In other words, Blackstone has been innovatively working to get more out of their boards than traditional norms might have allowed.


Innovative companies that are not innovating in and around the board room run the risk of becoming less so. For example, we are familiar with the boardroom of one of America’s premier technology makers, which is dominated by a non-executive chair who underappreciates how vital but difficult it is to create new products in its recurrently disrupted markets (the innovator’s dilemma). The board has too few technology-savvy directors, and its nomination committee has blocked suggestions for more experienced innovators on the board.


Without innovation at the top in how boards lead, companies may come to see less innovation from below. Viewed affirmatively, directors who learn to work with executives on product and service innovations constitute an invaluable — and free — asset during an era when creativity is increasingly at a premium.  And for that, observed David Dorman, former AT&T CEO and now board chair at CVS Caremark Corporation, “we need a robust set of thinkers on the board who know the market place.” With that, the board can take responsibility for ensuring that its enterprise transcends the ever-present dilemma of innovating or dying.


Dennis Carey, Ram Charan, and Michael Useem are offering a two-day program on “Boards That Lead” at Wharton Executive Education on June 16-17, 2014.


More blog posts by Michael Useem, Dennis Carey and Ram Charan







Published on May 21, 2014 05:00

May 20, 2014

What Data Journalists Need to Do Differently

The role of the data journalist has increased dramatically over the last decade. The past few months have seen the launch of several high-profile “data journalism” or “explanatory journalism” websites in the U.S. and the UK – such as Nate Silver’s recently relaunched and somewhat controversial FiveThirtyEight; Trinity Mirror’s ampp3d, a mobile-first site that publishes snappy viral infographics; The Upshot from The New York Times, which aims to put news into context with data; and Vox, where former Washington Post blogger Ezra Klein leads a team that provides “crucial contextual information” around news. The debates (pro and con) around these projects have brought data journalism out of its niche in digital media conferences and trade publications into the limelight.


These new media outlets have been received with both praise and criticism. Guardian journalist James Ball, who has been closely associated with the use of data for journalism – from his work with Wikileaks to the “Offshore Leaks” investigations – recently offered an interesting analysis of these developments. He points out a number of limitations in many of these data journalism projects — from the lack of transparency about their data, to the perpetuation of gender inequality among media professionals (“still a lot of white guys”), to the conspicuous absence of one of journalism’s most essential functions: the breaking of news.


But I think one of most important issues that Ball touches on is how journalists source their data. He points out that the recently launched high-profile data journalism outlets are using common data sets from established sources, such as government statistics and surveys from polling companies, rather than making an effort to find or generate their own data.


For decades, media scholars have been scrutinizing which views and voices are privileged, and which are neglected, in media coverage. Decisions about who and what gets attention are intimately connected to sourcing practices: which sources journalists consider credible, how they prioritize them, and what they do with the information they have sourced. While data journalism’s advocates promise a turn from opinion to evidence, anecdote to analysis, and punditry to statistical predictions, what matters is not only the nature of the source information (e.g., whether it is an interview or a database), but also where it is from, how it was produced, and what it is for. So far, my own research on data operations at major media outlets has confirmed that data journalists tend to rely heavily on a small number of established sources: mainly government bodies (such as national statistical agencies or finance departments), international institutions (such as the EU, the OECD, or the World Bank), and companies (such as audit or polling firms).


Why does this matter? How could things be different? And why should media organizations consider “making their own data” as journalist Javaun Moradi urged us to do back in 2011 (and as Scott Anthony argues here)? In a democracy, when the function of the media is to maintain the flow of information that facilitates the formation of public opinion, surely a multiplicity of voices, viewpoints and arguments need to be represented. However, not all members of society have the same level of access to the media, or the same resources to compete for media attention. Hence, when journalists are building their stories exclusively around existing data collected by a small number of major institutions and companies, this may exacerbate the tendency to amplify issues already considered a priority, and to downplay those that have been relegated or which aren’t on the radar screens of major institutions.


While data-driven reporting and investigations focused around existing collections of data from established organizations are of utmost importance for holding the powers that be accountable, data journalists should also strive to be critically aware of how established sources frame, shape, bias, and color different issues. Moreover, data journalists should strive to go beyond established sources to find or create their own data in order to bring about fresh reflections and insights or to bring new issues to the public’s attention.


By way of example, there are some promising finalists from this year’s Data Journalism Awards, for which I am a juror. For instance, collections of news articles can prove to be an invaluable source of data about an issue when no official monitoring or statistics exist to document it. Consider “Mediterranean sea, grave of migrants” — an investigation of unprecedented scale into the deaths of migrants seeking refuge in Europe by way of the Mediterranean Sea. One of the main sources of data for this project is a handpicked collection of news articles going back as far as 1988, maintained by a single journalist. In the absence of any comprehensive official monitoring or statistics about these tragic events, a group of journalists has taken it upon itself to build a comprehensive database of these deaths and their circumstances, aiming to support and improve policy-making on the treatment of undocumented migrants in Europe. This gestures toward another essential function that data journalism can play in society.




Social media data serves as another important source of insight into culture and society. Whereas journalists typically use social media to identify documents and human sources, as well as to communicate with others, the nonprofit investigative outlet ProPublica turned to the microblogging service Sina Weibo — otherwise known as “China’s Twitter” — to give greater insight into censorship in China by analyzing collections of images shared on the platform. Over a period of five months, ProPublica monitored 100 accounts that had previously been censored, regularly checking which images posted from those accounts had been deleted. The result is an interactive application that gives “a window into the Chinese elite’s self-image and its fears, as well as a lens through which to understand China’s vast system of censorship.”
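The monitoring approach described here — repeatedly snapshotting a set of accounts and diffing each snapshot against the previous one to detect posts that have disappeared — can be sketched generically. Everything below (the function name, the account names, the post IDs) is a hypothetical illustration of the poll-and-diff idea, not ProPublica’s actual tooling or any real Weibo API:

```python
# Generic deletion-monitoring sketch: compare two snapshots of the same
# accounts and report post IDs that were visible before but are gone now.

def find_deletions(previous, current):
    """Return, per account, the post IDs present in the previous
    snapshot but missing from the current one (likely deletions)."""
    return {
        account: sorted(set(prev_ids) - set(current.get(account, [])))
        for account, prev_ids in previous.items()
    }

# Hypothetical snapshots: account name -> IDs of posts visible at poll time.
snapshot_1 = {"account_a": [101, 102, 103], "account_b": [201, 202]}
snapshot_2 = {"account_a": [101, 103], "account_b": [201, 202, 203]}

deleted = find_deletions(snapshot_1, snapshot_2)
print(deleted)  # {'account_a': [102], 'account_b': []}
```

In a real pipeline, each snapshot would come from a scheduled scrape, and flagged IDs would be checked by hand, since posts can vanish for reasons other than censorship (user deletion, account suspension).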




Another finalist that shows the potential of social media in emergency situations is “The Westgate attacks: A story of terrorism, citizen journalism, and Twitter”. A group of students at the University of Amsterdam used the collection of tweets around the unfolding of this catastrophe to better understand both the event itself and the role of Twitter in mediating such incidents. The result is a series of visualizations which allow the user to explore different layers of the attacks, from the people involved to the context in which it played out.


By far the most ambitious, complex, and innovative shortlisted entry in terms of data sources is the Japan Broadcasting Corporation’s special program “Disaster Big Data”. This massive project gathered data (from government, business, and social media) on everything from driving records collected through car navigation systems; to location information from mobile phones; to tweets; to police vehicle sensor data; to business transaction data — all to better understand the impact of the major earthquake and tsunami that hit Japan in March 2011. Although visualizing “big data” is a challenging task — and there is room for improvement in the visual presentation of the results — this data collection and analysis exercise, unprecedented in scale for disaster situations, has influenced how governments, businesses, and medical institutions think about disaster prevention systems in Japan. It should certainly serve as a source of inspiration for where media outlets (and governments, for that matter) can turn for data in disaster situations.


Finally, in terms of giving voice to neglected topics and groups of people, two additional stories cannot go unmentioned. The first one is the Center for Public Integrity’s “Breathless and burdened: Dying from black lung, buried by law and medicine”, a story of the denial of benefits and medical care for sick miners from central Appalachia. This topic was largely overlooked before the investigation — the only publicly available information being judges’ opinions on court cases related to benefit claims. The second is the International Consortium of Investigative Journalists (ICIJ)’s story about the use of tax havens by China’s wealthiest and the “red nobility,” part of a much larger project on offshore leaks.




While there are countless other entries to this year’s Data Journalism Awards that excel in terms of presentation and storytelling, hopefully some of the projects highlighted in this post will help to inspire and stimulate journalists’ imaginations concerning what counts as a source of data and how to creatively broaden the body of evidence that informs their stories.



Persuading with Data

An HBR Insight Center




That Mad Men Computer, Explained by HBR in 1969
Decisions Don’t Start with Data
How Data Visualization Answered One of Retail’s Most Vexing Questions
The Case for the 5-Second Interactive




Published on May 20, 2014 09:00

The Peril of Untrained Entry-Level Employees

Just-released findings of the Accenture 2014 College Graduate Employment Survey offer good news and bad news for employers of entry-level talent. First the bad news: most of those employers aren’t doing much to provide their new hires with the training and support they need to get their careers off to a strong start. More than half (52 percent) of respondents who graduated in 2012 and 2013 and managed to find jobs tell us they did not receive any formal training in those positions.


The good news is that, as young employees increasingly value career-relevant skills, and as awareness spreads more quickly of which employers provide good development training, there is a new opportunity for some employers to shine. By building a distinctive program for training new hires, and getting the word out about it, an organization today can gain an edge in the competition for top talent.


Why do so many employers fail to provide formal training? There are all kinds of reasons. Training can be expensive and, as with many investments in highly mobile workers, the ROI is not always clear. In a time of high unemployment, it might be tempting to place the whole burden on employees to gain the skills they need, and to quickly replace those who don’t. Some managers might even believe the best test of talent is to put people into unfamiliar settings and see if they can figure things out for themselves.


But bringing on new hires with the assumption that some will wash out is hardly an efficient – or responsible – way to build a great team. Usually, grads arrive in workplaces with current technical knowledge they are eager to apply, but have a lot to learn about other aspects of succeeding in their new organizations. (This is why so many first jobs feature that seeming paradox by which new employees feel unchallenged by their tasks, while their managers perceive them to be overwhelmed by the responsibilities and professional demands of the “real world.”) Employers should recognize that there are certain skills that college graduates don’t have when they walk through the door but that are readily trainable.


Graduates themselves are increasingly attuned to the value of work-relevant training. For example, compared with past years of the survey, we find the percentage of students choosing majors based on work prospects rising substantially (75 percent of 2014 graduates say they took into account the availability of jobs in their field before deciding their major, compared to 70 percent of 2013 graduates and 65 percent of 2012 graduates). Expect the best of them also to consider the availability of training as they choose among competing job offers. Indeed, eight out of 10 graduating seniors told us that they expect formal training from their employers.


Of course, the content of entry-level training matters, too. Even employers with programs in place may need to rethink what they are designed to teach and how well they are actually serving new employees. In broad terms, you should consider adding training to:



Put key productivity tools in their hands. Most recent grads are quick to embrace solutions that allow them to work remotely – many of which involve industry-specific software they have not encountered in school. Recognize that they do not want to be constrained by the walls of the office and become more valuable when they can work more autonomously, but also that they need detailed training to do their jobs on the go.
Build on what they already know. Take the example of social media. Given that millennial employees are truly digital natives, you might not assume – and neither will they – that you have anything to teach them about it. But they do need to be coached on what the company expects of them as its “brand ambassadors,” and how not to run afoul of communications policies.
Fill in the bigger picture. Even the greenest hire performing the most clearly defined task will do it better if she understands the business of your business. Training people early to see how their work fits into the larger scheme will lead to greater collaboration, more sharing of ideas, and deeper commitment to the mission of the organization.
Lay the groundwork for future contributions. Here, a good example might be early training in data analysis and visualization. Eventually, every profession will be touched by Big Data and the need to glean insights from consumer, customer, and employee behavior. If there is a future area of strength you know the business will need in general, plant the seeds in entry-level training, whether a trainee’s first job requires it or not.

By improving the processes for cultivating your newest and least experienced workers, you can remove much of the risk in your talent pipeline. Currently, nearly half of those who graduated within the last two years (46 percent) say they consider themselves “underemployed” and working in jobs that do not require their college degree, and more than half (56 percent) report they do not expect to stay at their first job more than two years – or that they have already left their first job. Such attrition represents an unnecessary setback for the employers who will have to go through the expensive process of finding and attracting promising young talent again. To “future proof” your business, you need to fill and maintain a pipeline of people steadily gaining experience and advancing toward leadership roles.


Finally, if you do invest to make your training of new hires better than average, be sure to figure that into your discussions with candidates. For top candidates, a company offering a strong talent development program and showing real dedication to new hires’ career advancement is very positively differentiated.


Emphasize your commitment to training in your corporate social media activity, too – and pay attention to how it is being talked about. If leaving new employees to sink or swim was ever a good option, it surely isn’t now in an age when new grads communicate their experience so richly and transparently to their networks. Neglect the training they need and want, and the word will get around quickly.




Published on May 20, 2014 08:00

Strategy’s No Good Unless You End Up Somewhere New

Innovation isn’t always strategic, but strategy-making sure as heck had better be innovative. By definition, strategy is about allocating resources today to secure a better tomorrow. It is important, however, to understand the nuances and complexities of innovation as they relate to strategy. Here is my list of the four most important:


Not every industry is equally dynamic. Some industries are faster paced than others. The smartphone industry has gone through several disruptive changes in just a decade, whereas the steel industry’s technology shifts took place over a hundred-year period. Managerial “best practices” from a fast-paced industry don’t necessarily apply to everyone, everywhere.


Not all innovation is created equal. I put all innovations into two broad categories: linear innovations (which are consistent with the firm’s current business model) and non-linear innovations (not perfectly continuous with the current business model). But we need to add another layer of complexity to those categories: innovations can be incremental or radical. To illustrate, consider the Tuck School of Business, my employer. A linear, incremental innovation would be if professors from different disciplines co-taught a course. A radical (but still linear) innovation would be a major overhaul of the two-year MBA curriculum. If we were to fundamentally change our business model by offering an online MBA, that would be non-linear. Not only is there no well-understood process for creating such a program, but doing so would require Tuck to build an entirely new set of capabilities.


Execution is essential to successful innovation and strategy. Innovation is about commercializing creativity. If a firm is not making money with an idea, there is no innovation. The real challenge lies in the long, frustrating journey of converting an idea into a fully scaled, profitable business. Moreover, this isn’t always about coming up with new products and services. We tend to picture a shiny new product offering when we think of “strategic innovation,” but that’s too limited. Apple has disrupted several industries using new business models, not new technologies. And Toyota changed the auto industry forever with a systemic process innovation (the lean production system).


Finally, innovation (and hence strategy) is not just the CEO’s job. There are two significant problems if the firm’s leader is the only one worried about strategy. First, strategy is about adapting to change – and the people at the bottom of the organization are closer to customers and the competitive environment than the CEO. Second, the company needs to selectively forget the past as it invents the future. The CEO will have the most difficulty forgetting, especially if the CEO was responsible for creating the status quo. The people at the bottom of the organization are not only closest to the future but also have the least vested in the firm’s history.



When Innovation Is Strategy

An HBR Insight Center




Is It Better to Be Strategic or Opportunistic?
Your Business Doesn’t Always Need to Change
Should Big Companies Give Up on Innovation?
How GE Applies Lean Startup Practices




Published on May 20, 2014 07:00

Why America Is Losing Its Entrepreneurial Edge

The U.S. is one of the few nations on earth where private, for-profit business formation is seen as a quasi-heroic act. The resulting entrepreneurial culture has captured the world’s imagination and driven the nation to great prosperity. Yet now it is clearly faltering.


In a new paper that’s already generated much discussion, economists Ian Hathaway of Ennsyte Economics and Robert Litan of the Brookings Institution document four decades of “Declining Business Dynamism in the United States.” Looking at data from all fifty states and all metropolitan areas, Hathaway and Litan conclude there’s been a secular decline in business formation throughout the country, with a concurrent increase in business dissolution. The rate of business formation in 2011 was almost half of what it was in 1978, with the rate of dissolution somewhat higher than in the past couple of decades. Restated another way, the implications are clearer: “Whatever the reason, older and larger businesses are doing better relative to younger and smaller ones.” Deep, disruptive economic change is all around us, but the data indicates that the national response has not been, contrary to our myths and history, one of increased entrepreneurship.


Hathaway and Litan stay close to the data in this work and stop short of speculating about causes of this trend. So allow me. While there are numerous factors in such a massive shift away from business formation, one of the most powerful has to be the consolidation of multiple economic sectors toward a handful of firms with hegemonic power over their industries. Much of this is driven by the needs of the financial sector, which itself has consolidated massively. This paper by the Richmond Fed shows how, from 1960 to 2005, the U.S. financial services sector went from 13,000 independent banks to half that number, while the top ten banks grew from 20% market share to 60%. As of 2013, the top ten banks had 70% of the market.


Consolidation of the financial sector has led to similar dynamics in other industries. In pharmaceuticals, the largest company, Pfizer, is the result of decades of mergers. The current corporate entity is composed of firms that used to be called King Pharmaceuticals, Wyeth, American Cyanamid, Lederle, Pharmacia, Upjohn, Searle, SUGEN, Warner-Lambert, Parke-Davis, and others. In chemicals, energy, technology, beer, and more, you can see a multi-decade trend toward the consolidation of behemoths. In the guitar business, too.


How does this consolidation impact entrepreneurs? Giant firms seek the services of similarly large vendors. New, small entrants into the market will be at pains to form relationships with such firms, and the power imbalance is effectively a monopsony — sell to us at our price, on our invoice terms, or get lost. Trying to sell into a world of enormous corporate cartels is considerably more difficult than it was forty years ago, when every sector in America was smaller, more diverse and more dynamic.


Also, consider the need for new products and services in a country full of concentrated industries. When a company had dozens of potential competitors across various geographic regions, there was an incentive to innovate before the other guy did. In a concentrated market, competitors are few, and growth may come more from mergers and government lobbying than from new product lines. For entrepreneurs, why start something new in such an environment? The current tech boom might serve as a counterexample, but consider that for most venture-backed companies, the ultimate exit plan is the sale of the firm to an existing behemoth, not continued independent operation.


The American entrepreneurial mythos arose in an environment that was perfect for supporting new businesses: rapid growth, technological change, constant competition, limited government intervention. We need to find ways to bring that environment back.




Published on May 20, 2014 06:00
