Marina Gorbis's Blog, page 1371
August 26, 2014
Embargoes Work – Just Not the Way We’d Hope
In 1986, Mikhail Gorbachev became the only Soviet leader to grace the pages of Harvard Business Review when a speech he made to members of the U.S.–U.S.S.R. Trade and Economic Council was reprinted under the unassuming title, “Remarks on US-USSR Trade.”
It had been 14 years since President Nixon’s historic trip to Moscow marked the beginnings of détente between the two superpowers. And yet Gorbachev was still able to say, “The volume of U.S. imports to the U.S.S.R. is roughly equal to what your country imports from the Republic of the Ivory Coast.”
Such was the remarkable power of the trade embargo the U.S. had imposed 37 years earlier.
Perhaps no embargo has received greater scrutiny than the 1949 Export Control Act, which was imposed by the U.S. on the Soviet bloc at the dawn of the Cold War. As the U.S. and EU lob economic sanctions back and forth with Russia today, what do we know about the effect they may have? We took a look through our archive to find out how that first one went.
Go back to the start, and the first lesson we find is that, given sufficient political will, unilateral trade embargoes do stop trade, at least bilaterally.
Just as President Obama’s sanctions have been, the Soviet embargo was imposed in stages. In effect it had already begun in 1948, when the federal government started systematically rejecting license applications for shipments of goods to Eastern Europe. The act extended the requirement for export licenses to practically everything shipped behind the Iron Curtain, licenses the federal government could simply decline to grant. The idea was to bring about a virtual cessation of all trade between East and West.
How did that work out? Three years later, Nicolas Spulber, an associate professor at MIT’s Center for Strategic Studies, published a detailed analysis in a 1952 HBR article. He finds the embargo working perfectly, and just as the American public thought it was – at least between the U.S. and the Soviets. From 1948 through 1951, U.S. imports from the Eastern European countries dropped 95%, from $120.8 million to a mere $2.6 million. And the U.S.’s relative importance to the Soviets as a major trading partner plunged to zero.
But it may not surprise modern readers to learn that unilateral action by the U.S. didn’t have much effect on the rest of the world. Spulber finds that Western Europe imported slightly more in 1951 than it had in 1948, despite a U.S. threat to withhold post-war aid to any non-cooperative ally. Undeterred, Spulber finds Western Europe (particularly neighboring Scandinavia) continuing to import iron, steel, copper, lead, zinc, and tin from the Eastern Bloc, and to export heavy machinery, railway vehicles, motor vehicles, and even ships to it. And that was just the legal trade. Predictably, contraband flourished, just as it did during Prohibition. When the U.S. extended its embargo to China, Spulber finds the effort entirely futile, since trade was simply rerouted from Western Europe through Macao and Hong Kong.
The second lesson is that, as a tool of isolation, embargoes are frighteningly effective, isolating the embargoer as much as the embargoee. This is what Harvard Law professor Harold Berman found when he examined the continuing effects of the embargo in a 1964 article entitled “A Reappraisal of U.S.–U.S.S.R. Trade Policy.” Fifteen years on, the embargo was still in place (if not quite so firmly, as the federal government was beginning to grant more licenses). But feelings toward it were becoming distinctly ambivalent.
In the previous year, the Soviets had purchased $500 million worth of grain from Canada, in the largest wheat sale in history. They’d offered to purchase $250 million worth from the U.S., as well – an amount, Berman says, that would have gone no small way toward addressing the country’s troubling balance of payments deficit.
Dithering on the U.S. side cut the actual deal in half, and the sale touched off a nationwide debate over whether the U.S. should be trading with the East at all.
“The springboard for the debate,” Berman says, “is the realization that our allies are in fact trading rather vigorously with the East, and there is virtually nothing we can do about it.” — a shocking measure of how well the embargo served to isolate not the Soviets, but the U.S., from the world economy. U.S. trade with the U.S.S.R. in 1962 amounted to $30 million out of a total import-export trade of $36 billion. “Thus it appears that our friends in Europe and Japan – including many subsidiaries of U.S. companies – are deriving considerable economic advantages from trade with Communist countries while we are biting our fingernails,” Berman concludes in exasperation.
The controversy was still raging two years later, as Wellesley economics professor Marshall Goldman recounts in a 1966 article, when the Rumanians approached several U.S. companies with an offer to buy a $10 million catalytic petroleum-cracking plant and a $50 million synthetic rubber factory. These would have been the first large postwar commercial contacts between the U.S. and Rumania. The Department of Defense, which had originally opposed the deal, reversed course when it finally occurred to government officials that if we did not sell the Rumanians this technology, others in Western Europe would. The Rumanians awarded the contract to Firestone.
It is a measure of the state of U.S. public opinion that Goodyear then claimed it had been offered the contract but had turned it down because, it said, “even to a dedicated profit-making organization, some things are more important than dollars.” The conservative Young Americans for Freedom picketed Firestone stores and threatened a massive public demonstration against the Rumanian venture. That was enough to prompt Firestone to withdraw. (Universal Oil Products did ultimately build a $22.5 million petroleum refinery in Rumania, perhaps aided by the fact that the company didn’t sell products to the general public.)
Contrast this with the European view of the time, which Berman recounts. “Virtually all Western countries except the United States believe in trade – except in military goods – with the Communist countries [emphasis in the original]. They believe it is ultimately to their own, and Western, political advantage,” he says. It’s important, they argued, as an avenue of mutual communication. It makes Communist countries more dependent on the West. It contributes to a higher standard of living in the East, which is good politically since (and here he quotes British Prime Minister Alec Douglas-Home) “a fat Communist is a less aggressive Communist.” And “the stability of international relations requires, as a matter of principle, that countries refrain from waging economic warfare with other countries with which they are at peace.”
That’s the flip side of the argument Gorbachev makes 20 years later when he addresses the U.S.-U.S.S.R. trade council, three weeks after his fateful summit with President Reagan in Geneva. If the Europeans were arguing that they wouldn’t engage in a trade war with a country with which they were at peace, Gorbachev is arguing for an end to embargoes and the normalization of economic relations between the U.S. and the U.S.S.R. as a path toward peace.
“Many U.S. businessmen are known for their well-developed spirit of enterprise, a knack for innovation, and an ability to identify untapped growth opportunities,” he says ingratiatingly. “I am convinced that today the best, genuinely promising possibilities of that kind are to be found not in the pursuit of destruction and death but in the quest for peace and in a joint effort for the sake of all countries and peoples.”
Ever since the spectacular success of the Marshall Plan in using mutual trade pacts to end centuries of war between France and Germany, businesses and governments have put their faith in international trade as a stabilizing force. That view, argue former Clinton White House pollster Nicolas Checa and colleagues in the 2003 article, “The New World Disorder,” is based on two assumptions – “first, that a healthy economy and sound financial system make for political stability, and second, that countries in business together do not fight each other….As people liked to say, no two countries with McDonald’s had ever gone to war with each other.”
That of course is the ultimate irony of embargoes — they’re a tactic aimed at avoiding a fight by not doing business together. Certainly a trade war is better than a nuclear war. But in resorting to a trade war, we give up the only tool that’s ever been known to put an enduring end to actual war (and in this context, the shuttering of three Moscow McDonald’s – including the first one to open in 1990, at the end of the Cold War – is disturbingly symbolic).
Ultimately, the problem with embargoes isn’t that they don’t work — as many assume today — but that they do, all too well. Back in 1986, 40 years after the end of World War II, Gorbachev described the U.S. and the U.S.S.R. as “economic giants fully able to live and develop without any trade with each other whatsoever.”
“I regard this as no economic tragedy at all,” he added sardonically. “Both of us will survive without each other, particularly since there is no lack of trade partners in the world today. But is it normal from a political standpoint?”
In hindsight, he might have had a different answer. But here’s what he said at the time:
“My answer is definitely and emphatically no. In our dangerous world, we simply cannot afford to neglect — nor have we the right to do so — such stabilizing factors in relations as trade and economic, scientific, and technological ties. If we are to have genuinely stable and enduring relationships capable of ensuring a lasting peace, they should be based, among other things, on well-developed business relations.”



August 25, 2014
What Can a Robot Bellhop Do That a Human Can’t?
Years ago, I worked briefly as a hotel bellhop, greeting guests, bringing luggage up to their rooms, and helping them haul it back down again when they checked out. It was social and dexterous work — hoisting skis, snowboards, bags of all sizes; navigating narrow hallways; making small talk and angling for a tip. In other words, the kind of thing that is supposedly hard to automate.
So I was intrigued to read last week about a robotic “butler” being tested at Starwood Hotels’ Aloft line, at its Cupertino location. The “Botlr” can deliver toothbrushes, razors, and similar items to guests’ rooms, eliminating the need for human staff to do so.
I reached out to Aloft, and spoke with Brian McGuinness, Aloft’s global brand leader, to hear about the motivation behind the pilot. I expected the usual reasons for automation — cost-savings or increased precision or reliability. Instead, he told me that Aloft is betting that its customers would rather interface with a robot than a person, and that they’ll value proximity to the next big technology. And, of course, that it will free up staff to do more “human” work.
An edited transcript of our conversation is below.
Tell me about the origin of the idea to use a robot in the hotel.
Something like five years ago we created the Aloft brand to appeal to the tech-savvy, the early adopters, the next-generation traveler — the people who wait in line for the next smartphone to be released.
One of our key locations is Cupertino, so, essentially on the Apple campus. And part of our facility there is testing next-generation technology [like] Apple TV in guest rooms [and] keyless entry — the ability to use your smartphone to enter your room without having to go to the front desk to pick up the plastic key.
And Savioke, which is a robotics company, was reading about us and our push for technology around what the future of hotels looks like. They contacted us, and said, “We’re working on a robot, would you be interested?” And we said, “Absolutely.”
We’re starting a formal pilot this week. Over the last four to five months, we’ve worked [with] them on the design of the robot, the functionality of the robot, and really what the overall purpose was. And for us, it was really just to augment the team that’s there and our talent. So, if you call down and you need a razor or some toothpaste or some shaving cream, or a charger for your phone, how could we seamlessly get that up to the customer in an expeditious way, that we hadn’t been able to do in the past?
What will the Botlr be doing — and how will it know where it’s going?
With our help, [Savioke has] mapped the hotel. The robot is essentially going from the front desk, navigating through the lobby, onto the elevator — it actually has a two-way communication with the elevator system. So, calling the elevator, the elevator is telling it [that] it has arrived and the door is open. The Botlr is boarding the car, going up to let’s say, the fourth floor. The elevator says, “You are now on the fourth floor, the doors are now open,” the Botlr exits, goes to the room. Because of the mapping technology, it calls the guest room and says, “I’m out here, I have your delivery.” [The] customer opens the door, there’s a steel container or compartment, the lid pops open, the customer retrieves their item. The lid actually closes on its own, they get finished, thank you very much, and the Botlr returns and navigates back to the front desk. So, that’s where we are today. It has done many runs to guest rooms from the front desk and back.
What’s been the reaction from the staff?
It’s a relief. This isn’t going to replace associates or our talent. We don’t have doormen or bellhops. Just the front desk agent. Essentially, this is doing the tasks that they would have to leave the front desk and run upstairs [for]. Sending the Botlr on that journey simply means that they’re at the front desk serving a customer in a better way. You’ve all seen those clocks in windows or at desks that say “I’ll be back in five minutes.” We’re essentially negating that issue.
And quite frankly, it’s better work. They’re working more closely with our customers on personalizing the guests’ stay.
What do you get out of it, in terms of cost or reliability, or what do customers get out of it?
A child’s wonder, I guess. You know, from R2D2 to Wall-E, to Rosie from the Jetsons. It’s just cool. And it’s really neat, and as we look at our associates and our talent and our guests in our hotel when they see it go by, it simply brings a big smile to everybody’s face.
I can imagine at any point when I check into a hotel, a way in which a robot or computer system could be the way that I interface with the hotel. Why this piece rather than the check-in process, or the checkout process, or something like that?
We’re looking at all processes. So, if we talk about keyless — which we have rolling out to our hotels now — which is your ability to make your reservation as normal, land in a city, get a text for your room number, and get a digital key to your smartphone that’s going to unlock your door. So you’re essentially not going to have to check in. That to me is an amazing opportunity for our customers to experience.
That being said, that doesn’t mean the talent goes away. It means that the talent’s working on making certain that I have a non-smoking king-size high floor away from the elevator. And then I need dinner reservations. So the ability for technology to augment the experience at the hotel and to provide a guest experience that’s differentiated is huge for us.
Tell me what success looks like for Botlr. What would make you think, “This really worked, let’s see if we can bring this robot to some of our other hotels?”
Success to me would be the engagement of the customer and the traveler in our hotel and whether they see a value in it, whether they understand the fun nature of it, and if it’s helping with quicker service. And I think that’s about it. As long as the Botlr is moving around the hotel freely, it’s bringing smiles to customers’ faces, then I consider that a success.



You Can’t Do Strategy Without Input from Sales
One of the best books ever written about selling is David Dorsey’s The Force. Dorsey turns a year in a Xerox sales district in Cleveland into a riveting drama about people, accounts, the operatic highs and lows of the sales cycle, and the triumph of making quota. Dorsey focuses on Fred Thomas and his sales team and the sometimes strange but effective motivational techniques of his district manager, Frank Pacetta. It’s a great ethnographic study of B2B selling for capital goods.
But even as Thomas and Pacetta make their sales, Xerox is missing the larger strategic point, although the facts are staring them in the face in every office where Thomas and his team make sales calls: more and more copies are being handled by printers linked to personal computers, not by copiers. Thomas is doing his best to maintain Xerox’s share in copiers. But the disconnect between sales and strategy (in this case, a lack of strategy to deal with a technology that is redefining the market and customer behavior) is the hidden subtext of the book.
Even Dorsey, as great an observer as he is, misses it. Instead, he explains that by the mid-1990s Xerox competed with Canon, Kodak, Minolta, Ricoh, Savin, and other copier manufacturers, without mentioning HP, Brother, and other makers of computer printers that were eating Xerox’s lunch. It makes Dorsey’s summation of his story a non sequitur: “A once-thriving American business loses share to the Pacific Rim, gets scared, adopts TQM practices, raises productivity, and begins to win back business. The way the Cleveland district sells copiers illustrates . . . this comeback.” No. How could it be when selling, however clever and creative, is divorced from the main strategic reality facing the firm?
Twenty years later, the real lesson of the Xerox story may seem obvious. But this disconnect between strategy and sales is costly, dangerous – and pervasive. Selling is, by far, the most expensive part of implementation for most firms. Yet relatively few strategies—some studies indicate less than 10%—carry through to successful execution, and, on average, companies deliver only 50-60% of the financial performance that their strategies and sales forecasts promise. That’s a lot of wasted effort and money. Similarly, a recent survey of more than 1,800 executives across industries found that their biggest challenges are ensuring that day-to-day decisions are in line with strategy and allocating resources in a way that supports strategy.
What’s the problem? One big problem is that in business schools, daily practice, and strategic planning, sales and strategy are treated, as in Dorsey’s book, as separate worlds. In academia, there is remarkably little written about how to link strategy with the nitty-gritty of field execution. Few of the many, many books and articles on strategy formulation have much, if anything, to say about the role(s) of a company’s sales channels in executing strategy. In fact, sales advice, if it’s even discussed, usually revolves around a combination of “reorganizing” and “incentives.” But there’s no one best way to organize, and sales reorganizations are always costly and risky because they disrupt established call patterns and client relationships. And appropriate incentives are a necessary but not sufficient condition for aligning field behaviors with company goals. You ultimately can’t substitute money for management.
What does exist in practice is a vast trade lore (most of it anecdotal but some grounded in good research), mainly from consultants and trainers who believe in a particular selling approach. But they also treat selling in isolation from strategy, and so the focus of much sales training can have a perverse effect: people work harder but not necessarily smarter.
Finally, the planning process in firms generates a disconnect. About two-thirds of companies treat strategic planning as a periodic event, typically as part of the annual capital-budgeting process. Companies tend to do plans by P&L unit, even when Sales (for good reasons) sells across those units. The average corporate planning process takes 4-5 months per year. While this is going on, the market is doing what the market will do, and sales must respond issue by issue and account by account. In other words, even if the output of planning is a great strategy (a big if), the process itself often makes it irrelevant to sales, which is responsible for executing strategy where it counts most: in daily interactions with customers.
Linking sales efforts with strategy is vital for profitable growth and must be a two-way street. In any business, value is created or destroyed in the market with customers, not in planning sessions or training seminars. Without credible sales input, any strategy runs the risk of dealing with yesterday’s market realities, not today’s. Conversely, daily selling efforts—successful or unsuccessful, smart or stupid—constrain and redirect strategies in often unintended ways. Selling in your firm can’t generate sustained returns if it’s not linked to your strategy.



Recruiting Data Scientists to Do Social Good
We know that data scientists are a hot commodity. Businesses can’t get enough of them. That’s great for tech companies that attract talent with stock and benefits, but less so for social initiatives and non-governmental organizations (NGOs) that could use their talent too. Short of asking nonprofits to drain their coffers to make expensive hires, can we find a way to staff their projects? I think so, if we can create a better mechanism to connect people to opportunities.
The going rate for data scientists has obviously soared. It’s a far cry from the labor market in place when I first got hooked on data some 20 years ago. When I arrived in Boulder, Colorado, as a 21-year-old computer science exchange student from East Germany, I had one overstuffed suitcase to my name and a place on the dorm room waiting list. Email was virtually unheard of, and I certainly didn’t envision the day I’d be able to Skype with my family back home. My enrollment in a course on “artificial neuronal networks” (taught by a man who would later become Amazon’s first Chief Scientist, as well as my good friend and mentor) made me a particularly strange kind of geek.
The dramatic change in compensation since then is not the only reason that social organizations struggle to get good data scientists. To attract them and keep them engaged, a workplace also has to offer several less tangible things: a community of data scientists to work with and bounce ideas off; an adequate computing environment; ready access to data (without red tape); and access to the details of how the data was collected. Unfortunately, few NGOs can provide any, never mind all, of these things.
Where does this leave social organizations? For the most part, high and dry. Yet we also know that data scientists, like the smartest people in every field, want a sense of purpose. Many hope to apply their minds beyond financial modeling, product recommendations, ad placement, or even disease analysis to do more to make the world a better place. Most are in this game for the fun and challenge. They love hard puzzles and massive data sets – and there is no shortage of these in the social realm. And thus we’re seeing a grassroots movement of data scientists volunteering, after their day jobs, to work on projects in the public interest.
How do they connect with these projects? Occasionally it’s through their for-profit employers. For example, SumAll, a business analytics software vendor, has established its own nonprofit arm so it can “translate the company’s philosophy into tangible social impact.” Sometimes connections are made by other parties. Since his involvement with the Obama campaign, Rayid Ghani has run a Fellowship on Data Science for Social Good that brings dozens of data scientists to the University of Chicago to work on analytics projects with nonprofits, local governments, and federal agencies.
Often the connections are made through “challenges” such as the KDD Cup, an annual competition hosted by SIGKDD. (Created by the Association for Computing Machinery, the acronym stands for “Special Interest Group on Knowledge Discovery and Data Mining.”) This year’s KDD challenge was to help the NYC-based DonorsChoose.org, which connects teachers who need specific classroom materials with willing donors. The nonprofit wanted a data-based way to discover what makes teachers’ proposals likeliest to attract funding. The top three solutions, to be announced this week, will win nominal prize money ($2,000 each, typically donated to charity) – and the satisfaction of being the best of nearly 500 teams competing from around the world. The same motivations drive entrants to UN Global Pulse challenges, which are designed to bring data-driven evidence to bear on problems affecting developing nations.
Hackathons are another format for setting data scientists loose on data problems brought to the table by social organizations. Along these lines, DataKind hosts what it calls “DataDives” – weekend events where pro bono data scientists work collaboratively with organizations to understand and solve problems those organizations can’t tackle with their own limited data science resources.
Finally, conferences themselves help make connections, as they always have. This year in particular, the flagship conference KDD 2014 has the theme of “Data Mining for Social Good.” With help from Bloomberg’s philanthropic department, it features speakers from nonprofits taking the stage to describe tough problems they face and invite ideas from the floor. It also extends to NGOs, free of charge, a matching service traditionally provided to corporate sponsors (identifying data scientists with sought-after skills and providing space to interview them). The hope is to make rich connections between the NGOs and the 2,000+ data scientists attending from both academia and industry.
Valuable and inspiring work is being done as a result of all these activities. They’re a great start. But to be honest, they target only a tiny fraction of interesting problems, and collectively deploy nowhere near the full capacity of the data science community to do good.
So this is the big question: How can we start connecting socially-minded data experts to important data problems at scale?
I’d suggest that what we really need is a year-round virtual marketplace (perhaps modeled after DonorsChoose) where data scientists can find NGOs whose needs are well-matched to the skills and time they can donate. In fact, some colleagues and I are already working with NYC-based Data Scientists LLC to test such a system.
Will it see the light of day? Will the time come when any data scientist with time to spare can immediately find a socially-valuable way to spend it, and when important social initiatives, focused on anything from local schools to global climate change, can draw easily on a reservoir of expert talent? It might sound far-fetched. But if you came of age, as I did, in an era before email, you’ve long since concluded that such amazing things are possible.



Employers Aren’t Just Whining – the “Skills Gap” Is Real
Every year, the Manpower Group, a human resources consultancy, conducts a worldwide “Talent Shortage Survey.” Last year, 35% of 38,000 employers reported difficulty filling jobs due to lack of available talent; in the U.S., 39% of employers did. But the idea of a “skills gap” as identified in this and other surveys has been widely criticized. Peter Cappelli asks whether these studies are just a sign of “employer whining”; Paul Krugman calls the skills gap a “zombie idea” that “should have been killed by evidence, but refuses to die.” The New York Times asserts that it is “mostly a corporate fiction, based in part on self-interest and a misreading of government data.” According to the Times, the survey responses are an effort by executives to get “the government to take on more of the costs of training workers.”
Really? A worldwide scheme by thousands of business managers to manipulate public opinion seems far-fetched. Perhaps the simpler explanation is the better one: many employers might actually have difficulty hiring skilled workers. The critics cite economic evidence to argue that there are no major shortages of skilled workers. But a closer look shows that their evidence is mostly irrelevant. The issue is confusing because the skills required to work with new technologies are hard to measure. They are even harder to manage. Understanding this controversy sheds some light on what employers and government need to do to deal with a very real problem.
This issue has become controversial because people mean different things by “skills gap.” Some public officials have sought to blame persistent unemployment on skill shortages. I am not suggesting any major link between the supply of skilled workers and today’s unemployment; there is little evidence to support such an interpretation. Indeed, employers reported difficulty hiring skilled workers before the recession. This illustrates one source of confusion in the debate over the existence of a skills gap: distinguishing between the short and long term. Today’s unemployment is largely a cyclical matter, caused by the recession and best addressed by macroeconomic policy. Yet although skills are not a major contributor to today’s unemployment, the longer-term issue of worker skills is important both for managers and for policy.
Nor is the skills gap primarily a problem of schooling. Peter Cappelli reviews the evidence to conclude that there are not major shortages of workers with basic reading and math skills or of workers with engineering and technical training; if anything, too many workers may be overeducated. Nevertheless, employers still have real difficulties hiring workers with the skills to deal with new technologies.
Why are skills sometimes hard to measure and to manage? Because new technologies frequently require specific new skills that schools don’t teach and that labor markets don’t supply. Since information technologies have radically changed much work over the last couple of decades, employers have had persistent difficulty finding workers who can make the most of these new technologies.
Consider, for example, graphic designers. Until recently, almost all graphic designers designed for print. Then came the Internet and demand grew for web designers. Then came smartphones and demand grew for mobile designers. Designers had to keep up with new technologies and new standards that are still changing rapidly. A few years ago they needed to know Flash; now they need to know HTML5 instead. New specialties emerged such as user-interaction specialists and information architects. At the same time, business models in publishing have changed rapidly.
Graphic arts schools have had difficulty keeping up. Much of what they teach becomes obsolete quickly and most are still oriented to print design in any case. Instead, designers have to learn on the job, so experience matters. But employers can’t easily evaluate prospective new hires just based on years of experience. Not every designer can learn well on the job and often what they learn might be specific to their particular employer.
The labor market for web and mobile designers faces a kind of Catch-22: without certified standard skills, learning on the job matters but employers have a hard time knowing whom to hire and whose experience is valuable; and employees have limited incentives to put time and effort into learning on the job if they are uncertain about the future prospects of the particular version of technology their employer uses. Workers will more likely invest when standardized skills promise them a secure career path with reliably good wages in the future.
Under these conditions, employers do have a hard time finding workers with the latest design skills. When new technologies come into play, simple textbook notions about skills can be misleading for both managers and economists.
For one thing, education does not measure technical skills. A graphic designer with a bachelor’s degree does not necessarily have the skills to work on a web development team. Some economists argue that there is no shortage of employees with the basic skills in reading, writing and math to meet the requirements of today’s jobs. But those aren’t the skills in short supply.
Other critics look at wages for evidence. Times editors tell us “If a business really needed workers, it would pay up.” Gary Burtless at the Brookings Institution puts it more bluntly: “Unless managers have forgotten everything they learned in Econ 101, they should recognize that one way to fill a vacancy is to offer qualified job seekers a compelling reason to take the job” by offering better pay or benefits. Since Burtless finds that the median wage is not increasing, he concludes that there is no shortage of skilled workers.
But that’s not quite right. The wages of the median worker tell us only that the skills of the median worker aren’t in short supply; other workers could still have skills in high demand. Technology doesn’t make all workers’ skills more valuable; some skills become valuable, but others go obsolete. Wages should only go up for those particular groups of workers who have highly demanded skills. Some economists observe wages in major occupational groups or by state or metropolitan area to conclude that there are no major skill shortages. But these broad categories don’t correspond to worker skills either, so this evidence is also not compelling.
To the contrary, there is evidence that select groups of workers have had sustained wage growth, implying persistent skill shortages. Some specific occupations such as nursing do show sustained wage growth and employment growth over a couple of decades. And there is more general evidence of rising pay for skills within many occupations. Because many new skills are learned on the job, not all workers within an occupation acquire them. For example, the average designer, who typically does print design, does not have good web and mobile platform skills. Not surprisingly, the wages of the average designer have not gone up. However, those designers who have acquired the critical skills, often by teaching themselves on the job, command six-figure salaries or rates of $90 to $100 per hour as freelancers. The wages of the top 10% of designers have risen strongly; the wages of the average designer have not. There is a shortage of skilled designers, but it can only be seen in the wages of those designers who have managed to master new technologies.
This trend is more general. We see it in the high pay that software developers in Silicon Valley receive for their specialized skills. And we see it throughout the workforce. Research shows that since the 1980s, the wages of the top 10% of workers have risen sharply relative to the median wage earner after controlling for observable characteristics such as education and experience. Some workers have indeed benefited from skills that are apparently in short supply; it’s just that these skills are not captured by the crude statistical categories that economists have at hand.
And these skills appear to be related to new technology, in particular, to information technologies. The chart shows how the wages of the 90th percentile increased relative to the wages of the 50th percentile in different groups of occupations. The occupational groups are organized in order of declining computer use and the changes are measured from 1982 to 2012. Occupations affected by office computing and the Internet (69% of these workers use computers) and healthcare (55% of these workers use computers) show the greatest relative wage growth for the 90th percentile. Millions of workers within these occupations appear to have valuable specialized skills that are in short supply and have seen their wages grow dramatically.
This evidence shows that we should not be too quick to discard employer claims about hiring skilled talent. Most managers don’t need remedial Econ 101; the overly simple models of Econ 101 just don’t tell us much about real world skills and technology. The evidence highlights instead just how difficult it is to measure worker skills, especially those relating to new technology.
What is hard to measure is often hard to manage. Employers using new technologies need to base hiring decisions not just on education, but also on the non-cognitive skills that allow some people to excel at learning on the job; they need to design pay structures to retain workers who do learn, yet not to encumber employee mobility and knowledge sharing, which are often key to informal learning; and they need to design business models that enable workers to learn effectively on the job (see this example). Policy makers also need to think differently about skills, encouraging, for example, industry certification programs for new skills and partnerships between community colleges and local employers.
Although it is difficult for workers and employers to develop these new skills, this difficulty creates opportunity. Those workers who acquire the latest skills earn good pay; those employers who hire the right workers and train them well can realize the competitive advantages that come with new technologies.



Why Women Don’t Apply for Jobs Unless They’re 100% Qualified
You’ve probably heard the following statistic: Men apply for a job when they meet only 60% of the qualifications, but women apply only if they meet 100% of them.
The finding comes from a Hewlett Packard internal report, and has been quoted in Lean In, The Confidence Code and dozens of articles. It’s usually invoked as evidence that women need more confidence. As one Forbes article put it, “Men are confident about their ability at 60%, but women don’t feel confident until they’ve checked off each item on the list.” The advice: women need to have more faith in themselves.
I was skeptical, because the times I had decided not to apply for a job because I didn’t meet all the qualifications, faith in myself wasn’t exactly the issue. I suspected I wasn’t alone.
So I surveyed over a thousand men and women, predominantly American professionals, and asked them, “If you decided not to apply for a job because you didn’t meet all the qualifications, why didn’t you apply?”
According to the self-report of the respondents, the barrier to applying was not lack of confidence. In fact, for both men and women, “I didn’t think I could do the job well” was the least common of all the responses. Only about 10% of women and 12% of men indicated that this was their top reason for not applying.
Men and women also gave the same most common reason for not applying, and it was by far the most popular, twice as common as any of the others, with 41% of women and 46% of men indicating it was their top reason: “I didn’t think they would hire me since I didn’t meet the qualifications, and I didn’t want to waste my time and energy.”
In other words, people who weren’t applying believed they needed the qualifications not to do the job well, but to be hired in the first place. They thought that the required qualifications were…well, required qualifications. They didn’t see the hiring process as one where advocacy, relationships, or a creative approach to framing one’s expertise could overcome not having the skills and experiences outlined in the job qualifications.
What held them back from applying was not a mistaken perception about themselves, but a mistaken perception about the hiring process.
This is critical, because it suggests that if the HP finding speaks to a larger trend, women don’t need to try to find that elusive quality, “confidence”; they just need better information about how hiring processes really work.
This is why, I think, the Hewlett Packard report finding is so often quoted, so eagerly shared amongst women, and so helpful. For those women who have not been applying for jobs because they believe the stated qualifications must be met, the statistic is a wake-up call that not everyone is playing the game that way. When those women know others are giving it a shot even when they don’t meet the job criteria, they feel free to do the same.
Another 22% of women indicated their top reason was, “I didn’t think they would hire me since I didn’t meet the qualifications and I didn’t want to put myself out there if I was likely to fail.” These women also believed the on-paper “rules” about who the job was for, but for them, the cost of applying was the risk of failure – rather than the wasted time and energy. Notably, only 13% of men cited not wanting to try and fail as their top reason. Women may be wise to be more concerned with potential failure; there is some evidence that women’s failures are remembered longer than men’s. But that kind of bias may lead us to become too afraid of failure—avoiding it more than is needed, and in ways that don’t serve our career goals. The gender differences here suggest we need to expand the burgeoning conversation about women’s relationship with failure, and explore how bias, stereotype threat, the dearth of women leaders, and girls’ greater success in school all may contribute to our greater avoidance of failure.
There was a sizable gender difference in the responses for one other reason: 15% of women indicated the top reason they didn’t apply was because “I was following the guidelines about who should apply.” Only 8% of men indicated this as their top answer. Unsurprisingly, given how much girls are socialized to follow the rules, a habit of “following the guidelines” was a more significant barrier to applying for women than men.
All three of these barriers, which together account for 78% of women’s reasons for not applying, have to do with believing that the job qualifications are real requirements, and seeing the hiring process as more by-the-book and true to the on-paper guidelines than it really is. It makes perfect sense that women take written job qualifications more seriously than men, for several reasons:
First, it’s likely that due to bias in some work environments, women do need to meet more of the qualifications to be hired than do their male counterparts. For instance, a McKinsey report found that men are often hired or promoted based on their potential, women for their experience and track record. If women have watched that occur in their workplaces, it makes perfect sense they’d be less likely to apply for a job for which they didn’t meet the qualifications.
Second, girls are strongly socialized to follow the rules and in school are rewarded, again and again, for doing so. In part, girls’ greater success in school (relative to boys) arguably can be attributed to their better rule following. Then in their careers, that rule-following habit has real costs, including when it comes to adhering to the guidelines about “who should apply.”
Third, certifications and degrees have historically played a different role for women than for men. That history can, I think, lead women to see the workplace as more orderly and meritocratic than it really is. As a result we may overestimate the importance of our formal training and qualifications, and underutilize advocacy and networking.
When I went into the work world as a young twenty-something, I was constantly surprised by how often, it seemed, the emperor had no clothes. Major decisions were made and resources were allocated based not on good data or thoughtful reflection, but based on who had built the right relationships and had the chutzpah to propose big plans.
It took me a while to understand that the habits of diligent preparation and doing quality work that I’d learned in school were not the only—or even primary—ingredients I needed to become visible and successful within my organization.
When it comes to applying for jobs, women need to do the same. Of course, it can’t hurt to believe more in ourselves. But in this case, it’s more important that we believe less in what appear to be the rules.



Different Kinds of Cuteness Affect Us in Different Ways
At an on-campus taste test, research participants who used a cute scoop designed to look like a smiling adult female served themselves about 30% more ice cream than those who used a plain scoop, say Gergana Y. Nenkov of Boston College and Maura L. Scott of Florida State University. This and other experiments demonstrate that exposure to cute, whimsical images increases consumers’ indulgent consumption, as long as the particular form of cuteness doesn’t stimulate thoughts of babies; past research has shown that images of babies prompt careful, caretaking behavior.



Why Saving Work for Tomorrow Doesn’t Work
Do you frequently tell yourself that you’ll do better “next time” and then don’t change when the time comes? Do you often decide to do something “later” only to find that it never gets done?
If you answered “yes” to either one of these questions, you’re probably ignoring the fact that your behavior today is a strong indicator of your behavior tomorrow.
You’re not alone. In The Willpower Instinct, Kelly McGonigal shares how, in a research study, participants were much less likely to exert willpower in making healthy choices when they thought they would have another opportunity the following week. Given the option of a fat-free yogurt versus a Mrs. Fields cookie, 83% of those who thought they’d have another opportunity the following week chose the cookie. In addition, 67% thought they would pick yogurt the next time, but only 36% made a different choice. Meanwhile, only 57% of the people who saw this as their only chance indulged.
The same pattern of overoptimism about the future held true in a study about people predicting how much they would exercise in the future. When asked to predict their exercise realistically — and even faced with cold, hard data about their previous exercise patterns — individuals were still overly optimistic that “tomorrow would be different.”
Eating and exercise habits are all well and good, but as an expert in effective time investment, I’ve seen too many individuals procrastinate at work because they think, “I’ll get a lot done later.” Unfortunately, banking on future time rarely delivers productive results. This mindset leads to unconscious self-sabotage because individuals are not taking advantage of the opportunity to get tasks done right now, and when later comes, they find themselves feeling guilty, burned out, and frustrated. They fall back on their habit of putting work off, and the work doesn’t get done.
This pattern of behavior appears on the job when the only thing you accomplish during the day is answering email because you assume you’ll work better later when no one else is in the office. But after everyone’s left at the end of the day, you’re too tired to think straight and just go home without getting anything done. Or it shows up when you choose not to make any progress on a project in the small windows of time available because you’re waiting for an open day to knock it out all at once. That day never comes, leaving you scrambling at the last minute. Or it can spring up when you say “yes” to every meeting invite and leave no time to do actual work. Then you wonder why you feel like you’re always frantically working and never have time to relax.
Unless you make a conscious effort to change your behavior, poor time management today will only lead to poor time management tomorrow. Consider these two approaches to dramatically increase your productivity.
Eliminate future options. If you have a tendency, like many overwhelmed individuals, to tell yourself that you’ll get your important work done later — maybe at night or on the weekend — you increase your chance of procrastination during the day. In truth, you’ll find it difficult to get things done efficiently later because you feel tired and resentful of the fact that you never have any guilt-free downtime. To overcome this psychological loophole, you need to eliminate the option to do something later.
First, challenge yourself to find specified times during your workday to complete your commitments. Look at your project list and estimate approximately how long it will take you to get certain items done. For example, if you have a presentation at the end of the month, determine how long it will take you to gather the information, put together the presentation, review it with your team, and run through it. Then assign specific times in your schedule between now and the presentation for you to complete each piece. This approach of fusing your to-do list with your calendar will help you realize that if you don’t move ahead on key projects, you will run out of time. There’s no option to simply do the work tomorrow because tomorrow has a new set of tasks assigned to it.
In addition, eliminate free time after hours. If you see an open window on your calendar, you’ll be tempted to put off work, knowing there’s an opportunity later — even if that cuts into personal time. Instead, fill that time with personal commitments. This could mean going out to dinner with a friend, spending the evening at your kids’ soccer game, going to the gym, or moving ahead on a side project. By determining what you want to do outside of the office, you motivate yourself to make the best use of your time during the day so that you don’t need to cancel your evening commitments.
Reduce variability in your schedule. If you justify surfing the Internet most of the day because you tell yourself that you’ll work nonstop later, you’re setting yourself up for frustration. When you do attempt to tackle that work, you’ll either feel so guilty about your lack of productivity that it will distract you from the task at hand, or you’ll push yourself so hard that you’ll burn out.
Fortunately, there’s a way to outsmart your mental tricks. Studies done by behavioral economist Howard Rachlin show that smokers told to reduce variability in their smoking behavior — to smoke the same number of cigarettes each day — gradually decreased their overall smoking, even though they were not told to smoke less. By focusing on the fact that if they smoked a pack of cigarettes today, they would need to smoke a pack the next day and the next, they found smoking that pack less appealing.
You can apply the same principle to motivate effective time management. Instead of telling yourself, “It’s OK if I surf the Internet for half the day because I’ll get so much done later this week,” ask yourself this question: “Do I want to surf the Internet for half the day for the rest of my life?” Your answer will probably be, “Of course not. That would be a waste of time.” You can then decide to dedicate that chunk of time to something more productive on a regular basis. Choosing to work the same amount each day with little variation in your schedule takes away the mental loophole that allows you to escape from getting things done now.
Using the present moment wisely instead of banking on time in the future can help you stay committed to your goals. If you have a project at work you’ve avoided for months or some languishing expense reports to file, think about how you can apply these strategies to move forward on those items today.



August 22, 2014
How Watson Changed IBM
Remember when IBM’s “Watson” computer competed on the TV game show “Jeopardy” and won? Most people probably thought “Wow, that’s cool,” or perhaps were briefly reminded of the legend of John Henry and the ongoing contest between man and machine. Beyond the media splash it caused, though, the event was viewed as a breakthrough on many fronts. Watson demonstrated that machines could understand and interact in a natural language, question-and-answer format and learn from their mistakes. This meant that machines could deal with the exploding growth of non-numeric information that is getting hard for humans to keep track of: to name two prominent and crucially important examples, keeping up with all of the knowledge coming out of human genome research, or keeping track of all the medical information in patient records.
So IBM asked the question: How could the fullest potential of this breakthrough be realized, and how could IBM create and capture a significant portion of that value? They knew the answer was not by relying on traditional internal processes and practices for R&D and innovation. Advances in technology — especially digital technology and the increasing role of software in products and services — are demanding that large, successful organizations increase their pace of innovation and make greater use of resources outside their boundaries. This means internal R&D activities must increasingly shift towards becoming crowdsourced, taking advantage of the wider ecosystem of customers, suppliers, and entrepreneurs.
IBM, a company with a long and successful tradition of internally-focused R&D activities, is adapting to this new world of creating platforms and enabling open innovation. Case in point: rather than keep Watson locked up in their research labs, they decided to release it to the world as a platform, to run experiments with a variety of organizations to accelerate development of natural language applications and services. In January 2014 IBM announced they were spending $1 billion to launch the Watson Group, including a $100 million venture fund to support start-ups and businesses that are building Watson-powered apps using the “Watson Developers Cloud.” More than 2,500 developers and start-ups have reached out to the IBM Watson Group since the Watson Developers Cloud was launched in November 2013.
So how does it work? First, with multiple business models. Mike Rhodin, IBM’s senior vice president responsible for Watson, told me, “There are three core business models that we will run in parallel. The first is around industries that we think will go through a big change in “cognitive” [natural language] computing, such as financial services and healthcare. For example, in healthcare we’re working with The Cleveland Clinic on how medical knowledge is taught. The second is where we see similar patterns across industries, such as how people discover and engage with organizations and how organizations make different kinds of decisions. The third business model is creating an ecosystem of entrepreneurs. We’re always looking for companies with brilliant ideas that we can partner with or acquire. With the entrepreneur ecosystem, we are behaving more like a Silicon Valley startup. We can provide the entrepreneurs with access to early adopter customers in the 170 countries in which we operate. If entrepreneurs are successful, we keep a piece of the action.”
IBM also had to make some bold structural moves in order to create an organization that could both function as a platform and collaborate with outsiders for open innovation. They carved out The Watson Group as a new, semi-autonomous, vertically integrated unit, reporting to the CEO. They brought in 2,000 people, a dozen projects, a couple of Big Data and content analytics tools, and a consulting unit (outside of IBM Global Services). IBM’s traditional annual budget cycle and business unit financial measures weren’t right for Watson’s fast pace, so, as Mike Rhodin told me, “I threw out the annual planning cycle and replaced it with a looser, more agile management system. In monthly meetings with CEO Ginni Rometty, we’ll talk one time about technology, and another time about customer innovations. I have to balance between strategic intent and tactical, short-term decision-making. Even though we’re able to take the long view, we still have to make tactical decisions.”
More and more, organizations will need to make choices in their R&D activities to either create platforms or take advantage of them. Those with deep technical and infrastructure skills, like IBM, can shift the focus of their internal R&D activities toward building platforms that can connect with ecosystems of outsiders to collaborate on innovation. The second and more likely option for most companies is to use platforms like IBM’s or Amazon’s to create their own apps and offerings for customers and partners. In either case, new, semi-autonomous agile units, like IBM’s Watson Group, can help to create and capture huge value from these new customer and entrepreneur ecosystems.



Is the Future of Shopping No Shopping at All?
In a survey on what he terms "predictive shopping," Harvard Law professor Cass Sunstein found that 41% of people would "enroll in a program in which the seller sent you books that it knew you would purchase, and billed your credit card." That number went down to 29% if the company didn't ask for your consent first.
But what if the products and services were different, like a sensor that knew you were almost out of dish detergent? Without consent, were people willing to have a company charge their account and send them more detergent? Most people (61%) weren't. But the results were a bit more interesting when Sunstein did a similar survey among university students. While most still weren't into being charged automatically for books they might like, "69% approved of automatic purchases by the home monitor, even without consent." The professor posits that "among younger people, enthusiasm is growing for predictive shopping, especially for routine goods where shopping is an annoyance and a distraction."
It's Not the Bus: Which Mode of Travel Provides the Happiest Commute? (CityLab)
While the results from a recent McGill University study aren't especially surprising — and come from a McGill-specific survey sample — they do add credence to what many people already know in their commuting heart of hearts: that walking, biking, or taking a commuter train to work is much more satisfying than driving or taking the subway or bus. My significant other, for example, loves biking to work because it's both enjoyable and on his own timeline — he pretty much always knows when he's going to arrive at work, which diminishes his extreme dislike of idling in traffic for no apparent reason (I don't mind it as much because of my interest in singing loudly, and poorly, in the car). And a long train ride can allow for reading or doing work, making the time more productive.
But there were some surprises: Bus riders and cyclists — both of whom travel about 22 minutes to work — had very different levels of satisfaction. So, time spent commuting isn't necessarily a consistent predictor of happiness. And, in the end, "people expressed more happiness with their commute when the mode they took was the mode they wanted to take."
Step Up, Employers: It's Not a Skills Gap: U.S. Workers Are Overqualified, Undertrained (Businessweek)
Add this research from Peter Cappelli to the ongoing debate about the skills gap. According to the Wharton professor, and explained by Businessweek's Matthew Philips, much of the problem lies in how we do (or don't) train employees. Back in 1979, for example, young American workers received 2.5 weeks of training per year; by 1991, only 17% of employees said they received any formal training within the year. And by 2011, a mere 21% of Americans had received any training within the past five years. The prevailing argument is that companies no longer train their employees because it's a bad investment (top talent will end up leaving anyway), and because they're relying on internships to teach young workers. But Cappelli says that "the fear of having a competitor reap the rewards of your investment are overblown" — to the detriment of both companies and workers. In the end, says Philips, "the problem may not be the skills workers ostensibly lack. It may be that employers' expectations are out of whack."
Yes: Can a Robot Be Too Nice? (Boston Globe)
As robots and algorithms become more and more central to pretty much everything we do, the question of how humans and robots interact becomes more and more important (I mean, just look at the robot bellhop). Leon Neyfakh does a great job of rounding up all the ways researchers are trying to nail down what types of robot personalities people respond to, and in what circumstances. When it comes to robot nurses, for example, people prefer an outgoing and assertive personality. However, people were not at all confident in the protective abilities of extraverted security guard robots. So the future is looking more and more like a place where "it's not enough for a machine to have an agreeable personality — it needs to have the right personality." And as researchers aim to figure out what these personalities are and how they might change depending on the circumstances (yes, it's conceivable that one robot personality could migrate between all the devices you use throughout the day), Neyfakh observes what always seems to be the bottom line when we talk about robots and their human pals: "What the ideal machine personalities turn out to be may expose needs and prejudices we're not even aware we have."
From Sentiment to Success: Why Uber Just Hired Obama's Campaign Guru (Wired)
Uber's great and all, except for one tiny problem: A lot of countries around the world think its business model is illegal. It's through this lens that the company's recent hire makes brilliant sense: David Plouffe, President Obama's 2008 campaign manager. Plouffe, as Wired's Marcus Wohlsen writes, was instrumental in "turning sentiment into success" six years ago. Plouffe engineered this through data — collecting it among potential voters and then micro-targeting based on the intelligence the campaign gathered. Uber, of course, gathers similar real-time data – data that could be used in a grassroots sort of way: Uber devotees who may not be aware of the company's regulatory problems can be recruited with specific messages to sign petitions and lobby their government representatives. Wohlsen puts this challenge nicely: "To survive, Uber is now about more than rides. It's about turning out the base."
BONUS BITS: You Aren't What You Wear
Yoga Poseurs: Athletic Gear Soars, Outpacing Sport Itself (Wall Street Journal)
This Pair of Bionic Pants Is a Chair That You Wear (Gizmodo)
Oh, This Bracelet? It's Just My Wearable Device Charger (Mashable)



