Marina Gorbis's Blog, page 1367

September 3, 2014

Learn from Your Analytics Failures

By far, the safest prediction about the business future of predictive analytics is that more thought and effort will go into prediction than analytics. That’s bad news and worse management. Grasping the analytic “hows” and “whys” matters more than the promise of prediction.


In the good old days, of course, predictions were called forecasts and stodgy statisticians would torture their time series and/or molest multivariate analyses to get them. Today, brave new data scientists discipline k-means clusters and random graphs to proffer their predictions. Did I mention they have petabytes more data to play with and process?


While the computational resources and techniques for prediction may be novel and astonishingly powerful, many of the human problems and organizational pathologies appear depressingly familiar. The prediction imperative frequently narrows focus rather than broadens perception.  “Predicting the future” can—in the spirit of Dan Ariely’s Predictably Irrational—unfortunately bring out the worst cognitive impulses in otherwise smart people. The most enduring impact of predictive analytics, I’ve observed, comes less from quantitatively improving the quality of prediction than from dramatically changing how organizations think about problems and opportunities.


Ironically, the greatest value from predictive analytics typically comes more from their unexpected failures than their anticipated successes. In other words, the real influence and insight come from learning exactly how and why your predictions failed. Why? Because it means the assumptions, the data, the model, and/or the analyses were wrong in some meaningfully measurable way. The problem—and pathology—is that too many organizations don’t know how to learn from analytic failure. They desperately want to make the prediction better instead of better understanding the real business challenges their predictive analytics address. Prediction foolishly becomes the desired destination instead of the introspective journey.


In pre-Big Data days, for example, a hotel chain used some pretty sophisticated mathematics, data mining, and time series analysis to coordinate its yield management pricing and promotion efforts. This ultimately required greater centralization, limiting local operators’ flexibility and discretion. The forecasting models—which were marvels—mapped out revenues and margins by property and room type. The projections worked fine for about a third of the hotels but were wildly, destructively off for another third. The forensics took weeks; the data were fine. Were competing hotels running unusual promotions that screwed up the model? Nope. For the most part, local managers followed the yield management rules.


Almost five months later, after the year’s financials were totally blown and HQ’s credibility shot, the most likely explanation materialized: The modeling group—the data scientists of the day—had priced against the hotel group’s peer competitors. They hadn’t weighted discount hotels into either pricing or room availability. For roughly a quarter of the properties, the result was both lower average occupancy and lower prices per room.


The modeling group had done everything correctly. Top management’s belief in its brand value and positioning excluded discounters from their competitive landscape. Think this example atypical or anachronistic? I had a meeting last year with another hotel chain that’s now furiously debating whether Airbnb’s impact should be incorporated into their yield management equations.


More recently, a major industrial products company made a huge predictive analytics commitment to preventive maintenance to identify and fix key components before they failed and more effectively allocate the firm’s limited technical services talent. Halfway through the extensive—and expensive—data collection and analytics review, a couple of the repair people observed that, increasingly, many of the subsystems could be instrumented and remotely monitored in real time. In other words, preventive maintenance could be analyzed and managed as part of a networked system. This completely changed the design direction and the business value potential of the initiative. The value emphasis shifted from preventive maintenance to efficiency management with key customers. Again, the predictive focus initially blurred the larger vision of where the real value could be.


When predictive analytics are done right, the analyses aren’t a means to a predictive end; rather, the desired predictions become a means to analytical insight and discovery. We do a better job of analyzing what we really need to analyze and predicting what we really want to predict. Smart organizations want predictive analytic cultures where the analyzed predictions create smarter questions as well as offer statistically meaningful answers. Those cultures quickly and cost-effectively turn predictive failures into analytic successes.


To paraphrase a famous saying in a data science context, the best way to predict the future is to learn from failed predictive analytics.



Predictive Analytics in Practice

An HBR Insight Center




A Predictive Analytics Primer
Nate Silver on Finding a Mentor, Teaching Yourself Statistics, and Not Settling in Your Career
Beware Big Data’s Easy Answers
Who’s Afraid of Data-Driven Management?




 •  0 comments  •  flag
Share on Twitter
Published on September 03, 2014 07:00

Your Company’s Purpose Is Not Its Vision, Mission, or Values

We hear more and more that organizations must have a compelling “purpose” — but what does that mean? Aren’t there already a host of labels out there that describe organizational direction? Do we need yet another?


I think we do, and I’ve pulled together a typology of sorts to help distinguish all these terms from one another.


A vision statement says what the organization wishes to be like in some years’ time. It’s usually drawn up by senior management, in an effort to take the thinking beyond day-to-day activity in a clear, memorable way. For instance, the Swedish company Ericsson (a global provider of communications equipment, software, and services) defines its vision as being “the prime driver in an all-communicating world.”


There’s also the mission, which describes what business the organization is in (and what it isn’t) both now and projecting into the future. Its aim is to provide focus for management and staff. A consulting firm might define its mission by the type of work it does, the clients it caters to, and the level of service it provides. For example: “We’re in the business of providing high-standard assistance on performance assessment to middle to senior managers in medium-to-large firms in the finance industry.”


Values describe the desired culture. As Coca-Cola puts it, they serve as a behavioral compass.  Coke’s values include having the courage to shape a better future, leveraging collective genius, being real, and being accountable and committed.


If values provide the compass, principles give employees a set of directions. The global logistics and mail service company TNT Express illustrates the difference in its use of both terms. TNT United Kingdom, the European market leader, lists “customer care” among nine key principles, describing it as follows: “Always listening to and building first-class relationships with our customers to help us provide excellent standards of service and client satisfaction.” TNT’s Australian branch takes a different approach: Rather than outline detailed principles, it highlights four high-level “core values,” including: “We are passionate about our customers.” Note the lighter touch, the broader stroke.


So how does purpose differ from all the above, which emphasize how the organization should view and conduct itself?


Greg Ellis, former CEO and managing director of REA Group, said his company’s purpose was “to make the property process simple, efficient, and stress free for people buying and selling a property.” This takes outward focus to a whole new level, not just emphasizing the importance of serving customers or understanding their needs but also putting managers and employees in customers’ shoes.  It says, “This is what we’re doing for someone else.” And it’s motivational, because it connects with the heart as well as the head. Indeed, Ellis called it the company’s “philosophical heartbeat.”


For other examples of purpose, look at the financial services company ING  (“Empowering people to stay a step ahead in life and in business”), the Kellogg food company (“Nourishing families so they can flourish and thrive”) and the insurance company IAG (“To help people manage risk and recover from the hardship of unexpected loss”).


If you’re crafting a purpose statement, my advice is this: To inspire your staff to do good work for you, find a way to express the organization’s impact on the lives of customers, clients, students, patients — whomever you’re trying to serve. Make them feel it.




Published on September 03, 2014 06:00

You Perform Better When You’re Competing Against a Rival

A runner can be expected to finish a 5 kilometer race about 25 seconds faster, on average, if a personal rival is also running, according to a study by Gavin J. Kilduff of New York University of 184 races conducted by a running club. Although past research has shown that competition imposed on strangers can be demotivating, Kilduff’s findings suggest that longstanding personal rivalries between similar contestants can boost both motivation and performance.




Published on September 03, 2014 05:30

How to Market Test a New Idea

“So,” the executive sponsor of the new growth effort said. “What do we do now?”


It was the end of a meeting reviewing progress on a promising initiative to bring a new health service to apartment dwellers in crowded, emerging-market cities. A significant portion of customers who had been shown a brochure describing the service had expressed interest in it. But would they actually buy it?  To find out, the company decided to test market the service in three roughly comparable apartment complexes over a 90-day period.


Before the test began, team members working on the idea had built a detailed financial model showing that it could be profitable if they could get 3% of customers in apartment complexes to buy it. In the market test, they decided to offer a one-month free trial, after which people would have the chance to sign up for a full year of the service. They guessed that 30% of customers in each complex would accept the free trial and that 10% of that group would convert to full-year subscribers.
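The team’s targets line up neatly: a 30% trial rate times a 10% conversion rate yields exactly the 3% penetration the financial model required. A quick back-of-the-envelope check (the rates and threshold come from the story; the code itself is just an illustration):

```python
# Sanity-check the market test targets: trial rate x conversion rate
# must clear the penetration threshold from the financial model.
def expected_penetration(trial_rate, conversion_rate):
    """Share of all customers expected to become full-year subscribers."""
    return trial_rate * conversion_rate

trial_rate = 0.30       # guessed share accepting the free one-month trial
conversion_rate = 0.10  # guessed share of trialists buying a full year
threshold = 0.03        # penetration needed for profitability

penetration = expected_penetration(trial_rate, conversion_rate)
print(f"Expected penetration: {penetration:.1%}")  # Expected penetration: 3.0%
```

Note how little slack the plan leaves: if either guess is even slightly high, the test lands below the threshold, which is exactly what happened.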


They ran the test, and, as always, learned a tremendous amount about the intricacies of positioning a new service and the complexities of actually delivering it. They ended the three months much more confident that they could successfully execute their idea, with modifications of course.


But then they started studying the data, which roughly looked as follows:


[Chart: Buy Your Offering]


Overall trial levels were lower than expected (except in Complex 2); conversion of trials to full-year subscribers was a smidge above expectations (and significantly higher in Complex 3); but average penetration levels fell beneath the magic 3% threshold.


What were the data saying? On the one hand, the trial fell short of its overall targets. That might suggest stopping the project or, perhaps, making significant changes to it. On the other hand, it only fell five customers short of targets. So, maybe the test just needed to be run again. Or maybe the data even suggest the team should move forward more rapidly. After all, if you could combine the high rate of trial in Complex 2 with the high conversion rate of Complex 3…


It’s very rare that innovation decisions are black and white. Sometimes the drug doesn’t work or the regulator simply says no, and there’s obviously no point in moving forward. Occasionally results are so overwhelmingly positive that it doesn’t take too much thought to say full steam ahead. But most times, you can make convincing arguments for any number of next steps:  keep moving forward, make adjustments based on the data, or stop because results weren’t what you expected.


The executive sponsor felt the frustration that is common to companies that are used to the certainty that tends to characterize operational decisions, where historical experience has created robust decision rules that remove almost all need for debate and discussion.


Still, that doesn’t mean that executives have to make decisions blind. Start, as this team did, by properly designing experiments. Formulate a hypothesis to be tested. Determine specific objectives for the test. Make a prediction, even if it is just a wild guess, as to what should happen. Then execute in a way that enables you to accurately measure your prediction.


Then involve a dispassionate outsider in the process, ideally one who has learned through experience how to handle decisions with imperfect information. So-called devil’s advocates have a bad reputation among innovators because they seem to say no just to say no. But someone who helps you honestly see weak spots to which you might be blind plays a very helpful role in making good decisions.


Avoid considering an idea in isolation. In the absence of choice, you will almost always be able to develop a compelling argument about why to proceed with an innovation project. So instead of asking whether you should invest in a specific project, ask if you are more excited about investing in Project X versus other alternatives in your innovation portfolio.


And finally, ensure there is some kind of constraint forcing a decision. My favorite constraint is time. If you force decisions in what seems like an artificially short time period, you will imbue your team with a strong bias to action, which is valuable because the best learning comes from getting as close to the market as possible. Remember, one of your options is to run another round of experiments (informed, of course, by what you’ve learned to date), so a calendar constraint on each experiment doesn’t force you to rush to judgment prematurely.


That’s in fact what the sponsor did in this case — decided to run another experiment, after first considering redirecting resources to other ideas the company was working on. The team conducted another three-month market test, with modifications based on what was learned in the first run. The numbers moved up, so the company decided to take the next step toward aggressive commercialization.


This is hard stuff but a vital discipline to develop or else your innovation pipeline will get bogged down with initiatives stuck in a holding pattern. If you don’t make firm decisions at some point, you have made the decision to fail by default.




Published on September 03, 2014 05:00

September 2, 2014

How to Manage Scheduling Software Fairly

Starbucks workers recently scored a point against the machine. After a lengthy New York Times story, the company decided to adjust some of its controversial scheduling practices, eliminating “clopening” — when workers are required to close at night and re-open in the morning — and requiring at least a week’s notice of upcoming schedules.


In this case, “the machine” refers to a real machine: the highly sophisticated automated software Starbucks uses to schedule its 130,000 baristas, sometimes giving them less than a few days’ notice about their schedules in order to “optimize” its workforce.


Starbucks is just one of many companies using this type of technology, and it’s not hard to understand why. Until recently, determining who works when involved store managers manually slotting each employee into shifts on paper. Automation not only frees store managers to focus on customers, but can take into account much more data than a person can remember — historical customer patterns, weather, experience at neighborhood stores — so workers spend less time either with nothing to do or being completely overwhelmed with long lines and unhappy customers.


As the argument goes, that’s good for workers, who don’t want to be bored or overwhelmed, and it’s good for retailers, whose biggest cost and revenue driver is typically labor.


Our collective research has also shown that retailers really do struggle with scheduling. In a study of 41 stores of a women’s apparel retailer, for example, Saravanan Kesavan and his co-authors found that all of the stores were significantly understaffed during the peak periods of the day and significantly overstaffed during the remaining hours. The authors estimated that the retailer was losing about 9% of sales and 7% of profits due to this mismatch.


So why not implement just-in-time, software-driven staffing across the board? The problem — and one that Starbucks was forced to face first-hand— is that while scheduling software may seem “like magic,” as one of the major software vendors in the Times article put it, it’s actually not. Starbucks’ experience is common among a number of retailers who have taken their passion for engineering and optimizing schedules too far. As soon as a computer is scheduling your people at 15-minute increments to match the peaks and valleys of customer demand, employees’ desire to live a normal, predictable life becomes a barrier to profitability.


Three additional realities get in the way:


Perfect forecasts don’t exist. To produce an optimal labor schedule, the scheduling software must forecast customer patterns accurately—if you want to schedule labor at 15-minute increments, you must also understand demand at 15-minute increments. The dirty little secret is that even the most advanced scheduling software, incorporating every bell and whistle, tends to be wrong at least as often as it is right when the time intervals are short.


For example, the charts below show the variation in customer arrival pattern for a retail store on Saturdays, and then a breakdown of the same store between noon and 1 p.m. on Saturdays. The coefficient of variation, a measure of how variable the traffic is, increases from 10% to 31% as we go from daily to hourly data. This implies that it would be a lot harder to predict hourly traffic than daily traffic, and predictions generated at 15-minute intervals are guaranteed to be even more erroneous.
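The same effect is easy to reproduce with simulated data. This sketch uses synthetic numbers (not the store data behind the charts): it draws Poisson arrivals in 15-minute buckets across a year of Saturdays and shows the coefficient of variation growing as the time slices get finer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a year of Saturdays: Poisson arrivals in 15-minute buckets
# over a 12-hour day (synthetic numbers for illustration only).
n_days = 52
buckets_per_day = 48  # 12 open hours x four 15-minute buckets
arrivals = rng.poisson(lam=20, size=(n_days, buckets_per_day))

def coeff_of_variation(x):
    """Standard deviation relative to the mean."""
    return x.std() / x.mean()

daily = arrivals.sum(axis=1)                          # one total per Saturday
hourly = arrivals.reshape(n_days, -1, 4).sum(axis=2)  # totals per hour

print(f"Daily CV:  {coeff_of_variation(daily):.0%}")
print(f"Hourly CV: {coeff_of_variation(hourly):.0%}")
# The hourly figure is reliably several times the daily one: the finer
# the time slice, the noisier, and harder to forecast, the demand.
```

Even with a perfectly constant underlying arrival rate, pure counting noise makes the hourly totals far more variable than the daily ones; real stores add hour-of-day and weather effects on top of that.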


[Chart: Customer Arrival Patterns]


Because of this, good scheduling software tends to serve the “normal” customers well (those buying cappuccinos on the way to work every morning), but it may be at least as important to serve the abnormal ones (like a person who buys 20 lattes for a single meeting).


Tracking everything is unreliable. The response to unpredictable customer behavior has been, “well, let’s just track everything more carefully.” But tracking doesn’t always yield better predictions. For one, there’s inherent variability, as shown in the arrival pattern of customers to 35 stores of a retail chain over the course of a day. The differences between the top and bottom lines – in other words, the differences between two stores – don’t offer much in the way of insights.


[Chart: Hourly Variation]


Increased tracking can also create distortions in the data and unforeseen employee reactions, as Ethan Bernstein has found in his research. At a factory in Southern China, for example, executives thought that watching workers would help managers increase productivity and lower costs. It turns out that the opposite was true: Employees were more likely to be innovative when they weren’t being monitored, and production slowed when all eyes were on them.


A lot of flexibility isn’t necessarily a good thing. Saravanan Kesavan, Brad Staats, and their co-author have shown that temporary and part-time workers can help improve sales and profitability up to a point. For example, store profits were found to increase when the number of part-time and temporary workers was increased from zero to 4-5 for every 10 full-time workers. But beyond that point, any further increase in the number of those workers decreased store profits as the intangible costs of lower motivation and greater coordination dominated the benefits of scheduling workers to meet unpredictable demand.


So what should retailers do? To start, they can learn a lot from the considerable variation in how these systems have been implemented across companies. We see them as falling somewhere in the spectrum of purely creating value for management and purely creating value for workers.


The systems designed solely to create value for management fail to take into account so many unpredictable (and human) variables that they often result in public failures. In 2011, over 4,000 workers from Macy’s threatened to go on strike, in part because employees felt that management was pushing for an online scheduling system that would make their schedules unpredictable. Working mothers have often accused Walmart of hurting their lives with unpredictable schedules and low wages. The story of Jannette Navarro, the Starbucks worker profiled in the Times, absolutely rings true.


At the same time, strictly human-centric approaches can be problematic: If a store relies on a remote scheduler, that’s a person who isn’t spending time with customers or employees. Like the scheduler Ms. Navarro had to beg in order to get 40 hours, he or she may also exert unfair power over workers.


Some systems, however, are implemented to create value for workers while simultaneously taking advantage of software’s benefits. If a store has a scheduling system that is accessible to, and modifiable by, a team of corporate managers, store managers, and workers, it can balance human needs, customer needs, and company needs in a fair, transparent, and more informed way. These implementations can make ordinary workers, even part-time workers, better at managing themselves.


In other words, the best kinds of scheduling systems involve both managers and software, not for the purpose of more tightly controlling workers, but to inform them on how optimal schedules stack up against predicted forecasts. For example, what if store-level employees could edit the schedules produced by the machine, but were held accountable for the ultimate effectiveness of them?


This is the exact approach taken by Belk, the largest family-owned and -operated department store in the United States. Before their tool was implemented, scheduling was performed by store managers and schedulers who balanced profits and worker needs to create a “fair” schedule that worked for everyone — incorporating preferences and an equal amount of weekend work. These types of nuances, with lots of variation, were too complicated for any workforce software to take into account.


So when Belk implemented their new tool, employees saw its failure to create fair schedules as “bugs” in the software.


But unlike other retailers who take an iron hand to push compliance with a new system, this retailer let the team at the store “edit” the system to “fix” the “bugs” — essentially, they allowed workers and their supervisors to ensure they had the days or hours off they needed, more than a week in advance, by overriding the system.


At the same time, a central workforce team at corporate was tasked with analyzing a sampling of edits to understand their reasons and benefits. Some edits were, of course, productive; others involved resistance to change or misunderstandings and miscommunications. Belk then worked with its store managers through weekly meetings to encourage compliance in areas where the scheduling system made sense, and at the same time provided feedback to the scheduling company to update its software where the schedules did not make sense.


Belk now revises about 50% of its scheduling based on this new approach, a healthy balance between the efficiency you get from a machine and the intelligence you get from human intervention. And the company reported a 2% increase in gross profits several months after implementing the override system.


Ultimately, the success of scheduling systems depends on whether they serve as tools for or against the workers. In many ways, data-driven scheduling software is attractive to retailers because it gives them unprecedented transparency. But the ultimate success of these systems depends on this same transparency being available to employees as well. When management takes enabling its workers seriously — when these tools become an experiment in worker learning rather than top-down compliance — results can far exceed even the most magical predictions that scheduling software initially promised.




Published on September 02, 2014 08:00

A Predictive Analytics Primer

No one has the ability to capture and analyze data from the future. However, there is a way to predict the future using data from the past. It’s called predictive analytics, and organizations do it every day.


Has your company, for example, developed a customer lifetime value (CLTV) measure? That’s using predictive analytics to determine how much a customer will buy from the company over time. Do you have a “next best offer” or product recommendation capability? That’s an analytical prediction of the product or service that your customer is most likely to buy next. Have you made a forecast of next quarter’s sales? Used digital marketing models to determine what ad to place on what publisher’s site? All of these are forms of predictive analytics.
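To make the first of those examples concrete, CLTV in its simplest textbook form is just per-period margin, decayed by the retention rate and discounted back to today. The sketch below is a generic formula, not any particular company’s model, and the customer numbers are hypothetical:

```python
# A generic customer lifetime value (CLTV) sketch: margin earned each
# period, decayed by the retention rate and discounted back to today.
# (Textbook formula for illustration; the inputs below are hypothetical.)
def simple_cltv(margin_per_period, retention_rate, discount_rate, periods=20):
    """Expected discounted margin over `periods` periods."""
    return sum(
        margin_per_period * retention_rate**t / (1 + discount_rate) ** t
        for t in range(periods)
    )

# Hypothetical customer: $100 margin/year, 80% annual retention, 10% discount.
print(round(simple_cltv(100, 0.80, 0.10), 2))
```

Real CLTV models layer per-customer predictions of margin and retention on top of this skeleton, and that is exactly where the predictive analytics comes in.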


Predictive analytics are gaining in popularity, but what do you—a manager, not an analyst—really need to know in order to interpret results and make better decisions?  How do your data scientists do what they do?  By understanding a few basics, you will feel more comfortable working with and communicating with others in your organization about the results and recommendations from predictive analytics.  The quantitative analysis isn’t magic—but it is normally done with a lot of past data, a little statistical wizardry, and some important assumptions. Let’s talk about each of these.


The Data:  Lack of good data is the most common barrier to organizations seeking to employ predictive analytics. To make predictions about what customers will buy in the future, for example, you need to have good data on who is buying (which may require a loyalty program, or at least a lot of analysis of their credit cards), what they have bought in the past, the attributes of those products (attribute-based predictions are often more accurate than the “people who buy this also buy this” type of model), and perhaps some demographic attributes of the customer (age, gender, residential location, socioeconomic status, etc.). If you have multiple channels or customer touchpoints, you need to make sure that they capture data on customer purchases in the same way your previous channels did.


All in all, it’s a fairly tough job to create a single customer data warehouse with unique customer IDs on everyone, and all past purchases customers have made through all channels. If you’ve already done that, you’ve got an incredible asset for predictive customer analytics.


The Statistics:  Regression analysis in its various forms is the primary tool that organizations use for predictive analytics. It works like this in general: An analyst hypothesizes that a set of independent variables (say, gender, income, visits to a website) are statistically correlated with the purchase of a product for a sample of customers. The analyst performs a regression analysis to see just how correlated each variable is; this usually requires some iteration to find the right combination of variables and the best model. Let’s say that the analyst succeeds and finds that each variable in the model is important in explaining the product purchase, and together the variables explain a lot of variation in the product’s sales. Using that regression equation, the analyst can then use the regression coefficients—the degree to which each variable affects the purchase behavior—to create a score predicting the likelihood of the purchase.


Voila! You have created a predictive model for other customers who weren’t in the sample. All you have to do is compute their score, and offer the product to them if their score exceeds a certain level. It’s quite likely that the high scoring customers will want to buy the product—assuming the analyst did the statistical work well and that the data were of good quality.
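Here is a minimal sketch of that workflow on synthetic data. The variables, coefficients, and the 0.5 offer threshold are all invented for illustration, and a real team would use a statistics package rather than the hand-rolled logistic regression below:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training sample: two independent variables and a purchase flag.
n = 1000
income = rng.normal(6, 2, n)   # income in tens of $k (invented)
visits = rng.poisson(3, n)     # visits to the website (invented)
true_logit = -4 + 0.3 * income + 0.5 * visits
bought = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit a logistic regression by plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), income, visits])  # intercept + variables
w = np.zeros(3)                                    # regression coefficients
for _ in range(40_000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.05 * X.T @ (bought - p) / n

def score(income, visits):
    """Predicted purchase probability for a customer outside the sample."""
    return 1 / (1 + np.exp(-(w[0] + w[1] * income + w[2] * visits)))

# Score two new customers; offer the product when the score clears 0.5.
for customer in [(9.0, 6), (3.0, 0)]:
    s = score(*customer)
    print(customer, round(s, 2), "offer" if s > 0.5 else "skip")
```

The fitted coefficients play exactly the role described above: each one captures how strongly its variable moves the purchase probability, and scoring a new customer is just plugging their values into the equation.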


The Assumptions:  That brings us to the other key factor in any predictive model—the assumptions that underlie it. Every model has them, and it’s important to know what they are and monitor whether they are still true. The big assumption in predictive analytics is that the future will continue to be like the past. As Charles Duhigg describes in his book The Power of Habit, people establish strong patterns of behavior that they usually keep up over time. Sometimes, however, they change those behaviors, and the models that were used to predict them may no longer be valid.


What makes assumptions invalid? The most common reason is time. If your model was created several years ago, it may no longer accurately predict current behavior. The greater the elapsed time, the more likely customer behavior has changed. Some Netflix predictive models, for example, that were created on early Internet users had to be retired because later Internet users were substantially different. The pioneers were more technically-focused and relatively young; later users were essentially everyone.


Another reason a predictive model’s assumptions may no longer be valid is if the analyst didn’t include a key variable in the model, and that variable has changed substantially over time. The great—and scary—example here is the financial crisis of 2008-9, caused largely by invalid models predicting how likely mortgage customers were to repay their loans. The models didn’t include the possibility that housing prices might stop rising, and even that they might fall. When they did start falling, it turned out that the models became poor predictors of mortgage repayment. In essence, the fact that housing prices would always rise was a hidden assumption in the models.


Since faulty or obsolete assumptions can clearly bring down whole banks and even (nearly!) whole economies, it’s pretty important that they be carefully examined. Managers should always ask analysts what the key assumptions are, and what would have to happen for them to no longer be valid. And both managers and analysts should continually monitor the world to see if key factors involved in assumptions might have changed over time.


With these fundamentals in mind, here are a few good questions to ask your analysts:



Can you tell me something about the source of data you used in your analysis?
Are you sure the sample data are representative of the population?
Are there any outliers in your data distribution? How did they affect the results?
What assumptions are behind your analysis?
Are there any conditions that would make your assumptions invalid?

Even with those cautions, it’s still pretty amazing that we can use analytics to predict the future. All we have to do is gather the right data, do the right type of statistical model, and be careful of our assumptions. Analytical predictions may be harder to generate than those by the late-night television soothsayer Carnac the Magnificent, but they are usually considerably more accurate.



Predictive Analytics in Practice

An HBR Insight Center




Nate Silver on Finding a Mentor, Teaching Yourself Statistics, and Not Settling in Your Career
Beware Big Data’s Easy Answers
Who’s Afraid of Data-Driven Management?
When to Act on a Correlation, and When Not To




Published on September 02, 2014 07:00

It’s Never Been More Lucrative to Be a Math-Loving People Person

Parents who spend a good chunk of the week shuttling kids to and from soccer practice or drama club might be comforted by new research that suggests this effort is not in vain – as long as their kids are good at math, too.


A recent paper from UCSB found that the return on being good at math has gone up over the last few decades, as has the return on having high social skills (some combination of leadership, communication, and other interpersonal skills). But, the paper argues, the return on the two skills together has risen even faster.


What does all that have to do with soccer practice? The research compared two groups of white, male U.S. high school seniors – the class of 1972 and the class of 1992 – to see how earnings associated with social and math skills have changed over time. Using two National Center for Education Statistics (NCES) surveys, it looked at senior year math scores on standardized tests, questions about extracurricular participation and leadership roles, and individual earnings seven years after graduating high school. And it corroborated the findings with Census and CPS data.


The analysis found that while math scores, sports, leadership roles, and college education were all associated with higher earnings over the 1979-1999 period, the earnings premium grew fastest over time for those who were both good at math and engaged in high school sports or leadership activities. In other words, it pays to be a sociable math whiz, more so today than a generation ago.
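The kind of estimate the paper describes can be sketched with a wage regression that includes an interaction term. The data below are synthetic, not the NCES surveys, and every coefficient is invented; the sketch only shows how ordinary least squares can recover a separate premium for holding both skills at once:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Synthetic workers (illustrative only).
math = rng.normal(0, 1, n)        # standardized math score
social = rng.binomial(1, 0.5, n)  # 1 if sports/leadership participant

# Suppose true log earnings reward each skill separately, plus an
# extra 0.12 premium for the combination of the two.
log_wage = (2.0 + 0.10 * math + 0.08 * social
            + 0.12 * math * social + rng.normal(0, 0.05, n))

# Wage regression with an interaction term, via least squares.
X = np.column_stack([np.ones(n), math, social, math * social])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)

# beta[3] estimates the return to holding BOTH skills together,
# over and above the return to each one alone.
print(beta[3])  # close to the true premium of 0.12
```

The interaction coefficient is the statistical expression of “multiskilled”: if it is positive and rising across cohorts, the combination is worth more than the sum of its parts.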


Some may be skeptical that high school sports participation or club leadership (the study also includes publications and performing arts groups) are accurate indicators of “social skills” – perhaps rightfully so. But these extracurriculars, which typically involve teamwork, communication, and general interaction with others, have long been associated with the development of social skills. (Whether these activities foster these skills or attract already social kids is another question.) And the paper looked at how they tend to affect future careers:


The sports/leadership group is likely to be employed in an occupation requiring higher levels of responsibility for direction, control and planning, even after controlling for high school math scores, psychological measures, and college completion. This is compelling evidence that participation in high school sports or leadership activities – a behavioral indicator of social skills – can be linked to … complex interpersonal skills.


The other justification for using sports and clubs as a proxy for social skills was methodological. In order to measure how the price of social skills has changed over time, comparable metrics were needed. They may not be perfect, but these categories stayed available and consistent over time.


The analysis was restricted to white men for the same reason – their test scores and activity levels remained the same, while a lot of things were changing for other groups. According to the paper, “Math scores were stable across cohorts among white men, but not among black students – and women’s participation in high school roles and activities changed dramatically during these years.”


But the author argues that the findings are still likely generalizable – and help to explain the changing demand in the labor market for different skills.


According to the data, while people focused on the surge in demand for math skills in the ‘80s and ‘90s, there was a concurrent (and underappreciated) increase in demand for math and social skills together. Employment in high-skill occupations increased between 1977 and 2002, but the paper found that, among the groups studied, all of that growth was in jobs requiring both analytic and social skills. Employment in jobs requiring just one or the other didn’t increase over time. And this growing importance of “multiskilled” individuals in the labor force can be seen in the higher earnings for those who played sports or led in high school.


Why the increasingly valuable relationship between the two skills? Cathy Weinberger, the author, says answering that requires further research. But others have studied how technological innovations affect workforce skill requirements. Weinberger mentioned one study that found that adopting new technologies not only resulted in technical training for workers, but also in training to develop their complex communication skills and teamwork. So this rise in demand for social skills, happening alongside the rise in demand for math skills, could be the result of technological progress.


The data suggests that today’s economy rewards the balance of quantitative and social skills more than ever. That has ramifications for how we educate children – calling into question schools’ heightened focus on standardized testing, as opposed to a broader view of skills development – as well as for our own careers. In an era even more defined by rapid technological innovation, we’re increasingly expected to bring technical savvy and interpersonal know-how to the table. Quantitative reasoning is understandably in high demand, but so too are the skills learned on the sports field.




Published on September 02, 2014 06:00

Beware Consumers’ Assumptions About Your Green Products

In an experiment, people expressed greater intentions to purchase a dish soap when they were told its environmental benefits were an “unintended side effect” of the product-development process, as opposed to a planned feature (5.65 versus 4.77 on a 9-point scale, on average), says a team led by George E. Newman of Yale. Results for other products were similar. The apparent reason: Consumers tend to assume that product enhancements in one dimension — such as environmental impact — come at the expense of performance on other dimensions.




Published on September 02, 2014 05:30

What to Do on Your First Day Back from Vacation

You come back from vacation and start your game of catch-up. This is an especially challenging game if you’re a senior leader. You have hundreds, maybe thousands of emails, a backlog of voicemails, and a to-do list that doubled or tripled in length while you were away. You need to respond to the pent-up needs of clients, managers, colleagues, employees, and vendors. You need to fight fires. You need to regain control.


So you do your best to work through the pileup, handling the most urgent items first, and within a few days, you’re caught up and ready to move forward. You’re back in control. You’ve won.


Or have you?


If that’s your process, you’ve missed a huge leadership opportunity.


What’s the most important role of a leader?  Focus.


As a senior leader, the most valuable thing you can do is to align people behind your business’s most important priorities. If you do that well, the organization will function at peak productivity and have the greatest possible impact.  But that’s not easy to do. It’s hard enough for any one of us to be focused and aligned with our most important objectives. To get an entire organization aligned is crazy hard.


Once in a while, though, you get the perfect opportunity. A time when it’s a little easier, when people are more open, when you can be more clear, when your message will be particularly effective.


Coming back from vacation is one of those opportunities. You’ve gotten some space from the day to day. People haven’t heard from you in a while. Maybe they’ve been on vacation too. They’re waiting. They’re more influenceable than usual.


Don’t squander this opportunity by trying to efficiently wrangle your own inbox and to-do list. Before responding to a single email, consider a few questions:


What’s your top imperative for the organization right now? What will make the most difference to the company’s results? What behaviors do you need to encourage if you are going to meet your objectives? And, perhaps most importantly, what’s less important?


The goal in answering these questions is to choose three to five major things that will make the biggest difference to the organization. Once you’ve identified those things, you should be spending 95% of your energy moving them forward.


How should you do it?



Be very clear about your three to five things. Write them down and choose your words carefully. Read them aloud. Do they sound articulate? Succinct? Clear? Useful? Will they be a helpful guide for people when they’re making decisions and taking actions?
Use them as the lens through which you look at – and filter – every decision, conversation, request, to-do, and email you work through. When others make a request or ask you to make a decision, state the relevant priority out loud, as in “Given that we’re trying to accomplish X, it would make sense to do Y.”

Will that email you’re about to respond to reinforce your three to five priorities? Will it create momentum in the right direction? If so, respond in a way that tightens the alignment and clarifies the focus by tying your response as closely as you can to one or more of the three to five things, as you have written them.


If you look at an email and can’t find a clear way to connect it to the organization’s top three to five priorities, then move on to the next email. Don’t be afraid to de-prioritize issues that don’t relate to your top three to five things. This is all about focus, and in order to focus on some things, you need to ignore others.


You’ve got this wonderful opportunity, a rare moment in time when your primary role and hardest task – to focus the organization – becomes a little easier. Don’t lose it.


Coming back from vacation isn’t simply about catching up. It’s about getting ahead.




Published on September 02, 2014 05:00

September 1, 2014

How Beacons Are Changing the Shopping Experience

Beacons are taking the world of mobile by storm. They are low-powered radio transmitters that use Bluetooth Low Energy technology to send signals to smartphones entering their immediate vicinity. In the months and years to come, we’ll see beaconing applied in all kinds of valuable ways.


For marketers in particular, beacons are important because they allow more precise targeting of customers in a locale. A customer approaching a jewelry counter in a department store, for example, can receive a message from a battery-powered beacon installed at the counter, offering information or a promotion related specifically to the merchandise on display. In a different department of the same store, another beacon transmits a different message. Before beacons, marketers could use geofencing technology, so that a message, advertisement, or coupon could be sent to consumers when they were within a certain range of a geofenced area, such as within a one-block radius of a store. However, that technology typically relies on GPS tracking, which only works well outside the store. With beaconing, marketers can lead and direct customers to specific areas and products within a store or mall.


As a point of technical accuracy, the beacon itself does not really contain messaging; rather, it broadcasts a unique code that can be read only by certain mobile apps. Thus, the carrier of the smartphone has to have installed such an app; if he or she has not done so, no message will arrive, and the consumer can opt out at any time. But the key to beaconing’s effectiveness is that the app does not actually have to be running to be awakened by the beacon signal.
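For the technically curious, here is a minimal sketch of what an app might do with that unique code, assuming the widely documented iBeacon frame layout; the UUID value and the parsing helper below are illustrative, not any vendor’s actual API:

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse the manufacturer-specific data of an iBeacon advertisement.

    Layout: Apple company ID (0x004C, little-endian), beacon type 0x02,
    payload length 0x15, then a 16-byte UUID, a 2-byte major and 2-byte
    minor (both big-endian), and a signed calibrated TX power byte.
    """
    company, btype, length = struct.unpack_from("<HBB", mfg_data, 0)
    if company != 0x004C or btype != 0x02 or length != 0x15:
        return None  # not an iBeacon frame
    uid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor, tx_power = struct.unpack_from(">HHb", mfg_data, 20)
    return {"uuid": str(uid), "major": major, "minor": minor,
            "tx_power": tx_power}

# A made-up frame: UUID identifies the retailer's campaign, major might
# identify the store, and minor the department or display.
frame = (bytes.fromhex("4c000215")
         + uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0").bytes
         + bytes.fromhex("0001000ac5"))
beacon = parse_ibeacon(frame)
print(beacon["major"], beacon["minor"])  # e.g. store 1, department 10
```

The beacon broadcasts only those numbers; all the messaging, imagery, and offer logic live in the app and its back end, which is why the shopper must have a participating app installed.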


Think about it, and you realize that beaconing has been the missing piece in the whole mobile-shopping puzzle. The technology is essentially invisible and can work without the mobile consumer’s having to do anything – usually a major hurdle for any mobile shopping technology. The shopper only has to agree in advance to receive such messages while shopping.


So imagine walking by or into a store and receiving a text message triggered by a beacon at the store entrance. It alerts you that mobile shoppers are eligible for certain deals, which you can receive if you want. Assuming you accept, you begin receiving highly relevant messaging in the form of well-crafted, full-screen images based on what department or aisle you are strolling through at the moment. Here’s an example:


[Image: a beacon-triggered, full-screen in-store offer]


Implementing beaconing is less about installing the actual beacons and much more about rethinking the overall shopping experience they can help shape. Since the best way to imagine the possibilities is through actual, small-scale deployments, many retailers have spent the past several months doing exactly that: quietly experimenting and learning. Now, many are ready to scale up their initiatives, and beacons are bursting onto the scene in a big way. For various purposes, they are being used by retailers such as Timberland, Kenneth Cole, and Alex and Ani, hoteliers such as Marriott, and a variety of sports stadiums. Here are some details from a handful of companies – each, by the way, running on a different beacon platform:



Hudson’s Bay Company (HBC). The owner of Lord & Taylor, Hudson’s Bay, and Saks recently became the first major retailer to launch a North American beacon deployment, in its U.S. and Canadian stores. A shopper with the SnipSnap coupon app on an iPhone can receive messages and offers from seven separate in-store, beacon-triggered advertising campaigns. Like some others, the retailer is not relying on its own app for the beacon recognition, but rather is using outside, third-party apps that more people are likely to already have on their phones. (This also allows Lord & Taylor to use the beacon program for new customer acquisition.) The Hudson’s Bay beacon program runs on the advertising platform of Boston-based Swirl.
Hillshire Brands. In what appears to be the first U.S.-wide beacon deployment by a brand, the maker of Ball Park Franks, Jimmy Dean sausage, Sara Lee, and the Hillshire Farm portfolio of products used beacons to reach grocery shoppers for the launch of its new American Craft sausage links in the top 10 U.S. grocery markets. Based on an analysis by Hillshire’s agency BPN (part of the IPG Mediabrands global network), there were 6,000 in-store engagements in the first 48 hours of the two-month trial, and purchase intent increased twentyfold. Shoppers needed an app such as Epicurious, ZipList, Key Ring, or CheckPoints; this beacon platform is run by InMarket.
Universal Display. This global mannequin company, based in London and New York, is putting beacons inside mannequins in store windows. Why? To allow passersby to instantly see the details of the outfit a mannequin is wearing – and purchase any of its components right from their phones. The beaconed mannequins are in U.K. stores such as House of Fraser, Hawes & Curtis, Bentalls, and Jaeger, and will soon come to stores in the U.S. Here, the beacon app used is Iconeme, which is also the platform.
Simon Malls. The giant of retail real estate is putting location-based technology into more than 200 of its shopping malls, targeting the complexes’ common areas. For beacon recognition, the mall owner is using its own Simon Malls app, which already contains mall information ranging from maps to dining options. That beacon platform is run by Mobiquity.
Regent Street. London’s mile-long, high-end shopping street has some 140 store entrances, and now has beacons at the entrances of many of them. The beacon app used is the Regent Street app, slated to be promoted on the sides of double-decker buses that run along Regent Street. The app allows shoppers to pre-select the categories that interest them and the ones that don’t, making the messages they receive more relevant to them. That platform is run by Autograph.

That’s a lot of activity to report, but the truth is that it only constitutes a vanguard. Most shoppers have yet to be beaconed; many will encounter the technology before the end of this year. How long will it be until it’s hard to find someone who is not familiar with beaconing? How long till it’s hard to remember a time when the marketing messages you encountered had nothing to do with where you were? I would guess, not long at all.




Published on September 01, 2014 08:00
