Marina Gorbis's Blog
September 15, 2014
The Marshmallow Test for Grownups
Originally conducted by psychologist Walter Mischel in the late 1960s, the Stanford marshmallow test has become a touchstone of developmental psychology. Children at Stanford’s Bing Nursery School, aged four to six, were placed in a room furnished only with a table and chair. A single treat, selected by the child, was placed on the table. (In addition to marshmallows, the researchers also offered Oreo cookies and pretzel sticks.) Each child was told that if they waited for 15 minutes before eating the treat, they would be given a second treat. Then they were left alone in the room.
Follow-up studies with the children later in adolescence showed a correlation between an ability to wait long enough to obtain a second treat and various forms of life success, such as higher SAT scores. And a 2011 fMRI study conducted on 59 original participants—now in their 40s—by Cornell’s B.J. Casey showed higher levels of brain activity in the prefrontal cortex among those participants who delayed immediate gratification in favor of a greater reward later on. This finding strikes me as particularly important because of the research that’s emerged over the last two decades on the critical role played by the prefrontal cortex in directing our attention and managing our emotions.
As adults we face a version of the marshmallow test nearly every waking minute of every day. We’re not tempted by sugary treats, but by our browser tabs, phones, tablets, and (soon) our watches—all the devices that connect us to the global delivery system for those blips of information that do to us what marshmallows do to preschoolers.
Sugary treats tempt us into unhealthy eating habits because the agricultural and commercial systems that meet our nutritional needs today are so vastly different from the environment in which we evolved as a species. Early humans lived in a calorie-poor world, and something like a piece of fruit was both rare and valuable. Our brains developed a response mechanism to these treats that reflected their value—a surge of interest and excitement, a feeling of reward and satisfaction—which we find tremendously pleasurable. But as we’ve reshaped the world around us, radically diminishing the cost and effort involved in obtaining calories, we still have the same brains we evolved thousands of years ago, and this mismatch is at the heart of why so many of us struggle to resist tempting foods that we know we shouldn’t eat.
A similar process is at work in our response to information. Our formative environment as a species was information-poor as well as calorie-poor. The features of that environment—and specifically the members of our immediate community and our interactions with them—typically changed rarely and gradually. New information in the form of new community members or new ways of interacting were unusual and notable events that typically signified something of great importance. Just as our brains developed a response mechanism that prized sugary treats, we evolved to pay close attention to new information about the people around us and our interactions with them.
But just as the development of industrial agriculture and mass commerce has profoundly altered our caloric environment, global connectivity has profoundly altered our information environment. We are now ceaselessly bombarded with new information about the people around us—and the definition of “people around us” has fundamentally changed, putting us in touch with more people in an hour than early humans met in their entire lives. All of this poses a critical challenge to our brains—the adult version of the marshmallow test.
Not only are we constantly interrupted by alerts, alarms, beeps, and buzzes that tell us some new information has arrived, we constantly interrupt ourselves to seek out new information. We pull out our phones while we’re in the middle of a conversation with someone. We check our email while we’re engaged in a complex task that requires our full concentration. We scan our feeds even though we just checked them a minute ago. There’s increasing evidence suggesting that these disruptions make it difficult to do our best work, diminish our productivity, and contribute to a feeling of overwhelm.
It doesn’t help matters that trillion-dollar industries are dedicating some of their brightest minds and untold resources to coming up with newer and better ways to capitalize on this mismatch between our neurological response to new information and our current information-rich environment. We are at the mercy of tremendously powerful and well-designed systems crafted with the express purpose of interrupting us and capturing our attention.
The agricultural and commercial revolutions were clearly net gains for humanity, making it possible for more people to live better lives than ever before, and it would be both wrongheaded and fruitless to suggest that we should somehow turn back the clock on these advances. Similarly, the information revolution is helping us to make great strides as a species, and I’m tremendously grateful for it.
But just as we need to be more thoughtful about our caloric consumption, delaying gratification of our impulsive urges in order to eat more nutritiously, we need to be more thoughtful about our information consumption, resisting the allure of the mental equivalent of “junk food” in order to allocate our time and attention most effectively. So what can we do?
Recognize the issue. Awareness is rarely sufficient to drive change, but it’s always the necessary first step. How often do you check your phone? Does this get in the way of other interactions? How often do you interrupt focused work to look at your inbox? Does this break your concentration or affect how long it takes to accomplish these tasks? How often do you scan various feeds? Does this result in wasted time? We face the marshmallow test constantly—are you passing or failing?
See the tools around us and exert some control over them. These interruptions are deliberately provoked by the designers of the tools we use. The best tools we use come to feel like features of the landscape or even extensions of our own body; we ultimately fail to see them as tools. What tools are you using? How are they interrupting you? How do they make it easy for you to interrupt yourself? What alarms and alerts might you disable? What limits might you place on the “convenient” features that contribute to these interruptions?
Manage our emotions and cultivate our capacity for mindfulness. No technical interventions will be enough unless we’re also willing to work on ourselves. Emotions are at the heart of this dynamic—the excitement and anxiety generated by new information are the fuel that drives us to interrupt ourselves over and over again, and any changes we seek to make will be contingent on our ability to access, understand and leverage these emotions rather than being impulsively driven by them. As I’ve written before, there’s no simple prescription for emotion management, but there are steps we can take: Regular physical activity and sufficient sleep are critical. Reflecting on our experiences through journaling or coaching conversations can help us understand and make sense of our emotional responses. And perhaps most importantly, even just a few minutes of meditation each day has been shown to have a powerful impact on our ability to sense our emotions and focus our attention.



You Need a Community, Not a Network
The internet is great for spreading information and rallying crowds, but you can’t mobilize people to collaborate and create something of lasting value simply by connecting them via the web. To get serious results from a network, you need commitment and a continuity of relationships among the participants.
To borrow language from the philosopher Avishai Margalit, the web is a “thin we” type of network. Participants tend to belong for individualistic reasons. They have little in common with other members, and they’re reluctant to do much for the network. A big goal requires a “thick we” network — a community of people who feel responsible for collaborating toward a shared purpose that they see as superseding their individual needs. Members of a community — as opposed to a simple network — expect relationships within the group to continue, and they even hold one another accountable for effort and performance.
When networks develop into communities, the results can be powerful. Look at the accomplishments of Wikipedia contributors, open-source software developers who find and fix bugs in Linux, or doctors who help one another with difficult diagnoses as part of the Sermo social network.
Now more than ever, businesses are reaching beyond the boundaries of their organizations, tapping experts, customers, and, more broadly, “the crowd” to build new products, services, and other solutions. But as corporate leaders join this trend, they should be mindful of the two types of networks. High performance comes when they are able to turn a thin, no-obligation, ephemeral network into something more — a real community.
Creating a community means getting participants to think and act collectively, to set aside (or find ways to align) their own interests in favor of a common purpose, and to accept a degree of accountability. Leaders of many nonprofits have shown themselves to be adept at this kind of community building. Consider Ashoka, which provides start-up funding and support for social innovators worldwide. Founder Bill Drayton led the way in fostering collaborative entrepreneurship for collective social change.
Building community in the for-profit world, which lacks a grand social mission, might seem daunting. But a number of companies have done it, and others are experimenting every day. In the 1990s, Fast Company magazine expanded rapidly, despite limited capital, because its founders inspired people to build a movement of change in the workplace. Instead of passive subscribers, it created “communities of friends” who generated new story ideas and collaborated with Fast Company staff to develop themes of the “new management revolution.”
At Best Buy in the early 2000s, Julie Gilbert was in charge of an internal network aimed at developing female leaders. Like its rivals, Best Buy tended to sell consumer electronics from a strongly male perspective. Eager to change that, Gilbert encouraged her group to reach out to women customers. Inspiring these consumers to help make electronics retailing more women-friendly, they gave what was at best a loosely connected network of customers some of the trappings of a community. Network members readily participated in focus groups and collaborated with Gilbert’s group to develop new programs and store designs. The initiative helped Best Buy increase sales and reduce return rates among female customers.
In 2008, Pfizer was struggling with an unwieldy and costly model of obtaining outside legal help. As Linda Hill and her colleagues have described, the company needed expert knowledge as it navigated the technically and ethically challenging waters of pharmaceutical development and marketing. But the specialist law firms that it employed rarely cooperated with one another or even shared much information. New general counsel Amy Schulman urged the firms to look past their immediate concerns and work toward cross-boundary collaboration. The goal was to help one another get smarter, while collectively providing superior solutions for Pfizer.
Under the new Pfizer Legal Alliance, the 19 participating firms agreed to switch from billable hours to an annual flat fee. The new structure encouraged cooperation, a shared governance model that built common understanding, and a new electronic communication platform that facilitated exchange — all of which benefited everyone, including Pfizer. Although each firm’s participation still depended on payments from Pfizer, the larger accomplishments of the alliance were possible only as the lawyers worked together for the common goals.
What can leaders do to turn a network into a community? First, those who do this well don’t let their lack of formal authority over participants keep them from actively leading. Thin-we networks can self-organize, but a sustained community requires strong guidance and nurturing so that people align with the larger goals. Effective community builders put the larger purpose front and center, stretching people with a higher vision.
Second, they couple the inspirational message with a real-world focus on performance. They keep everyone attuned to measurable results, achieved with collective action. Meanwhile, they’re pragmatic enough to engineer ways for people to gain some direct personal benefits along the way. These include gaining new professional skills and conventional networking, as well as the social fulfillment of being part of a group working together.
Finally, and most important, they make participants the real heroes. They push the responsibility for achievement firmly onto the shoulders of others. The more responsible participants feel, the more energy and drive they’ll offer — and the better they’ll feel about participating. There’s nothing so motivating as feeling that you’re really needed for something important.
In our emerging super-networked economy, the next-generation leaders will increasingly be mobilizers, not directors. These leaders will define their role not as “me” but “we” — and understand that when it comes to “we,” the thicker the better.



Rethinking the Bank Branch in a Digital World
More US bank branches closed in 2013 than ever before. More than 85% of retail banking transactions are now digital. The bank branch is “going south,” mobile-banking entrepreneur Brett King said to CNBC. “And there’s no reason to assume we’ll see a resurgence of activity at the branch—the mobile app is the nail in the coffin.”
So are we witnessing the death throes of brick-and-mortar retail banking? Will banking soon be like the business of selling recorded music—almost all done online?
In our view, no. Rather than going the way of Tower Records, leading banks are reinventing themselves with innovative mashups of digital technologies and physical facilities, a combination we call “digical.”
Here’s why. Banking isn’t like selling records or music CDs. A bank’s products and services are often complicated. Security and trust are paramount. Many people like to deal with a banker in person for certain kinds of transactions, such as taking out a mortgage or even just starting a banking relationship. Branches in the US accounted for roughly three-quarters of primary new account openings in 2013.
That may be why some leading banks are bucking the branch-closing trend. JPMorgan Chase, although a digital innovator, has opened 600 branches since January 2010 while closing 325. USAA Bank, which has a long history of industry-leading service without relying on a branch network, has steadily been opening physical service centers at key locations.
Of course, banking customers do want all the convenience of digital, such as electronic bill payments and instant deposits via smartphone. USAA Bank offers a voice-activated virtual assistant for mobile devices with a “tap to talk” feature that connects customers to a call center and allows agents to see what the customer was doing just before he or she called. Customers also expect seamless integration of digital and physical capabilities, so that every transaction in one channel shows up instantly everywhere else.
The most innovative banks have learned to provide customers with just this kind of fusion. Their transformation often starts with moving current capabilities online to make banking more convenient. It then proceeds along two complementary paths:
Creating “signature” experiences and new sources of value. Banks are beginning to focus on one or two key omnichannel experiences, such as buying a car or a home, as a way of engaging customers and setting themselves apart from the pack. Commonwealth Bank of Australia (CBA) collaborated with multiple listing database Domain.com.au to develop a mobile app that can search any house in Domain’s database for visual and written details. Customers can click through to get advice and start the mortgage application online; the bank’s mortgage advisors will then book the required in-person appointments, which many people prefer to do in a branch. CBA’s mobile payments capability allows customers to manage their mortgage balance through any channel, including mobile, online and ATM.
Reconfiguring the branch network. Many banks are reformatting branches rather than closing them outright. A recent Bain & Company study found that a number of leading banks are creating “hub” flagship branches that serve as showrooms for complex product sales and venues for providing trusted expert advice. “Spoke” branches provide basic services and sales capabilities, including video links to product specialists at the central office. Other banks are experimenting with “pop-up” branches or branches that combine a bank with a café. Fast-growing Wright-Patt Credit Union in Dayton, Ohio, has installed video tellers in its branches that are synced to the bank’s online platform. Customers using these tellers can transact routine business an estimated 33% faster than in the past.
The conclusion? Physical banking is evolving rapidly but not disappearing. Branches may be fewer in number, but they will be more useful and efficient, and banks without branches are likely to find themselves at a competitive disadvantage. Banking isn’t unique in this regard—in fact it’s quite typical. A Bain study of 20 broad industrial categories found that all have been affected by digital technologies to a greater or lesser extent. But only a few subcategories, such as selling compact discs in a store, have essentially been eliminated by digital.
The lesson for senior leaders is clear. When you hear the “everything’s going digital” alarm, ask yourself what your customers really want. Chances are they’re going to be seeking products and services that combine digital advances with the time-tested advantages of physical interactions.



Prevent Your Strategy Offsite from Being Meaningless
I was facilitating the two-day executive offsite of a mid-sized technology company. The goal of the meeting was to solve major issues and identify potential opportunities that would guide their efforts, as a company, for the next year.
We were halfway through the first day and, while everything was going according to plan, I couldn’t shake this nagging feeling that something wasn’t right. I struggled to put my finger on it.
I took in the scene. The CEO and all his direct reports were sitting around the board room table and everyone was engaged. People were being respectful, listening to each other without interrupting, asking clarifying questions, and moving efficiently from one presentation to the next. Everyone seemed satisfied; the presentations and conversations were useful and clear.
Because everyone seemed satisfied, I was hesitant to intervene. Still, something was off. I walked around the room to try to get different perspectives, to see the meeting through the eyes of each person. Finally, when I got to the CEO, and imagined the meeting from his vantage point, it clicked.
Taken one by one, each presentation was tight, well thought out, and deftly delivered. But if you took a bird’s eye view, you’d see utter chaos.
Each person, representing a different part of the company, had his or her own priorities, concerns, agenda, and goals which weren’t aligned with – or in some cases were directly opposed to – the next person’s. No one had the whole company perspective in mind. No one was working within a single, overarching, companywide strategy.
If I were to graphically depict this meeting, with each person’s objectives, projects, and priorities symbolized by little arrows, it would look like a scatter of small arrows pointing in every direction.
Each leader was thinking about his or her arrows – their piece of the company – but no one was focused on the company as a whole.
If each leader were running an independent company, it would be fine. But they weren’t. A decision in R&D affects Engineering, Manufacturing, Marketing, and Sales. And if Sales decides to focus on different customers, that affects Support as well as Marketing and even HR – whom you hire and how you manage and pay them might be different.
Here’s the thing: these were all smart, competent, highly educated, experienced leaders. It’s not that they didn’t understand the importance of a solid unified strategy. It’s not even that they didn’t have one. It’s just that, amid all the day-to-day challenges and tempting opportunities, they were neglecting it.
What they needed was a reminder.
After the next presentation was complete, I asked to pause the meeting and I drew the random set of small arrows on a flip chart. Then I drew a single, big arrow through the middle of them, cutting across the scattered field of small ones.
“All these presentations make perfect sense and represent sound strategies if taken independently,” I said, “But they’re not aligned as an integrated whole with the strategy that we articulated so carefully many months ago.”
“I want to remind us of our big arrow: the direction we deliberately chose to move as a company. Our overarching strategy. The big arrow represents where the company is going. It contains our priorities, our brand, and the definition of our success. We need to review the decisions we’re making from that perspective, so the little arrows align with the big arrow. We need to identify what’s distracting and what’s strategic.”
I started crossing out some arrows and redirecting others. “The implications of this are real; some projects will be stopped, others changed drastically, and some, possibly, moved a bit.”
It got so messy that I just ripped off that page and drew a new, clean image on the flip chart: one big arrow, with all the little arrows aligned alongside it.
“This is how we should be moving forward as a senior leadership team, together, supporting each other and the larger company.”
They agreed to review the basic tenets of their strategy. We discussed their brand, the kind of customers they wanted to serve and acquire, the products they were optimally positioned to engineer and manufacture, and the outcomes they wanted to produce over the next year.
The entire conversation took 15 minutes.
It went so quickly because they weren’t designing a new strategy, they were just reminding themselves of the well-thought-out strategy they had already developed.
Then we got to the most challenging work: making decisions. It’s challenging because it demands courageous choices about priorities. Which opportunities are we willing to forgo? Which problems can we not afford to ignore?
They nudged and shifted their little arrows in light of the big arrow. A few projects got cancelled as distractions. Some of the conversations were heated and some people got defensive. But the conversation was tremendously productive, always respectful, and clearly focused on the big arrow.
As a line often attributed to Lewis Carroll’s Alice in Wonderland puts it, “If you don’t know where you are going, any road will get you there.” The challenge for leaders is that, while we often know where we’re going, it’s easy to get distracted. Two things are helpful to stay on track:
The big arrow. Every time you meet to discuss opportunities, address challenges, solve problems, or think through a particular decision, spend a few minutes revisiting the big arrow first. Start every strategy meeting with your big arrow. Remind yourself of the overarching priorities, direction, and boundaries of the company as a whole.
The big arrow sets the direction — and forms the boundaries — to answer the critical question: Where should we spend our time? And it serves as a decision making filter to assess the viability and productivity of each decision: Does this solution help us move forward in the overarching focus of the organization?
Emotional courage. Making the hard, sometimes painful, decisions required to align your little arrows with the company’s big arrow is one of the most important jobs of a leader. It’s also the most emotionally challenging. Can you say no to that tempting opportunity – you know, the one that your customers will love and will clearly be profitable – if it doesn’t align with your big arrow? Can you give up something that’s clearly in your best interests — it might even increase your bonus at the end of the year – if it’s not in the best interests of the company?
This is hard, but that’s what leadership calls us to do. And, ultimately, it’s what will make everyone – you, your colleagues, and the company as a whole – most successful.



LEGO’s Girl Problem Starts with Management
This summer, LEGO launched a minor revolution. It introduced professional women – scientists, no less – into its latest toy line aimed at girls. The new figurines – called “minifigs” by Lego die-hards – feature a female palaeontologist, an astronomer, and a chemist. They sold out on the first day.
This, after years of mediocre “pink” products that did little to grow Lego’s share of the girls’ toy aisle.
Why did it take until 2014 for the world’s second-largest toy maker to offer girls (and their toy-buying parents) products they might actually want? (After all, even Barbie has been an astronaut since 1965.)
Perhaps it has something to do with the profile of LEGO’s management team, composed almost entirely of men. The three-person board of the privately-held company is all men, led by CEO Jørgen Vig Knudstorp. The 21-person corporate management team has 20 men and one woman – and she’s in an internally-facing staff role, not connected to the customer base or product development. When your leadership isn’t gender-balanced, it’s tough to have a balanced customer base. The new ‘Research Institute’ range was proposed by geoscientist Ellen Kooijman on one of the company’s crowd-sourcing sites. But it raises the question: is there really no one inside the company who might have come up with the radical idea of having women scientists feature in a 21st-century toy company’s line?
The debate has been raging about toys and gender for decades. Should toys be gender neutral, or targeted differently at boys and girls? Lego’s somewhat tumultuous journey here will be familiar to anyone in a company struggling to tap into the female half of the market.
Family-owned LEGO toys used to be staunchly gender neutral – as self-professed Lego geek David Pickett exhaustively demonstrates. The early advertisements featured both boys and girls playing with identical toys. When minifigs were first introduced in the late 70s – the era of androgyny – gender was downplayed, and the 80s were a golden age for the company. But between the late ‘80s and early ‘00s, the company launched a stream of product lines aimed at girls, none particularly successful and most heavily anchored in pink. These weren’t toys that boys and girls could play with – the company was now making one set of toys for boys (which were often more interesting and challenging to build) and one set of pink, simplified products for girls, including a jewelry line and dollhouses. As Pickett points out, many of these pieces weren’t even compatible with the majority of Legos (i.e., the boy Legos) – and interchangeability is the whole value proposition of the Lego system.
Things finally got so bad that in 2004, Lego almost went bust. The first non-family CEO took over and turned the company around. LEGO rebounded through disciplined management – which included cutting back the unpopular line of pink toys, and making its products more macho. Lego’s customers were 91% male by 2012, when the company released girl-targeting Lego Friends after “four years of research.” What are Lego Friends? Essentially a gaggle of girls who live in Heartlake City, wear a lot of pastels, and hang out in a salon or at a pool. And, like Lego’s unsuccessful line of pink toys from the 90s, these figures are much less functional than the boys’ toys. In a sad and metaphorical twist, the male minifigs can drive cars, run, and hold tools. The female minidolls can’t move their hands. They can only sit, stand, or bend over.
And yet the company multiplied sales to girls threefold. So why argue with success?
For the same reason that any male-dominated company’s ‘pink’ strategies are limited in both scope and impact. You can build a female niche, you can make money off it – but wouldn’t the company make far more money if it doubled its existing market size, rather than incrementally improving a strategy that is not hugely popular with a wide swathe of public opinion, parents, and educators?
Moreover, the girls who want to play with dolls and accessories are probably not Lego’s target market. Doesn’t Barbie have a lock on those girls? Why not create something for the “other” girls entirely ignored by the majority of the tyrannically pink toy aisle? (In business, this is what we call a market opportunity.) Perhaps because the male-dominated LEGO company doesn’t see girls as a massive market containing a multiplicity of profitable niches – they see ‘girls’ as a single niche market in and of themselves. This is what the phone companies like Siemens and Nokia used to do with their range of pink ladies’ phones before Apple blew them out of the water with a gender ‘bilingual’ iPhone that integrated the preferences of both genders to make their market 50/50 gender balanced. That’s where the gold mine lies.
So let the immediate, sold-out market response to the timid introduction of the three new figurines be a message to the gentlemen at the table. Be bold! Innovate! Think outside last century’s box. Invite some of these innovative female Lego-lovers onto your board or into your top team.
Don’t hold your breath, though. Despite its first-day sold-out success, LEGO has decided not to continue the Research Institute line. It was only a “limited edition.” So girls, back to the pool. The guys in this boardroom don’t seem to want to give you any ideas… let alone seats at the table.
LEGO’s corporate mantra is “only the best is good enough.” In 2014, as the most ambitious, educated, and employed generation of girls the world has ever seen heads back to school, I suspect most girls would not agree that the company is living up to its own motto.



People’s Creative Output Depends on the Initial Stimulus
In a series of experiments, the novelty of people’s creative output was affected by the novelty of the raw materials they were initially exposed to, says Justin M. Berg of The Wharton School. For example, students who were asked to come up with product ideas for a university bookstore tended to produce ideas that were rated higher in novelty (3.82 versus 3.05 on a 7-point novelty scale) if they were first shown a fishing pole rather than a whiteboard. Conversely, participants’ output tended to be more useful and less novel if they initially saw less-novel items, Berg says.



How Cities Are Using Analytics to Improve Public Health
From clean water supplies to the polio vaccine, the most effective public health interventions are typically preventative policies that help stop a crisis before it starts. But predicting the next public health crisis has historically been a challenge, and even interventions like chlorinating water or distributing a vaccine are in many ways reactive. Thanks to predictive analytics, we are piloting new ways to predict public health challenges, so we can intervene and stop them before they ever begin.
We can use predictive analytics to leverage seemingly unrelated data to predict who is most susceptible to birth complications or chronic diseases or where and when a virulent outbreak is most likely to occur. With this information, public health officials should be able to respond before the issue manifests itself – providing the right prenatal treatments to mitigate birth complications, identifying those most likely to be exposed to lead or finding food establishments most at risk for violations. With this information, data becomes actionable. Predictive analytics has the potential to transform both how government operates and how resources are allocated, thereby improving the public’s health.
While the greatest benefits have yet to be realized, at the Chicago Department of Public Health (CDPH), we are already leveraging data and history to make smarter, more targeted decisions. Today, we are piloting predictive analytic models within our food protection, tobacco control policy, and lead inspection programs.
Recently, CDPH and the Department of Innovation and Technology engaged with local partners to identify various data related to food establishments and their locations – building code violations, sourcing of food, registered complaints, lighting in the alley behind the food establishment, nearby construction, social media reports, sanitation code violations, neighborhood population density, complaint histories of other establishments with the same owner, and more.
The model produced a risk score for every food establishment, with higher risk scores associated with a greater likelihood of identifying critical violations. Based on the results of our pilot and additional stakeholder input, we are evaluating the model and continue to make adjustments as needed. Once it is proven successful, we plan to use the model to help prioritize our inspections and, by doing so, improve food safety.
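CDPH has not published the model’s internals, but the core mechanics of a violation-risk score – combining establishment features into a probability, then ranking establishments by it – can be sketched in a few lines. The feature names, weights, and establishments below are illustrative assumptions, not the department’s actual model.

```python
import math

# Illustrative feature weights (assumed, not CDPH's actual model):
# positive weights increase the predicted risk of a critical violation.
WEIGHTS = {
    "past_sanitation_violations": 0.8,
    "registered_complaints": 0.5,
    "nearby_construction": 0.3,
    "days_since_last_inspection": 0.01,
}
BIAS = -3.0

def risk_score(features):
    """Logistic risk score in (0, 1) from an establishment's features."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def prioritize(establishments):
    """Sort establishments by descending predicted risk for inspection."""
    return sorted(establishments, key=lambda e: risk_score(e["features"]),
                  reverse=True)

inspect_queue = prioritize([
    {"name": "Cafe A", "features": {"past_sanitation_violations": 4,
                                    "registered_complaints": 3}},
    {"name": "Diner B", "features": {"days_since_last_inspection": 90}},
])
```

In a real system the weights would be learned from historical inspection outcomes rather than hand-set, but the output is the same kind of ranked queue the article describes.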
To be clear, this new system is not replacing our current program. We continue to inspect every food establishment following our current schedule, ensuring the entire food supply remains safe and healthy for our residents and tourists. But predictive analytics is allowing us to better concentrate our efforts on those establishments more likely to have challenges. In time, this system will help us work more closely with restaurateurs so they can improve their business and decrease complaints. In short, businesses and their customers will both be happier and healthier.
Building on the work of the food protection predictive model, we developed another key partnership with the Eric & Wendy Schmidt Data Science for Social Good Fellowship at University of Chicago (DSSG) to develop a model to improve our lead inspection program.
Exposure to lead can seriously affect a child’s health, causing brain and neurological injury, slowed growth and development, and hearing and speech difficulties. These health effects carry through to educational attainment: lead exposure is a frequent cause of lower IQ, attention deficits, learning and behavior problems, and school underperformance. Furthermore, we’ve seen a decrease over the past several years in the federal funding that pays for our inspectors to identify homes with lead-based paint and clear them. But thanks to data science, we are now engaging on a project where we can apply predictive analytics to identify which homes pose the greatest risk of causing lead poisoning in children – based on home inspection records, assessor value, past history of blood lead level testing, census data, and more.
Predictive models may help determine the allocation of resources and prioritize home inspections in high lead poisoning risk areas (an active approach), instead of waiting for reports of children’s elevated blood lead levels to trigger an inspection (the current passive approach). An active predictive approach shortens the amount of time and money spent in mitigation by concentrating efforts on those homes that have the greatest risk of causing lead poisoning in children.
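The active approach described above reduces to a simple allocation step once a model has produced per-home risk predictions: spend a fixed inspection budget on the highest-risk homes first. A minimal sketch, with invented addresses and scores standing in for real model output:

```python
# Hypothetical home records: (address, predicted lead-hazard risk) pairs.
# In practice the risk would come from a model trained on inspection
# records, assessor value, blood lead level history, census data, etc.
homes = [
    ("12 Oak St", 0.91),
    ("48 Elm Ave", 0.22),
    ("7 Pine Ct", 0.67),
    ("33 Birch Rd", 0.05),
]

def active_inspection_plan(homes, budget):
    """Active approach: inspect the highest-risk homes up to a fixed
    budget, instead of waiting for reports of elevated blood lead
    levels to trigger an inspection (the passive approach)."""
    ranked = sorted(homes, key=lambda h: h[1], reverse=True)
    return [address for address, _ in ranked[:budget]]

plan = active_inspection_plan(homes, budget=2)
```

The gain over the passive approach is that remediation can happen before any child’s blood test comes back elevated.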
Incorporating predictive models into the electronic medical record interface will serve to alert health care providers of lead poisoning risk levels to their pediatric and pregnant patient populations so that preventive approaches and reminders for ordering blood lead level lab tests or contacting patients lost to follow-up visits can be done.
There is a great opportunity in public health to use analytics to promote data-driven policies. We need to use our data better, share it with the public and our partners, and then leverage that data to create better policies, systems and environmental changes.
Public institutions should increasingly employ predictive analytics to help advance their efforts to protect the health of their residents. Furthermore, large, complex data sets should be analyzed using predictive analysis for improved pattern recognition, especially from diverse data sources and types, ultimately leading to significant public health action. For the Chicago Department of Public Health, predictive analytics is not the future, it is already here.



September 12, 2014
Why the Apple Watch Is a Gift to the Swiss Watch Industry
The launch of the Apple Watch this week has raised questions about its impact on the Swiss watch industry. Contrary to Apple designer Jony Ive’s remarks that the Swiss watch could be in trouble, there are several reasons why the Swiss have nothing to fear from Apple’s success.
First, the Apple Watch makes wearing a watch relevant to a new generation of future watch collectors. I often ask other professors around the world how many of their students wear watches. The answer is always the same: “very few.” For many young adults who have grown up using their cell phone to tell time, the idea of wearing a watch is the equivalent of sending a telegram or storing data on a floppy disk.
The Apple Watch introduces the concept of wearing a watch to many of Apple’s 18 to 35 target market. If it takes off, it is likely that these buyers will eventually consider purchasing other types of watches for events later in life. Talk to any Swiss watch executive today and they will tell you many of their best clients started out collecting Swatches in the 1980s, but eventually started purchasing more expensive brands such as Rolex, Blancpain, Breguet, or Audemars Piguet later in life. Like the Swatch, it is quite possible that the Apple Watch could spark a new generation of watch aficionados and collectors.
A similar phenomenon has recently occurred in the book industry. When Amazon introduced its Kindle eBook reader in 2007, many analysts predicted it marked the end of traditional bookselling. However, over the last five years independent bookstores have seen a resurgence in sales and in the number of stores, all the while selling traditional printed books. One of many reasons for this revival is that booksellers have benefited from continued demand for children’s books, which remain near the top of the fastest growing segments in the publishing industry. When parents and grandparents buy books to read to children at bedtime, they introduce the printed book to a new generation of potential users. As these children have grown up, data show they have been less likely to abandon the printed book in favor of the Kindle. In fact, most readers are happy to read both.
Second, the Apple Watch is likely to be a complement rather than a competitor to the Swiss watch. The Apple Watch is chock full of technological wonders that would be the envy of Dick Tracy, while Swiss watches are primarily luxury goods and status symbols. Apple is confident it will be able to reinvent its core technology every 6 to 12 months before competitors like Samsung attempt to render it obsolete. Swiss watchmakers, on the other hand, see themselves as craftspeople producing wearable art meant to be passed down from generation to generation.
The Swiss stopped competing for technological watchmaking supremacy in the 1980s, when Japanese manufacturers like Casio and Seiko began producing quartz watches that were far cheaper and more accurate than Switzerland’s handmade mechanical timepieces. Within a decade of inventing the first quartz watch, the Swiss saw their share of global watch exports fall from 45% to 10%. By 1983, two-thirds of all watch industry jobs in Switzerland had vanished and over half of all Swiss watchmaking companies had gone bankrupt.
Thanks to the efforts of individuals like former Swatch Group chairman Nicolas G. Hayek and LVMH watch president Jean-Claude Biver (who oversees Hublot, Tag Heuer and Zenith), the Swiss watch industry cleverly repositioned its mechanical wonders as luxury goods. Unlike the $350 price tag suggested for a new Apple Watch, most of the Swiss watch industry’s meteoric growth over the last two decades has come from watches priced well over $10,000. The Swiss watch industry no longer competes on the same dimensions that will drive Apple Watch sales.
Third, Apple and Swiss watchmakers have this in common: they are deeply committed to connecting their product with the consumer on a personal level. During Tuesday’s launch event, Apple CEO Tim Cook touted the Apple Watch as the “most personal device we’ve ever created.” The beauty of the Apple Watch is that it can track people’s micro-movements and provide instant data to help wearers make sense of how they engage with the world around them. Similarly, while conducting research on the re-emergence of the Swiss watch industry, I interviewed a prominent Swiss watch CEO who said, “Your watch is part of you. The watch is you. It shows the type of personality you have: Are you elegant? Unique? Rich? Arrogant? Sporty? All these elements are transmitted through your watch.”
The Swiss watch industry can be confident that a sufficient number of well-to-do and tech-savvy Apple Watch wearers will continue to pine for the highest end handmade timepieces.
The Apple Watch may keep perfect time, but it is not timeless.



3 Reasons to Kill Influencer Marketing
Marketers like to repeat the quote, “I know I waste half of my ad budget, I just don’t know which half.” No one knows who first said it—it’s been attributed to a number of people—but the fact that it gets repeated so often is testament to how strongly it resonates.
So it shouldn’t be surprising that marketers like the idea of “influentials,” seemingly ordinary people who determine what others think, do and buy. A recent study of 1300 marketers found that 74% of them planned to invest in influencer marketing over the next 12 months.
However, there’s good reason to believe that it’s all a waste of time and effort. While the idea of influentials may be intuitively convincing, there is very little, if any, evidence that they can actually improve performance—or even exist at all. So before you embark on another influencer campaign, consider these three reasons why it’s a waste of time and money.
1. It’s the wrong metaphor. Malcolm Gladwell is probably the person most responsible for the massive interest in influencer marketing. It was he who, in his blockbuster book, The Tipping Point, laid out his now famous “Law of the Few,” which he stated as:
The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts.
The idea of influentials makes intuitive sense because we all know people like the ones Gladwell described in his book: “Connectors” who seem to know everyone, “mavens” who possess deep domain knowledge and “salesmen” who have the gift of gab. We’ve seen how they’ve influenced us, so it seems plausible that they play a role in spreading ideas.
Yet social epidemics aren’t local phenomena. They are long viral chains. Just because someone might be good at getting an idea across, doesn’t mean that others are more likely to share the idea. And if an idea doesn’t get shared, it doesn’t travel far.
A more accurate metaphor would be a wave at a stadium. What “special traits” would it take to affect thousands of people throwing their arms up in sequence? Could a 400-pound man do it? If Jack Nicholson refused to stand up at a Lakers game, would a wave stop in its tracks? Not likely. Collective behavior requires a collective.
2. Science finds little evidence to support influencer marketing. While Gladwell’s book certainly did much to popularize the notion of influentials, the idea is not exactly new. In fact, it goes back to research done by Katz and Lazarsfeld, two prominent sociologists, in the 1940s and ’50s. Yet even in their original study, they found that influence was highly contextual.
Recent research raises even more serious questions about the influentials hypothesis. In one study of e-mails, it was found that highly connected people weren’t necessary to produce a viral cascade. In another, based on Twitter, it was found that they aren’t even sufficient. So called “influentials” are only slightly more likely to produce viral chains.
Duncan Watts, a researcher at Microsoft who co-created one of the most important models of how social networks function, says, “The influentials hypothesis is a theory that can be made to fit the facts once they are known, but it has little predictive power. It is at best a convenient fiction; at worst a misleading model. The real world is much more complicated.”
The empirical evidence is clear: It’s time to debunk the myths about influentials. Unless someone, somewhere, can produce evidence that these “special” people can further our marketing campaigns more efficiently than other approaches, we shouldn’t waste money chasing them.
3. Recent events should remind us how precarious influence is.
So far, we’ve seen that the idea of influentials isn’t as intuitively appealing as it first seems. We’ve also seen that scientific evidence contradicts the viability of influencer marketing. Yet intuition is always fallible and scientific studies, even if rigorously and carefully undertaken, can be wrong. Real life doesn’t always align with what happens in controlled experiments.
But there is another reason to doubt the idea of influentials: recent events and common sense. We’ve seen powerful social epidemics erupt in the Arab Spring, the Euromaidan protests in Ukraine and the 2004 Orange Revolution that preceded it. Small, loosely connected groups overthrew powerful regimes.
Now, it hardly makes sense that Hosni Mubarak and Viktor Yanukovych, who controlled the media and the major organs of power, lacked influence or access to influential people. Yet they were powerless to stop the street protests that eventually brought about their downfall.
It is, of course, possible that the protestors were driven by people with “rare social gifts” that trumped the dictators’ more traditional influence, but then why did those gifts fail them in the aftermath? In Egypt, the Muslim Brotherhood, not the largely liberal protesters, prevailed in the subsequent elections. In Ukraine, the Pora movement never evolved into a political force.
****
The fundamental problem with influencer marketing is not that some people aren’t more influential than others, but that there is little, if any, evidence that influencer strategies—other than celebrity endorsement—are viable. Yet all is not lost. There is a way to consistently increase the likelihood of viral chains.
In 2001, Jonah Peretti had an e-mail exchange with Nike that went viral on the Web. He was fascinated and, a year later, he met Duncan Watts at a conference. The two struck up a friendship and then a collaboration. They did a number of projects together that had promising results, which they published in Harvard Business Review.
Their approach, which they called big seed marketing, does not rely on identifying a small number of special people, but rather on harnessing the power of a large number of ordinary people. By reaching a mass audience, and encouraging them to share, you increase the likelihood that a viral chain emerges and, even if it doesn’t, you still improve performance.
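The arithmetic behind big seed marketing is straightforward: if each person who receives a message passes it on to an average of r others, and r stays below the viral threshold of 1 (so cascades eventually die out), a seed audience of n people yields roughly n / (1 − r) total impressions. A quick sketch, with illustrative numbers rather than figures from Watts and Peretti’s article:

```python
def big_seed_reach(seeds, r):
    """Expected total reach when each recipient passes a message to an
    average of r others (r < 1, so cascades are subcritical): the
    geometric series seeds * (1 + r + r^2 + ...) = seeds / (1 - r)."""
    if not 0 <= r < 1:
        raise ValueError("big-seed math assumes a subcritical r in [0, 1)")
    return seeds / (1 - r)

# A large seed with modest sharing: 1M initial impressions at r = 0.5
# doubles to 2M total, without any "special" people in the chain.
mass_reach = big_seed_reach(seeds=1_000_000, r=0.5)
```

The point of the formula is that the multiplier comes from ordinary sharing by many people, so growing the seed is a more reliable lever than hunting for rare super-spreaders.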
Peretti went on to co-found Huffington Post, which was sold to AOL for $315 million in 2011. His second company, Buzzfeed, is now valued at $850 million. In his extended interview with Felix Salmon, he credits not influentials, but “a constellation of connected things” for making his articles go viral so consistently.
So if you want things to spread, forget about special people with “rare qualities.” Be interesting, reach as many people as you can and encourage them to share.



Your Company’s Energy Data Is an Untapped Resource
Most companies are unprepared for the emerging revolution in predictive energy analytics. In fact, many readers’ eyes will have already glazed over at the preceding sentence, with the natural initial reaction that energy-related data isn’t relevant to their jobs.
But what happens when every single light fixture in all of your company’s facilities becomes a networked mini-computer with an array of sensors? Who at your company will be put in charge of turning buildings operations from a cost center to a revenue center? These examples are not hypothetical capabilities; these are now real options for companies. And yet few corporate managers are asking such questions, much less taking advantage.
Cost Savings
Chances are, energy-related spending has a significant impact on your company’s profitability. There are over five million commercial and industrial facilities in the U.S. alone, according to the U.S. Energy Information Administration, with a combined annual energy cost of over $200 billion. The U.S. EPA estimates that around 30% of that energy is used inefficiently or unnecessarily. And many companies also face additional energy-related costs from their commercial vehicles, of which there are over 12 million in operation in the U.S. according to IHS, incurring fuel costs in the billions annually.
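The scale of the opportunity implied by those figures is easy to work out:

```python
# Back-of-envelope arithmetic from the figures above.
annual_energy_cost = 200e9   # combined U.S. commercial/industrial energy spend ($)
wasted_fraction = 0.30       # EPA estimate of inefficient or unnecessary use

# Roughly $60 billion per year in potential savings nationwide.
potential_savings = annual_energy_cost * wasted_fraction
```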
So there are some big potential savings out there to be gained, but for most companies the responsibility for capturing them is relegated to facilities and fleet managers. Many of these managers are focused more on productivity and safety goals than on energy savings, and they are rarely allocated budgets to acquire new energy-saving systems even when the paybacks would be compelling. And of course, few such managers have a background in information technology.
But falling computing and networking costs over the past few decades have opened up a host of new ways that data and IT can be applied to drive significant cost savings in company-owned buildings and vehicle fleets. Startups like First Fuel and Retroficiency are able to perform “virtual energy audits” by combining energy meter data with other basic data about a building (age, location, etc.) to analyze and identify potential energy savings opportunities. Many Fortune 500 companies have also invested in “energy dashboards,” such as those offered by Gridium and EnerNOC among numerous others, which give them an ongoing look at where energy is being consumed in their buildings and thus suggest ways to reduce usage.
Many companies use telematics (IT for vehicles) to track their fleets for safety and operational purposes, and some startups are now using these capabilities to also help drive fuel savings. XLHybrids, for instance, not only retrofits delivery vehicles with hybrid drivetrains for direct fuel savings, they also provide remote analysis to help predict better driving patterns to further reduce fuel consumption. Transportation giants like FedEx and UPS already use software-based optimization of fleet routes with cost savings in mind.
Operational Improvements
The benefits of tracking energy usage aren’t limited just to energy savings. Because energy usage is an integral part of all corporate facilities and operations, the data can be repurposed for other operational improvements.
Take lighting, for example. Boston-based Digital Lumens offers fixtures for commercial and industrial buildings that take advantage of the inherent controllability of solid-state lighting, by embedding intelligence and sensors and adjusting consumption based upon daylight levels, occupancy, and other inputs to drive energy savings of 90% or more. But along the way to achieving these direct energy cost reductions, many of their customers find additional benefits from having a network of data-gathering mini-computers all over their facilities. For example, manufacturers and warehouse operators who’ve installed Digital Lumens systems have the ability to generate “heat maps” showing which locations in their facilities get the most traffic, which allows the facilities managers to reposition equipment or goods so that less time is wasted by workers moving around unnecessarily. And now retailers are starting to leverage the same information to better position higher-margin product where traffic is highest within their stores.
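The heat-map idea above is, at bottom, just counting occupancy-sensor triggers per location. Digital Lumens hasn’t published its data format, so the grid cells and events below are invented for illustration:

```python
from collections import Counter

# Hypothetical occupancy events from networked light fixtures:
# each event records the (aisle, bay) grid cell whose sensor fired.
events = [
    ("aisle-1", "bay-2"), ("aisle-1", "bay-2"), ("aisle-3", "bay-1"),
    ("aisle-1", "bay-2"), ("aisle-3", "bay-1"), ("aisle-2", "bay-4"),
]

def heat_map(events):
    """Count motion-sensor triggers per grid cell."""
    return Counter(events)

def busiest_cells(events, n=1):
    """The n highest-traffic locations, where frequently picked goods
    (or, in retail, higher-margin product) might be repositioned."""
    return [cell for cell, _ in heat_map(events).most_common(n)]
```

A facilities manager querying `busiest_cells` over a month of data gets exactly the traffic picture the article describes, as a by-product of fixtures installed for energy savings.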
Another use of energy data is in predictive maintenance. When a critical piece of equipment breaks in a commercial setting, it can have a significant financial impact. If the refrigerator compressor breaks in a restaurant, for instance, it can force a halt to operations of the entire facility. But often, long before such equipment fully stops working, the early signs of a problem can be discerned in its energy usage signal. Startups like Powerhouse Dynamics and Panoramic Power are finding that their small-commercial customers get as much value out of such fault detection and predictive maintenance as they get out of the overall energy monitoring services their systems are designed to provide.
Don’t have a capital budget for energy savings projects? Other companies, like SCIenergy and Noesis, are now using predictive analytics to underwrite energy-efficiency loans and other creative financing that helps companies capture savings from day one, in some cases even guaranteeing system performance.
New Sources of Revenue
What really has the potential to radically change how corporate managers view predictive energy analytics, however, is how it can be used to turn existing “cost centers” into sources of new, high-margin revenue.
Electric utilities must keep the grid balanced at all times, and this challenge is only growing more acute. They can expensively purchase power from other sources at times of high demand, but it’s often better for them to avoid such peaks by reducing consumption when needed. Thus, many such utilities are willing to pay commercial customers to participate in so-called “demand response” or “frequency regulation” programs in which customers periodically reduce their electricity usage so the utility doesn’t have to bring another power plant online.
Imagine a big box retail store in the future: It has solar panels on the roof. A large-scale battery in the basement. Plus an intelligent load-control software system that deploys the battery’s power as needed, and also adjusts the air conditioning, lighting, and other energy-consuming devices in the building in incremental ways so that when such loads are shifted around minute to minute, no one in the building feels any impact on comfort or operations. The combination of these systems would not only reduce the facility’s bill from the local electric utility, it would also enable the building to automatically participate in that utility’s demand response program and generate revenue.
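The control logic in that scenario can be sketched as a single dispatch step: when the utility requests a curtailment, draw down the battery first, then trim flexible loads within a comfort constraint. The numbers and the 10% trim limit are illustrative assumptions, not Powerit’s actual algorithm:

```python
def dispatch(demand_kw, battery_kw, curtailment_request_kw):
    """Toy load-control step for a demand-response event: first discharge
    the battery, then trim flexible loads (HVAC, lighting) to meet the
    utility's requested reduction. Returns (battery_used, load_trimmed)
    in kW."""
    from_battery = min(curtailment_request_kw, battery_kw)
    remaining = curtailment_request_kw - from_battery
    # Assume at most 10% of current demand can be trimmed without
    # occupants noticing (an illustrative comfort constraint).
    trimmed = min(remaining, 0.10 * demand_kw)
    return from_battery, trimmed

# A 75 kW curtailment request against a 500 kW building with a 40 kW
# battery: the battery covers 40 kW, flexible loads cover the other 35 kW.
battery_used, load_trimmed = dispatch(demand_kw=500, battery_kw=40,
                                      curtailment_request_kw=75)
```

Because each successful dispatch earns a demand-response payment from the utility, this is the mechanism that turns the building from a cost center into a revenue source.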
Does this sound like a pipe dream? Seattle-based Powerit Solutions offers such intelligent automation today, and they already control 800 megawatts of load in the marketplace.
Unfortunately, most corporations aren’t making the necessary investments in energy data analytics — they’re not providing budgets or the cross-functional teams to identify the available cost savings, much less the new revenue opportunities. To be done right, integrating such solutions into the enterprise requires not just knowledge about buildings, but also IT and financial leadership. The effective “facilities management” team of the future will have all of these capabilities. Leading companies across all industries will have to start viewing energy data analytics as a core shareholder-value activity and prioritize it accordingly.
(Disclosure: Black Coral Capital, where I am a partner, is an investor in Digital Lumens, Noesis, and Powerit.)



