David McRaney's Blog, page 30

February 26, 2015

YANSS 044 – James Burke on the Coming Age of Scarce Scarcity and Abundant Abundance



The Topic: The Future


The Guests: James Burke and Matt Novak


The Episode: Download – iTunes – Stitcher – RSS – Soundcloud – Transcript




This episode is brought to you by The Great Courses. Order Your Deceptive Mind or another course in this special offer and get 80% off the original price.


Episode 44 is a rebroadcast of two interviews from episode 20, which was all about how we are very, very bad at predicting the future, both in our personal lives and as a species.


Thanks to your support on Patreon, you can now read a transcript of my interview with James Burke from that episode. More transcripts are on the way. I hope to add about four a month. This is a link to the James Burke transcript.


James Burke is a legendary science historian who created the landmark BBC series Connections, which provided an alternative view of history and change by replacing the traditional “Great Man” timeline with an interconnected web in which all people influence one another to blindly direct the flow of progress. Burke is currently writing a new book about the coming age of abundance, and he continues to work on his Knowledge Web project. In the interview, James Burke says we must soon learn how to deal with a world in which scarcity is scarce, we are more connected to our online communities than our local governments, and home manufacturing can produce just about anything you desire.


We also sit down with Matt Novak, creator and curator of Paleofuture, a blog that explores retrofuturism, sifting through the many ways people in the past predicted how the future would turn out, sometimes correctly, mostly not.


Together, Burke and Novak help us understand why we are so terrible at predicting the future and what we can learn about how history truly unfolds so we can better imagine who we will be in the decades to come.





Links and Sources:


The Episode: Download – iTunes – Stitcher – RSS – Soundcloud – Transcript


Show Transcript


Previous Episodes


Boing Boing Podcasts


Cookie Recipes


James Burke’s Connections


James Burke’s Connections – Book


James Burke’s Knowledge Web


Paleofuture at Gizmodo


Paleofuture Original Site

Published on February 26, 2015 13:15


February 11, 2015

YANSS 043 – The Science of Misremembering with Julia Shaw and Daniel Simons

The Topic: Misremembering


The Guests: Julia Shaw and Daniel Simons


The Episode: Download – iTunes – Stitcher – RSS – Soundcloud


 




This episode is brought to you by The Great Courses. Order Behavioral Economics or another course in this special offer and get 80% off the original price.


This episode is also brought to you by Shari’s Berries. Order some delicious dipped strawberries for Valentine’s Day and get 40% off, or a double order for $10 more, using the special code “delusion” by clicking the microphone at this link.


If you’d like to support the show directly, now you can become a patron! Head over to the YANSS Patreon Page for more details.


Did Brian Williams lie, exaggerate, or misremember?


If he originally reported the truth behind the events in Iraq more than a decade ago, and those events were filmed and broadcast on the nightly news, then why didn’t he fact-check himself before going on national television and recounting a false version of those same events? Surely, as a journalist, he knew the original video was out there for anyone to watch.


In the first segment of this episode of the YANSS Podcast, psychologist Daniel Simons explains that although we will never know for sure if Brian Williams intentionally misled people in the many retellings of his adventures in the desert, the last 40 years of memory research strongly suggests that the kind of misremembering he claims to have suffered is easy to reproduce in our own lives. In fact, chances are, giant swaths of your own personal history are partially fictional if not completely false. The problem isn’t that our memory is bad, but that we believe it isn’t.


Our in-depth interview in this episode is with psychologist Julia Shaw, whose latest research demonstrates that there is no reason to believe a memory is more accurate just because it is vivid or detailed. Actually, that’s a potentially dangerous belief. Shaw used techniques similar to police interrogations, and over the course of three conversations she and her team were able to convince a group of college students that those students had committed a felony. You’ll hear her explain how easy it is to implant the kind of false memories that cause people just like you to believe they deserve to go to jail for crimes that never happened, and what she suggests police departments should do to avoid such distortions of the truth.


After the interview, I discuss a news story about implanting false memories into the brains of mice using viruses and beams of light.


In every episode, before I read a bit of self delusion news, I taste a cookie baked from a recipe sent in by a listener/reader. That listener/reader wins a signed copy of my new book, “You Are Now Less Dumb,” and I post the recipe on the YANSS Pinterest page. This episode’s winner is Michelle Brigham who submitted a recipe for lemon zucchini cornmeal cookies. Send your own recipes to david {at} youarenotsosmart.com.




Links and Sources


Download – iTunes – Stitcher – RSS – Soundcloud


Previous Episodes


Boing Boing Podcasts


Cookie Recipes


Julia Shaw


Daniel Simons


Constructing Rich False Memories of Committing A Crime


How Not to Be the Next Brian Williams


Brian Williams Admits He Wasn’t on Copter Shot Down in Iraq


With an Apology, Brian Williams Digs Himself Deeper in Copter Tale


Why Our Memory Fails Us


Do politicians lie, or just misremember it wrong?


Fake Memory Implanted in Mice with a Beam of Light


Original Photo Credit: David Shankbone – CC 3.0


 

Published on February 11, 2015 08:48

January 28, 2015

YANSS 042 – Reducing Unconscious Biases and Prejudices With Rubber Hands and Virtual Reality



The Topic: Bodily Resonance


The Guest: Lara Maister


The Episode: Download – iTunes – Stitcher – RSS – Soundcloud




This episode is brought to you by The Great Courses. Order Behavioral Economics or another course in this special offer and get 80% off the original price.


One of the more unsettling recent scientific discoveries is the fact that your behavior is influenced every day by unwanted, unconscious social and cultural biases.


Sure, you accept that some people think in certain ways that you don’t because they’ve absorbed cultural norms that you didn’t, but what about your own mind? It can seem as if once you’ve recognized your own contributions to racism and privilege you should then be able to proceed with a clean slate, rebooted with the awareness of your own ignorance, but free from it.


The evidence suggests it isn’t that easy. The desire alone doesn’t seem to remove prejudice from your thoughts and actions. In experiments where subjects were asked to identify an image within two seconds and to mark it as either a gun or a tool, subjects were much more likely to mistake tools for guns if they first saw a black face before making the call. If shown a white face beforehand, those same people made the mistake in reverse, mislabeling guns as tools. In another line of research, scientists found that people trying to make fair and unbiased decisions in the justice system are just as susceptible. Those researchers wrote that in court cases “involving a white victim, the more stereotypically black a defendant is perceived to be, the more likely that person is to be sentenced to death.”


The seeds of bigotry and xenophobia were planted in your brain long ago, and though you can consciously desire to be unbiased when it comes to race, religion, age, politics, and all the other social phenomena that glom people together – those things have already molded the synaptic landscape in your head. Undoing that in an effort to reduce prejudice will take time. The good news is that neuroscientists are, right now, working on how that undoing might be accomplished at the individual level.


In this episode of the You Are Not So Smart Podcast we sit down with cognitive neuroscientist Lara Maister, who has married two fascinating and somewhat bizarre lines of research, one from psychology which reveals the unsettling truth behind hidden racial biases, and another from neuroscience that reveals how easily you transfer feelings of ownership from your familiar flesh onto inanimate objects and virtual-reality models.


Listen as Maister describes how she measured people’s implicit racial attitudes, and then reduced the strength of those unconscious, automatic, undesirable cognitive processes by mentally placing those same subjects in avatars designed to look like members of groups and subcultures to which the subjects did not belong. For those who would like to see less prejudice in the world, the results were equal parts encouraging and trippy.


Can changing your body, even just for a few minutes, change your mind? Can a psychological body transfer melt away your long-held opinions and unconscious prejudices? Maybe so. Learn more about this and other strange psychological phenomena as cognitive neuroscientist Lara Maister describes her unconventional experiments in the latest episode.


After the interview, I discuss a news story about scientists reducing prejudices and unconscious biases through mindfulness meditation.


In every episode, before I read a bit of self delusion news, I taste a cookie baked from a recipe sent in by a listener/reader. That listener/reader wins a signed copy of the book, “You Are Not So Smart,” and I post the recipe on the YANSS Pinterest page. This episode’s winner is Jeszica Rose who submitted a recipe for jeszicookies. Send your own recipes to david {at} youarenotsosmart.com.


Links and Sources


Download – iTunes – Stitcher – RSS – Soundcloud


Previous Episodes


Boing Boing Podcasts


Cookie Recipes


Changing Bodies Changes Minds: Owning Another Body Affects Social Cognition


Lara Maister’s Research


White More Likely to Misidentify Tools as Guns When Linked to Black Faces 


Split-Second Decisions and Unintended Stereotyping


Looking Deathworthy: Perceived Stereotypicality of Black Defendants Predicts Capital-Sentencing Outcomes


Drunk Tank Pink


Mindfulness Mitigates Biases You May Not Know You Have


Mindfulness Meditation Reduces Implicit Age and Race Bias


The Rubber Hand Illusion – Horizon: Is Seeing Believing?

Published on January 28, 2015 11:13

January 26, 2015

YANSS at TEDx: Missing What is Missing


The very nice people at TEDx Jackson invited me to speak in November, and the nice people at TED just posted the video.


The talk is all about survivorship bias and how it skews our perception in the direction of the living, the winners, and the successful. A lifetime of that kind of filtering leads to a very biased view of the world.


TED described the talk like so: “Success stories are often used as templates while the most valuable lessons hide in the history of endeavors that did not end well.”


The talk was based on this post where you can find links to all the sources: Survivorship Bias.


You can buy the poster designed for the article and the talk from the artist at this link.


Thanks to David Pharr and Nina Parikh who organized the event along with several other dedicated and amazing people. It was Mississippi’s first TED. More details about the event can be found here: TEDxJackson – Fertile Ground

Published on January 26, 2015 13:47

January 16, 2015

YANSS 041 – The Football Game that Split Reality and the Ceiling that Birthed a Naked Man

The Topic: The Game/Ceiling Crasher


The Episode: Download – iTunes – Stitcher – RSS – Soundcloud




This episode is brought to you by Loot Crate. Get 10 percent off your subscription by visiting LootCrate.com/Smart and entering the code SMART.


And by The Great Courses. Order Behavioral Economics or another course in this special offer and get 80% off the original price.


In this episode, two stories, one about a football game that split reality in two for the people who witnessed it, and another about what happened when a naked man literally appeared out of thin air inside a couple’s apartment while they were getting ready for work.


In story one, you’ll learn how, in 1951, a brutal game of football between Dartmouth and Princeton launched the modern psychological investigation into preconceived notions, models of reality, and how no matter our similarities we each see a different version of the truth depending on the allegiances and alliances we form as adults.


In story two, Devon Laird was brushing his teeth one morning when he heard a loud crash. Moments later, underneath a gaping hole raining insulation, a naked stranger was adjusting furniture in Laird’s living room…right before opening the door and running away. There was no explanation afterward, but plenty of speculation. You’ll learn how the brain prevents unexplainable events like that from scrambling your reality by inventing plausible stories that allow you to move on with your life (and you’ll learn the bizarre truth behind the incident).


Links and Sources


Download – iTunes – Stitcher – RSS – Soundcloud


Previous Episodes


Boing Boing Podcasts


Cookie Recipes


“Dartmouth Football: Season by Season Results: 1940-59.” Dartmouth Sports. Dartmouth College, 30 Aug. 2006. Web. Aug. 2012. Link.


Hastorf, Albert H., and Hadley Cantril. “They Saw a Game; a Case Study.” The Journal of Abnormal and Social Psychology 49.1 (1954): 129-34. Print.


Eagleman, D. (2011). Incognito: The secret lives of the brain. New York: Pantheon Books.


Maisel, Ivan. “1951 Heisman Winner Dick Kazmaier.” ESPN. ESPN Internet Ventures. Web. Aug. 2012. Link.


Simons, Daniel J., and Christopher F. Chabris. “What People Believe about How Memory Works: A Representative Survey of the U.S. Population.” Ed. Laurie Santos. PLoS ONE 6.8 (2011): E22757. Print.

Published on January 16, 2015 10:13

January 8, 2015

YANSS 040 – Monkeys, Money, and The Primate Origins of Human Irrationality with Laurie Santos

The Topic: The Monkey Marketplace


The Guest: Laurie Santos


The Episode: Download – iTunes – Stitcher – RSS – Soundcloud


Monkey Business


Lions love catnip.


They will roll around and lick and do all the things a house cat does when handed a toy filled with the psychedelic kitty-cat plant. Not all big cats are equally susceptible to the plant’s chemical powers, and within a single species some respond more than others, including house cats. I bet it’s a real bummer to learn your pet cat is immune to catnip, but that’s genetics for you.


This cross-species sharing of behaviors among cats goes beyond tripping balls after huffing exotic spices. Big cats from the wilderness, like jaguars and tigers and leopards, exhibit many of the same behaviors you see every day in tiny cats who live in human apartments and backyards around the world. That cute little kneading of the paws? Yep. That weird face rubbing thing. Same. If you’ve been to a zoo and watched big cats at play, you’ve probably noticed many similarities there as well. They share a common ancestor a few million years back, and some things got passed down to both lines in their bodies and in their brains. They aren’t identical, though; natural selection tinkered with them separately and got different results. Otherwise you’d see more people in the park walking pumas on leashes.


One of the most amazing things human beings have figured out about the natural world is that all life on Earth is in the same family. If you had a magical photo album with the pictures of every one of your great grandparents, you would eventually flip over to pictures of fish. That’s true for all animals. Bears and eagles and alligators with similar albums would all land on those same photographs after enough page flips. That means a lion and a sea urchin share a common ancestor. Of course, they don’t seem all that similar on the surface. Sea urchins have never been all that popular in circus acts, and as far as I know, they have no special reaction to catnip. But they are related, deeply so, right down to shared genes and proteins and other bits and pieces. You can comfortably pet one and not the other because they’ve been evolving along separate paths for a very long time. Lions look and act a lot more like housecats than sea urchins because lions and tabbies share a more recent common ancestor.


Our guest in this episode of the You Are Not So Smart Podcast is psychologist Laurie Santos, who heads the Comparative Cognition Laboratory at Yale University. In that lab, she and her colleagues are exploring the fact that when two species share a relative on the evolutionary family tree, not only do they share similar physical features, but they also share similar behaviors. Psychologists and other scientists have used animals to study humans for a very long time, but Santos and her colleagues have taken it a step further by choosing to focus on a closer relation, the capuchin monkey; that way they could investigate subtler, more complex aspects of human decision making – like cognitive biases.


One of her most fascinating lines of research has come from training monkeys how to use money. That by itself is worthy of a jaw drop or two. Yes, monkeys can be taught how to trade tokens for food, and for years, Santos has observed capuchin monkeys attempting to solve the same sorts of financial problems humans have faced in prior experiments, and what Santos and others have discovered is pretty amazing. Monkeys and humans seem to be prone to the same biases, and when it comes to money, they make the same kinds of mistakes.


Santos and her colleagues created something they call the monkey marketplace, an enclosure where those monkeys could comparison shop with their tokens. Inside, human merchants offered deals for grapes and apples, some better than others, some risky and some safe, and the tiny primates picked up on these factors, changing their behavior in exactly the same way as humans. In fact, Santos says that on paper, across many experiments, you can’t tell capuchins and humans apart.


In the interview you’ll learn how her research has led Santos and her team to suspect that many of our problem-solving behaviors are innate, passed down from a primate ancestor, and not wholly learned via culture or institutions. For some of the dumb things humans do, it seems we aren’t observing human irrationality, but primate irrationality. You’ll hear how this knowledge is important when it comes to building a better world and solving the problems that arise when we use those old primate strategies in new human institutions. You’ll also learn from journalist Daniel Luzer how lobster became fancy. Later in the show, we learn where science says you should sit in a high-school classroom if you want to become popular on campus.


In every episode, before I read a bit of self delusion news, I taste a cookie baked from a recipe sent in by a listener/reader. That listener/reader wins a signed copy of one of my books, and I post the recipe on the YANSS Pinterest page. This episode’s winner is Mary Gowing who submitted a recipe for Earl Grey cookies. Send your own recipes to david {at} youarenotsosmart.com.


 




Links and Sources


Download – iTunes – Stitcher – RSS – Soundcloud


Previous Episodes


Boing Boing Podcasts


Cookie Recipes


How Lobster Got Fancy


The Comparative Cognition Laboratory


Paper: How Basic Are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior


Laurie Santos at TED


Does Catnip “Work” On Big Cats Like Lions And Tigers?


Do Big Cats Like Catnip?


11 Ways Big Cats Are Just Like Domestic Cats


There Was No First Human


Peer status and classroom seating arrangements: A social relations analysis


A child’s popularity is related to where the teacher seats them in the classroom


 


 


 

Published on January 08, 2015 09:43

December 22, 2014

YANSS 039 – Unconscious learning, knowing without knowing, blind insight and other cognitive wonders with guest Ryan Scott



The Topic: Blind Insight


The Guest: Ryan Scott


The Episode: Download – iTunes – Stitcher – RSS – Soundcloud




This episode is brought to you by Stamps.com – where is the fun in living in the future if you still have to go to the post office? Click on the microphone and enter “smart” for a $110 special offer.


This episode is also brought to you by Lynda, an easy and affordable way to help individuals and organizations learn. Try Lynda free for 7 days.


What is the capital of Bulgaria? If you don’t know, just take a guess. Seriously, any answer will be fine. Even Bolgonia – I won’t know, just say something so we can move on.


Ok, now, what is the capital of Italy? Are you sure about that?


Now take a moment and think about your own thinking. How confident are you right now that your guesses are correct? Very confident? What about being wrong? Can you feel an intuition about your own wrongness? If so, can you also feel the strength of that intuition? Maybe you don’t feel like one of your answers is a guess at all (especially if you live near Bulgaria). Maybe you feel that way about both answers. If you feel that way, how confident are you that you aren’t guessing and that you know for sure you know what you know and that you know what you know is a fact?


The guess, as a concept, is the fruit fly of cognitive science. Research into what goes on in your head when you guess has opened many doors and launched many explorations into how the brain works. It’s a perfect, simple, easy-to-produce metacognition. If you want to learn more about thinking about thinking, make people guess.


For instance, in studies where subjects are shown a photograph of two people and asked which person they find more attractive, people will reliably choose one photo over the other. The experimenter will then perform some sleight of hand and remove the photo the person picked and pretend that the photo left behind, the one the subject didn’t pick, is actually the one she said she preferred. Most people don’t notice, and if you then ask a subject why she picked that photo (again, she didn’t), she will begin describing all the ways that person is more attractive than the person in the photo that was removed. In situations like this, you are unaware that you are guessing and just making things up. You assume you know why you feel the way you feel and think the things you think, but this sort of research suggests it’s often just a guess – and you often don’t know you are guessing.


Another way brain and mind scientists play around with guesses is through a system called artificial grammar learning. In studies that use this system, subjects are asked to memorize strings of letters that seem nonsensical and random. Those same subjects then learn that the strings of letters weren’t gobbledygook. They actually adhered to rules of grammar invented by the scientists. Their task is then to look at new strings and say whether or not those letter combinations are grammatically correct. Even though the people in such studies don’t consciously know the rules at play, they are still able to pick out the strings of letters that obey the alien grammar at a rate much better than chance. When asked how confident they feel about those guesses, their confidence ratings usually line up with their correct guesses. Consciously, they have no idea how they are accomplishing this task, nor can they pinpoint the source of their confidence.
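
To make that concrete, here is a minimal Python sketch of the kind of finite-state grammar such studies use to generate their letter strings. The transition table below is a made-up illustration, not the grammar from any particular experiment; the point is only that a small set of hidden rules can churn out strings that look random yet are learnable without awareness.

```python
import random

# Hypothetical finite-state grammar, loosely in the spirit of artificial
# grammar learning experiments. Each state maps to (letter, next_state)
# transitions. This table is an illustrative assumption, not the grammar
# used in the research described in this episode.
GRAMMAR = {
    "START": [("T", "S1"), ("P", "S2")],
    "S1":    [("S", "S1"), ("X", "S3")],
    "S2":    [("T", "S2"), ("V", "S3")],
    "S3":    [("V", "END"), ("P", "S2")],
}

def generate_string(max_length=10):
    """Produce one 'grammatical' string by randomly walking START -> END."""
    while True:  # retry if a random walk rambles past the length cap
        state, letters = "START", []
        while state != "END" and len(letters) < max_length:
            letter, state = random.choice(GRAMMAR[state])
            letters.append(letter)
        if state == "END":
            return "".join(letters)

if __name__ == "__main__":
    # Subjects memorize strings like these, then judge brand-new strings as
    # grammatical or not, without ever being shown the transition table.
    for _ in range(5):
        print(generate_string())
```

In a judgment phase, strings produced this way would be mixed with strings that break one of the transitions, and subjects would give both a grammatical/ungrammatical call and a confidence rating for each.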


You felt this earlier. The capital of the Republic of Bulgaria? It’s Sofia. The capital of Italy is Rome. If you knew these things, ask yourself how you knew them. Notice the invisibility of the process vs. the clarity of its output. If you guessed, right or wrong, ask yourself about your metacognitions – what inside you was whispering to the conscious part of your mind, creating feelings of confidence or notions of doubt?


Our guest in this episode of the You Are Not So Smart Podcast is Ryan Scott, a cognitive psychologist who is adding to a growing body of evidence revealing that our guesses and our confidence in those guesses don’t come from the same place in our minds, and separate still is our conscious awareness of these loops of thought feeding forward and back upon each other.


Scott and his colleagues recently uncovered a psychological phenomenon called blind insight, so named because they felt it was similar to a neurological phenomenon known as blindsight. A person suffering from blindsight is unable to consciously see, but her eyes still transmit signals to her brain. Sufferers respond to smiles, turning up the corners of their lips and narrowing their eyes, but they aren’t consciously aware of why they are doing so. The part of the brain that can still see is unable to speak, unable to communicate with the portion that can, but it still communicates with itself in other ways and with the body, and the portion of the brain that is conscious shares that body and notices the changes. If you ask a person with blindsight to navigate an obstacle course, she might tell you it’s impossible even though she can. She might later report a lack of confidence in her abilities even after successfully walking from one side of a room to the other and changing her path many times to avoid tripping. The portion of the brain that reports confidence is cut off from the knowledge that might alter her opinion.


In the podcast you will hear how Scott discovered something similar when he returned to the research using the alien grammar created by scientists. His team pulled aside people who seemed like peculiar outliers – they were terrible at picking out the new strings that adhered to the rules, but their confidence ratings were accurate. In other words, when they got it wrong, they seemed to know they got it wrong, but their intuition did them no favors while guessing. Something inside them seemed to know the answers, but that didn’t make them better at the task. How can that be? Scott explains in the interview. You’ll also hear why you should always guess if you don’t know the answers on a multiple choice test, and when you should go with your gut instead of your head.


In every episode, before I read a bit of self delusion news, I taste a cookie baked from a recipe sent in by a listener/reader. That listener/reader wins a signed copy of one of my books, and I post the recipe on the YANSS Pinterest page. This episode’s winner is Linda van Kerkhof who submitted a recipe for chocolate ginger crinkle cookies. Send your own recipes to david {at} youarenotsosmart.com.


Chocolate Ginger Crinkle Cookies




Links and Sources


Download – iTunes – Stitcher – RSS – Soundcloud


Previous Episodes


Boing Boing Podcasts


Cookie Recipes


Ryan Scott


Blind Insight: Metacognitive Discrimination Despite Chance Task Performance


Political Extremists May Be Less Susceptible to Common Cognitive Bias


David Eagleman: The human brain runs on conflict


Slim Goodbody


Lubbadubba

Published on December 22, 2014 09:30

December 12, 2014

YANSS 038 – How the Halo Effect Turns Uncertainty into False Certainty

The Topic: The Halo Effect


The Episode: Download – iTunes – Stitcher – RSS – Soundcloud




This episode is brought to you by Stamps.com – where is the fun in living in the future if you still have to go to the post office? Click on the microphone and enter “smart” for a $110 special offer.


This episode is also brought to you by Lynda, an easy and affordable way to help individuals and organizations learn. Try Lynda free for 7 days.


This episode is also brought to you by Harry’s. Get $5 off the perfect holiday gift. Just go to Harrys.com and type in my coupon code SOSMART with your first purchase of quality shaving products.


It’s difficult to be certain of much in life.


Not only are you mostly uncertain of what will happen tomorrow, or next year, or in five years, but you often can’t be certain of the correct course of action, the best place for dinner, what kind of person you should be, or whether or not you should quit your job or move to a new city. At best, you are only truly certain of a handful of things at any given time, and aside from mathematical proofs – two apples plus two apples equals four apples (and even that, in some circles, can be debated) – you’ve become accustomed to living a life in a fog of maybes.


Most of what we now know about the world replaced something that we thought we knew about the world, but it turned out we had no idea what we were talking about. This is especially true in science, our best tool for getting to the truth. It’s a constantly churning sea of uncertainty. Maybe this, maybe that – but definitely not this, unless… Nothing raises a scientist’s brow more than a pocket of certainty because it’s usually a sign that someone is very wrong.


Being certain is a metacognition, a thought concerning another thought, and the way we often bungle that process is not exclusively human. When an octopus reaches out for a scallop, she does so because somewhere in the chaos of her nervous system a level of certainty crossed some sort of threshold, a threshold that the rock next to the scallop did not. Thanks to that certainty threshold, most of the time she bites into food instead of gravel. We too take the world into our brains through our senses, and in that brain we too are mostly successful at determining the difference between things that are food and things that are not food, but not always. There’s even a Japanese game show where people compete to determine whether household objects are real or are facsimiles made of chocolate. Seriously, check out the YouTube video of a man gleefully biting off a hunk of edible door handle. Right up until he smiles, he’s just rolling the dice, uncertain.




Thanks to the sciences of the mind and brain we now know of several frames in which we might make judgments about the world. Of course, we already knew about this sort of thing in the days of our prescientific stupor. You don’t need a caliper and some Bayesian analysis to know that the same person might choose a different path when angry than she would when calm or that a person in love is likely to make decisions she may regret once released from that spell. You have a decently accurate intuition about those states of mind thanks to your exposure to many examples over the years, but behavioral sciences have dug much deeper. There are frames of mind your brain works to mask from the conscious portions of the self. One such frame of mind is uncertainty.


In psychology, uncertainty was made famous by the work of Daniel Kahneman and Amos Tversky. In their 1982 collection of research, “Judgment Under Uncertainty,” the psychologists explained that when you don’t have enough information to make a clear judgment, or when you are making a decision concerning something too complex to fully grasp, instead of backing off and admitting your ignorance, you tend to push forward with confidence. The stasis of uncertainty never slows you down because human brains come equipped with anti-uncertainty mechanisms called heuristics.


In their original research they described how, while driving in a literal fog, it becomes difficult to judge the distance between your car and the other cars on the road. Landmarks, especially those deep in the mists, become more hazardous because they seem farther away than they actually are. This, they wrote, is because for your whole life you’ve noticed that things that are very far away appear a bit blurrier than things that are near. A lifetime of dealing with distance has reinforced a simple rule in your head: the closer an object the greater its clarity. This blurriness heuristic is almost always true, except underwater or on a foggy morning or on an especially clear day when it becomes incorrect in the other direction causing objects that are far away to seem much closer than normal.


Kahneman and Tversky originally identified three heuristics: representativeness, availability, and anchoring. Each one seems to help you solve the likelihood of something being true or the odds that one choice is better than another, without actually doing the work required to truly solve those problems. Here is an example of representativeness from their research. Imagine I tell you that a group of 30 engineers and 70 lawyers have applied for a job. I show you a single application that reveals a person who is great at math and bad with people, a person who loves Star Wars and hates public speaking, and then I ask whether it is more likely that this person is an engineer or a lawyer. What is your initial, gut reaction? What seems like the right answer? Statistically speaking, it is more likely the applicant is a lawyer. But if you are like most people in their research, you ignored the odds when checking your gut. You tossed the numbers out the window. So what if there is a 70 percent chance this person is a lawyer? That doesn’t feel like the right answer.
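
To see why the base rate should carry so much weight, here is the arithmetic underneath the example, written as a quick Bayes calculation. The likelihood terms stand for whatever the description is actually worth as evidence; the specific figures discussed below are illustrative assumptions, not numbers from the original research.

$$
P(\text{lawyer} \mid d) = \frac{P(d \mid \text{lawyer}) \, P(\text{lawyer})}{P(d \mid \text{lawyer}) \, P(\text{lawyer}) + P(d \mid \text{engineer}) \, P(\text{engineer})}
$$

If the description is equally likely to fit a lawyer or an engineer, the formula collapses to the base rate: a 70 percent chance the applicant is a lawyer. For “engineer” to become the better bet, the description would have to be more than 0.70/0.30 ≈ 2.3 times as likely to come from an engineer as from a lawyer. The representativeness heuristic skips that weighing entirely and goes straight to how well the description matches a stereotype.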


That’s what a heuristic is, a simple rule that in the currency of mental processes trades accuracy for speed. A heuristic can lead to a bias, and your biases, though often correct and harmless, can be dangerous when in error, resulting in a wide variety of bad outcomes from foggy morning car crashes to unconscious prejudices in job interviews.


For me, the most fascinating aspect of all of this is how it renders invisible the uncertainty that leads to the application of the heuristic. You don’t say to yourself, “Hmm, I’m not quite sure whether I am right or wrong, so let’s go with lawyer.” or, “Hmm, I don’t know how far away that car is, so let’s wait a second to hit the brake.” You just react, decide, judge, choose, etc. and move on, sometimes right, sometimes wrong, unaware – unconsciously crossing your fingers and hoping for the best.


These processes lead to a wonderful panoply of psychological phenomena. In this episode of the podcast we explore the halo effect, one of the ways this masking of uncertainty can really get you in trouble. When faced with a set of complex information, you tend to turn the volume down on the things that are difficult to quantify and evaluate and instead focus on the few things (sometimes the one thing) that are most tangible and concrete. You then use the way you feel about what is more salient to determine how you feel about the things that are less salient, even if the other traits are unrelated.


Here’s an example. In a study headed by psychologist Barry Staw in 1974, 60 business school students gathered into three-person groups. Each group received five years of a mid-sized company’s financial reports, full of hard data, and a letter from the company’s president describing its prospects. The report was from 1969; the task for each group was to estimate the sales and earnings per share for that company in 1970. Since the actual 1970 figures were already on hand, it would be a good exercise to see how much the students had learned in business school. The scientists told the business students that they had already run this experiment once on groups of five people, and that they wanted to see how smaller groups would perform on the same task. Of course, most of this wasn’t true. No matter what the students turned in, the scientists tossed it all out. Instead, each group received a randomly assigned grade. Some were told they did extremely well, and others were told they did very, very poorly.


What Staw discovered was that when the students were told they performed in the top 20 percent of all subjects, the people in the groups attributed that success to things like great communication, overall cohesiveness, openness to change, competence, a lack of conflict, and so on. In groups told that they performed in the bottom 20 percent, the story was just the opposite. They said they performed poorly because of a lack of communication, differences in ability, close-mindedness, sparks of conflict, and a variety of other confounding variables. They believed they had gained knowledge about the hazy characteristics of the group, but in reality they were simply using a measure of performance as a guide for creating attributions from thin air.


In his book, “The Halo Effect,” Phil Rosenzweig described the Staw study like this, “…it’s hard to know in objective terms exactly what constitutes good communication or optimal cohesion…so people tend to make attributions based on other data that they believe are reliable.” That’s how the halo effect works – things like communication skills are weird, nebulous, abstract, and nuanced concepts that don’t translate well into quantifiable, concrete, and measurable aspects of reality. When you make a judgment under uncertainty your brain uses a heuristic and then covers up the evidence so that you never notice that you had no idea what you were doing. When asked to rate their communication skills, a not-so-salient trait, they looked for something more salient to go on. In this case it was the randomly assigned rating. That rating then became a halo whose light altered the way the students saw all the less-salient aspects of their experiences. The only problem was that the rating was a lie, and thus, so was each assessment.


Research into the halo effect suggests this sort of thing happens all the time. In one study a professor had a thick, Belgian accent. If that professor pretended to be mean and strict, American students said his accent was grating and horrendous. If he pretended to be nice and laid-back, similar students said his accent was beautiful and pleasant. In another study scientists wrote an essay and attached one of two photos to it, pretending that the photos were of the person who wrote the work. If the photo was of an attractive woman, people tended to rate the essay as being well-written and deep. If the photo was that of (according to the scientists) an unattractive woman, the essay received poorer scores and people tended to rate it as less insightful. In studies where teachers were told that a student had a learning disability they rated that student’s performance as weaker than did other teachers who were told nothing at all about the student before the assessment began. In each example, people didn’t realize they were using a small, chewable bite of reality to make assumptions about a smorgasbord they couldn’t fully digest.


As an anti-uncertainty mechanism, the halo effect doesn’t just render invisible your lack of insight, but it encourages you to go a step further. It turns uncertainty into false certainty. And, sure, philosophically speaking, just about all certainty is false certainty, but research into the halo effect suggests that whether or not you accept this, as a concept, as a truth – you rarely notice it in the moment when it actually matters.


Fire up the latest episode of the You Are Not So Smart Podcast to learn more about the halo effect, and as an added bonus you’ll hear an additional two-and-a-half hours of excerpts from my book, You Are Now Less Dumb, which is now available in paperback.


Links and Sources


Download – iTunes – Stitcher – RSS – Soundcloud


Previous Episodes


Boing Boing Podcasts


Cookie Recipes


Business School Study


Judgment Under Uncertainty


You Are Now Less Dumb


The Halo Effect


The Halo Effect – Book

Published on December 12, 2014 12:51

