Michael Shermer's Blog
February 1, 2018
Alvy’s Error and the Meaning of Life

In a flashback scene in the 1977 film Annie Hall, Woody Allen’s character Alvy Singer is a depressed young boy who won’t do his homework because, as he explains to his doctor: “The universe is expanding…. Well, the universe is everything, and if it’s expanding, someday it will break apart, and that will be the end of everything.” His exasperated mother upbraids the youth: “What has the universe got to do with it?! You’re here in Brooklyn. Brooklyn is not expanding!”
Call it “Alvy’s Error”: assessing the purpose of something at the wrong level of analysis. The level at which we should assess our actions is the human timescale of days, weeks, months and years—our life span of fourscore plus or minus 10—not the billions of years of the cosmic calendar. It is a mistake made by theologians when arguing that without a source external to our world to vouchsafe morality and meaning, nothing really matters.
One of the most prominent theologians of our time, William Lane Craig, committed Alvy’s Error in a 2009 debate at Columbia University with Yale University philosopher Shelly Kagan when he pronounced: “On a naturalistic worldview, everything is ultimately destined to destruction in the heat death of the universe. As the universe expands, it grows colder and colder as its energy is used up. Eventually all the stars will burn out, all matter will collapse into dead stars and black holes, there will be no life, no heat, no light—only the corpses of dead stars and galaxies expanding into endless darkness. In light of that end, it’s hard for me to understand how our moral choices have any sort of significance. There’s no moral accountability. The universe is neither better nor worse for what we do. Our moral lives become vacuous because they don’t have that kind of cosmic significance.”
Kagan properly nailed Craig, referencing the latter’s example of godless torturers: “This strikes me as an outrageous thing to suggest. It doesn’t really matter? Surely it matters to the torture victims whether they’re being tortured. It doesn’t require that this make some cosmic difference to the eternal significance of the universe for it to matter whether a human being is tortured. It matters to them, it matters to their family, and it matters to us.”
Craig committed a related mistake when he argued that “without God there are no objective moral values, moral duties or moral accountability” and that “if life ends at the grave, then ultimately it makes no difference whether you live as a Stalin or a Mother Teresa.” Call this “Craig’s Categorical Error”: assessing the value of something by the wrong category of criteria. In my recently published book Heavens on Earth, I debunk the common belief that without God and the promise of an afterlife, this life has no morality or meaning. We live in the here and now, not the hereafter, so our actions must be judged according to the criteria of this category, whether or not the category of a God-granted hereafter exists.
Whether you behave like a Soviet dictator who murdered tens of millions of people or a Roman Catholic missionary who tended to the poor matters very much to the victims of totalitarianism and poverty. Why does it matter? Because we are sentient beings designed by evolution to survive and flourish in the teeth of entropy and death. The second law of thermodynamics (entropy) is the first law of life. If you do nothing, entropy will take its course, and you will move toward a higher state of disorder that ends in death. So our most basic purpose in life is to combat entropy by doing something “extropic”—expending energy to survive and flourish. Being kind and helping others has been one successful strategy, and punishing Paleolithic Stalins was another, and from these actions, we evolved morality. In this sense, evolution bestowed on us a moral and purpose-driven life by dint of the laws of nature. We do not need any source higher than that to find meaning or morality.
In the long run, entropy will spell the end of everything in the universe and the universe itself, but we don’t live in the long run. We live now. We live in Brooklyn, so doing our homework matters. And so, too, does doing our duty to ourselves, our loved ones, our community, our species and our planet.
January 1, 2018
For the Love of Science

That conservatives doubt scientific findings and theories that conflict with their political and religious beliefs is evident from even a cursory scan of right-leaning media. The denial of evolution and of global warming and the pushback against stem cell research are the most egregious examples in recent decades. It is not surprising, because we expect those on the right to let their politics trump science—tantamount to a dog-bites-man story.
That liberals are just as guilty of antiscience bias comports more with accounts of humans chomping canines, and yet those on the left are just as skeptical of well-established science when findings clash with their political ideologies, such as with GMOs, nuclear power, genetic engineering and evolutionary psychology—skepticism of the last I call “cognitive creationism” for its endorsement of a blank-slate model of the mind in which natural selection operated on humans only from the neck down.
In reality, antiscience attitudes are formed in very narrow cognitive windows—those in which science appears to oppose certain political or religious views. Most people embrace most of science most of the time.
Who is skeptical of science, then, and when?
That question was the title of an October 2017 talk I attended by Asheley R. Landrum, a psychologist at Texas Tech University, who studies factors influencing the public understanding and perception of science, health and emerging technologies. She began by citing surveys that found more than 90 percent of both Republicans and Democrats agreed that “science and technology give more opportunities” and that “science makes our lives better.” She also reviewed modest evidence in support of the “knowledge deficit hypothesis,” which posits that public skepticism of science is the result of inadequate scientific knowledge. Those who know more about climate science, for example, are slightly more likely to accept that global warming is real and caused by humans than those who know less on the subject.
But that modest effect is not only erased when political ideology is factored in; it reverses at one end of the political spectrum. For Republicans, the more knowledge they have about climate science, the less likely they are to accept the theory of anthropogenic global warming (whereas Democrats’ confidence goes up). “People with more knowledge only accept science when it doesn’t conflict with their preexisting beliefs and values,” Landrum explained. “Otherwise, they use that knowledge to more strongly justify their own positions.”
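As a toy illustration (not Landrum’s data, model or numbers), the Python sketch below shows how a knowledge effect can run in opposite directions for two groups and all but vanish when the groups are pooled; every value in it is hypothetical.

```python
# Toy illustration only: hypothetical numbers showing how a knowledge
# effect can point in opposite directions for two groups and nearly
# cancel out in the pooled average. Not Landrum's data or analysis.
def acceptance(knowledge, party):
    """Hypothetical probability of accepting anthropogenic warming."""
    slope = 0.04 if party == "Democrat" else -0.03  # assumed interaction
    return min(1.0, max(0.0, 0.50 + slope * knowledge))

for k in (0, 5, 10):  # knowledge on an arbitrary 0-10 scale
    dem = acceptance(k, "Democrat")
    rep = acceptance(k, "Republican")
    print(f"knowledge={k:2d}  Democrat={dem:.2f}  "
          f"Republican={rep:.2f}  pooled={(dem + rep) / 2:.2f}")
```

The pooled column barely moves even as the two groups diverge, which is the pattern Landrum describes.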
Landrum and her colleagues demonstrated the effect experimentally and reported the results in a 2017 paper in the Journal of Risk Research entitled “Culturally Antagonistic Memes and the Zika Virus: An Experimental Test,” in which participants read a news story on Zika public health risks that was linked to either climate change or immigration. Predictably, when Zika was connected to climate change, there was an increase in concern among Democrats and a decrease in concern among Republicans, but when Zika was associated with immigration, the effects were reversed. Skepticism, it would seem, is context-dependent. “We are good at being skeptical when information conflicts with our preexisting beliefs and values,” Landrum noted. “We are bad at being skeptical when information is compatible with our preexisting beliefs and values.”
In another 2017 study published in Advances in Political Psychology, “Science Curiosity and Political Information Processing,” Landrum and her colleagues found that liberal Democrats were far less likely than strong Republicans to voluntarily read a “surprising climate-skeptical story,” whereas a “surprising climate-concerned story” was far more likely to be read by those on the left than on the right. One encouraging mitigating factor was “science curiosity,” or the “motivation to seek out and consume scientific information for personal pleasure,” which “seems to counteract rather than aggravate the signature characteristics of politically motivated reasoning.”
The authors concluded that “individuals who have an appetite to be surprised by scientific information—who find it pleasurable to discover that the world does not work as they expected—do not turn this feature of their personality off when they engage political information but rather indulge it in that setting as well, exposing themselves more readily to information that defies their expectations about facts on contested issues. The result is that these citizens, unlike their less curious counterparts, react more open-mindedly and respond more uniformly across the political spectrum to the best available evidence.”
In other words, valuing science for pure pleasure is more of a bulwark against the politicization of science than facts alone.
December 22, 2017
Mr. Hume: Tear. Down. This. Wall.
This article appeared in Theology and Science in December 2017.
I am deeply appreciative that University of Cape Town professor George Ellis took the time to read carefully, think deeply, and respond thoughtfully to my Theology and Science paper “Scientific Naturalism: A Manifesto for Enlightenment Humanism” (August, 2017),1 itself an abbreviation of the full-throated defense of moral realism and moral progress that I present in my 2015 book, The Moral Arc.2 As a physicist he naturally reflects the methodologies of his field, wondering how a social scientist might “discover” moral laws in human nature as a physical scientist might discover natural laws in laboratory experiments. It’s a good question, as is his query: “Is it possible to say in some absolute sense that specific acts, such as the large scale massacres of the Holocaust, are evil in an absolute sense?”
Pace Abraham Lincoln, who famously said “If slavery is not wrong, then nothing is wrong,”3 I hereby declare in an unequivocal defense of moral realism:
If the Holocaust is not wrong, then nothing is wrong.
Since Professor Ellis is a physicist, let me approach this defense of moral realism from the perspective of a physical scientist. It is my hypothesis that in the same way that Galileo and Newton discovered physical laws and principles about the natural world that really are out there, so too have social scientists discovered moral laws and principles about human nature and society that really do exist. Just as it was inevitable that the astronomer Johannes Kepler would discover that planets have elliptical orbits—given that he was making accurate astronomical measurements, and given that planets really do travel in elliptical orbits, he could hardly have discovered anything else—scientists studying political, economic, social, and moral subjects will discover certain things that are true in these fields of inquiry. For example, that democracies are better than autocracies, that market economies are superior to command economies, that torture and the death penalty do not curb crime, that burning women as witches is a fallacious idea, that women are not too weak and emotional to run companies or countries, and, most poignantly here, that blacks do not like being enslaved and that the Jews do not want to be exterminated. Why? […]
December 1, 2017
Outlawing War

After binge-watching the 18-hour PBS documentary series The Vietnam War, by Ken Burns and Lynn Novick, I was left emotionally emptied and ethically exhausted from seeing politicians in the throes of deception, self-deception and the sunk-cost bias that resulted in a body count totaling more than three million dead North and South Vietnamese civilians and soldiers, along with more than 58,000 American troops. With historical perspective, it is now evident to all but delusional ideologues that the war was an utter waste of human lives, economic resources, political capital and moral reserves. By the end, I concluded that war should be outlawed.
In point of fact, war was outlawed … in 1928. Say what?
In their history of how this happened, The Internationalists: How a Radical Plan to Outlaw War Remade the World (Simon & Schuster, 2017), Yale University legal scholars Oona A. Hathaway and Scott J. Shapiro begin with the contorted legal machinations of lawyers, legislators and politicians in the 17th century that made war, in the words of Prussian military theorist Carl von Clausewitz, “the continuation of politics by other means.” Those means included a license to kill other people, take their stuff and occupy their land. Legally. How?
In 1625 the renowned Dutch jurist Hugo Grotius penned a hundreds-of-pages-long treatise originating with an earlier, similarly long legal justification for his country’s capture of the Portuguese merchant ship Santa Catarina when those two countries were in conflict over trading routes. In short, The Law of War and Peace argued that if individuals have rights that can be defended through courts, then nations have rights that can be defended through war because there was no world court.
As a consequence, nations have felt at liberty for four centuries to justify their bellicosity through “war manifestos,” legal statements outlining their “just causes” for “just wars.” Hathaway and Shapiro compiled more than 400 such documents into a database on which they conducted a content analysis. The most common rationalizations for war were self-defense (69 percent); enforcing treaty obligations (47 percent); compensation for tortious injuries (42 percent); violations of the laws of war or law of nations (35 percent); stopping those who would disrupt the balance of power (33 percent); and protection of trade interests (19 percent). These war manifestos are, in short, an exercise in motivated reasoning employing the confirmation bias, the hindsight bias and other cognitive heuristics to justify a predetermined end. Instead of “I came, I saw, I conquered,” these declarations read more like “I was just standing there minding my own business when he threatened me. I had to defend myself by attacking him.” The problem with this arrangement is obvious. Call it the moralization bias: the belief that our cause is moral and just and that anyone who disagrees is not just wrong but immoral.
In 1917, with the carnage of the First World War evident to all, a Chicago corporate lawyer named Salmon Levinson reasoned, “We should have, not as now, laws of war, but laws against war; just as there are no laws of murder or of poisoning, but laws against them.” With the championing of philosopher John Dewey and support of Foreign Minister Aristide Briand of France, Foreign Minister Gustav Stresemann of Germany and U.S. Secretary of State Frank B. Kellogg, Levinson’s dream of war outlawry came to fruition with the General Pact for the Renunciation of War (otherwise known as the Peace Pact or the Kellogg-Briand Pact), signed in Paris in 1928. War was outlawed.
Given the number of wars since, what happened? The moralization bias was dialed up to 11, of course, but there was also a lack of enforcement. That began to change after the ruinous Second World War, when the concept of “outcasting” took hold, the most common example being economic sanctions. “Instead of doing something to the rule breakers,” Hathaway and Shapiro explain, “outcasters refuse to do something with the rule breakers.” This principle of exclusion doesn’t always work (Cuba, Russia), but sometimes it does (Turkey, Iran), and it is almost always better than war. The result, the researchers show, is that “interstate war has declined precipitously, and conquests have almost completely disappeared.”
Outcasting has yet to work with North Korea. But as tempting as a military response may be to some, given that country’s geography we might heed the words from Pete Seeger’s Vietnam War protest song: “We were waist deep in the Big Muddy/The big fool says to push on.” We know how that worked out.
November 1, 2017
What is the Secret of Success?

At a campaign rally in Roanoke, Va., before the 2012 election, President Barack Obama opined: “If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life…. Somebody invested in roads and bridges. If you’ve got a business—you didn’t build that. Somebody else made that happen.”
Although Obama was making a larger point about the power of collective action, such as building dams, power grids and the Internet, conservative heads exploded at the final sentiment. “I did build that!” is an understandable rejoinder to which I can relate. I research my books, edit my magazine, teach my courses and write these columns (this one is my 200th in a row for Scientific American). If I don’t make them happen, nobody else will.
But then I started thinking as a social scientist on the role of circumstance and luck in how lives turn out. It’s a sobering experience to realize just how many variables are out of our control:
The luck of being born in the first place—the ratio of how many people could have been born to those who actually were—is incalculably large, not to mention the luck of being born in a Western country with a stable political system, a sound economy and a solid infrastructure (roads and bridges) rather than, say, in a lower caste in India, or in war-torn Syria, or anarchic Somalia.
The luck of having loving and nurturing parents who raised you in a safe neighborhood and healthy environment, provided you with a high-quality K–12 education and instilled in you the values of personal responsibility. If they were financially successful, that’s an added bonus because a key predictor of someone’s earning power is that of their parents.
The luck of attending a college where you happened on good or inspiring professors or mentors who guided you to your calling, along with a strong peer cohort to challenge and support you, followed by finding a good-paying job or fulfilling career that matches your education, talents and interests.
The luck of being born at a time in history when your particular aptitudes and passions fit that of the zeitgeist. Would Google’s co-founders Larry Page and Sergey Brin be among the richest and most successful people in the world had they been born in 1873 instead of 1973? Both are brilliant and hardworking, so they would probably have been successful in any century—but at the equivalent of nearly $45 billion each? It seems unlikely.
What about intelligence and hard work? Surely they matter as much as luck. Yes, but decades of data from behavior genetics tell us that at least half of intelligence is heritable, as is having a personality high in openness to experience, conscientiousness and the need for achievement—all factors that help to shape success. The nongenetic components of aptitude, scrupulousness and ambition matter, too, of course, but most of those environmental and cultural variables were provided by others or circumstances not of your making. If you wake up in the morning full of vim and vigor, bounding out the door and into the world to take your shot, you didn’t choose to be that way. Then there is the problem of übersmart, creative, hardworking people who never prosper, so obviously there are additional factors that determine life outcomes, such as bad luck … and bad choices.
Volition, too, must be considered in any evaluation of life outcomes, in the sense of knowing your strengths and weaknesses and selecting paths more likely to result in the desired effect. You can become aware of the internal and external influencing variables on your life—and aware of how you respond to them—and then make adjustments accordingly, however restrictive the degrees of freedom may be.
If the cosmic dice rolled in your favor, how should you feel? Modest pride in one’s hard work is no vice, but boastful arrogance at one’s good fortune is no virtue, so you should cultivate gratitude. What if you’ve been unlucky in life? There should be consolation in the fact that studies show that what is important in the long run is not success so much as living a meaningful life. And that is the result of having family and friends, setting long-range goals, meeting challenges with courage and conviction, and being true to yourself.
October 1, 2017
Sky Gods for Skeptics

In Star Trek V: The Final Frontier, Captain James T. Kirk encounters a deity that lures him to its planet in order to abscond with the Enterprise. “What does God need with a starship?” the skeptical commander inquires. I talked to Kirk himself—William Shatner, that is—about the film when I met him at a recent conference. The original plot device for the movie, which he directed, was for the crew to go “in search of God.” Fearful that some religious adherents might be offended that the Almighty could be discoverable by a spaceship, the studio bosses insisted that the deity be a malicious extraterrestrial impersonating God for personal gain.
How could a starship—or any technology designed to detect natural forces and objects—discover a supernatural God, who by definition would be beyond any such sensors? Any detectable entity would have to be a natural being, no matter how advanced, and as I have argued in this column [see “Shermer’s Last Law”; January 2002], “any sufficiently advanced extraterrestrial intelligence [ETI] is indistinguishable from God.” Thus, Shatner’s plot theme of looking for God could only turn up an ETI sufficiently advanced to appear God-like.
Perhaps herein lies the impulse to search. In his 1982 book Plurality of Worlds (Cambridge University Press), historian of science Steven J. Dick suggested that when Isaac Newton’s mechanical universe replaced the medieval spiritual world, it left a lifeless void that was filled with the modern search for ETI. In his 1995 book Are We Alone? (Basic Books), physicist Paul Davies wondered: “What I am more concerned with is the extent to which the modern search for aliens is, at rock-bottom, part of an ancient religious quest.” Historian George Basalla made a similar observation in his 2006 work Civilized Life in the Universe (Oxford University Press): “The idea of the superiority of celestial beings is neither new nor scientific. It is a widespread and old belief in religious thought.”
Now there is experimental evidence in support of this hypothesis, reported in a 2017 article entitled “We Are Not Alone” in the journal Motivation and Emotion, in which North Dakota State University psychologist Clay Routledge and his colleagues found an inverse relation between religiosity and ETI beliefs. That is, those who report low levels of religious belief but high desire for meaning show greater belief in ETIs. In study 1, subjects who read an essay “arguing that human life is ultimately meaningless and cosmically insignificant” were statistically significantly more likely to believe in ETIs than those who read an essay on the “limitations of computers.”
In study 2, subjects who self-identified as either atheist or agnostic were statistically significantly more likely to report believing in ETIs than those who reported being religious (primarily Christian). In studies 3 and 4, subjects completed a religiosity scale, a meaning in life scale, a well-being scale, an ETI belief scale, and a religious/supernatural belief scale. “Lower presence of meaning and higher search for meaning were associated with greater belief in ETI,” the researchers reported, but ETI beliefs showed no correlation with supernatural beliefs or well-being beliefs.
From these studies the authors conclude: “ETI beliefs serve an existential function: the promotion of perceived meaning in life. In this way, we view belief in ETI as serving a function similar to religion without relying on the traditional religious doctrines that some people have deliberately rejected.” By this they mean the supernatural: “accepting ETI beliefs does not require one to believe in supernatural forces or agents that are incompatible with a scientific understanding of the world.” If you don’t believe in God but seek deeper meaning outside our world, the thought that we are not alone in the universe “could make humans feel like they are part of a larger and more meaningful cosmic drama,” they observe.
Given that there is no more evidence for aliens than there is for God, believers in either one must take a leap of faith or else suspend judgment until evidence emerges to the contrary. I can conceive of what that might be for ETI but not for God, unless the deity is a sufficiently advanced ETI as to appear divine. Perhaps Captain Kirk has it right in his final reflections on God to the ship’s doctor at the end of Star Trek V: “Maybe He’s not out there, Bones. Maybe He’s right here [in the] human heart.”
September 1, 2017
Postmodernism vs. Science

In a 1946 essay in the London Tribune entitled “In Front of Your Nose,” George Orwell noted that “we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.”
The intellectual battlefields today are on college campuses, where students’ deep convictions about race, ethnicity, gender and sexual orientation and their social justice antipathy toward capitalism, imperialism, racism, white privilege, misogyny and “cissexist heteropatriarchy” have bumped up against the reality of contradictory facts and opposing views, leading to campus chaos and even violence. Students at the University of California, Berkeley, and outside agitators, for example, rioted at the mere mention that conservative firebrands Milo Yiannopoulos and Ann Coulter had been invited to speak (in the end, they never did). Middlebury College students physically attacked libertarian author Charles Murray and his liberal host, professor Allison Stanger, pulling her hair, twisting her neck and sending her to the ER.
One underlying cause of this troubling situation may be found in what happened at Evergreen State College in Olympia, Wash., in May, when biologist and self-identified “deeply progressive” professor Bret Weinstein refused to participate in a “Day of Absence” in which “white students, staff and faculty will be invited to leave the campus for the day’s activities.” Weinstein objected, writing in an e-mail: “on a college campus, one’s right to speak—or to be—must never be based on skin color.” In response, an angry mob of 50 students disrupted his biology class, surrounded him, called him a racist and insisted that he resign. He claims that campus police informed him that the college president told them to stand down, but he has been forced to stay off campus for his safety’s sake.
How has it come to this? One of many trends was identified by Weinstein in a Wall Street Journal essay: “The button-down empirical and deductive fields, including all the hard sciences, have lived side by side with ‘critical theory,’ postmodernism and its perception-based relatives. Since the creation in the 1960s and ’70s of novel, justice-oriented fields, these incompatible worldviews have repelled one another.”
In an article for Quillette.com on “Methods Behind the Campus Madness,” graduate researcher Sumantra Maitra of the University of Nottingham in England reported that 12 of the 13 academics at U.C. Berkeley who signed a letter to the chancellor protesting Yiannopoulos were from “Critical theory, Gender studies and Post-Colonial/Postmodernist/Marxist background.” This is a shift in Marxist theory from class conflict to identity politics conflict; instead of judging people by the content of their character, they are now to be judged by the color of their skin (or their ethnicity, gender, sexual orientation, et cetera). “Postmodernists have tried to hijack biology, have taken over large parts of political science, almost all of anthropology, history and English,” Maitra concludes, “and have proliferated self-referential journals, citation circles, non-replicable research, and the curtailing of nuanced debate through activism and marches, instigating a bunch of gullible students to intimidate any opposing ideas.”
Students are being taught by these postmodern professors that there is no truth, that science and empirical facts are tools of oppression by the white patriarchy, and that nearly everyone in America is racist and bigoted, including their own professors, most of whom are liberals or progressives devoted to fighting these social ills. Of the 58 Evergreen faculty members who signed a statement “in solidarity with students” calling for disciplinary action against Weinstein for “endangering” the community by granting interviews in the national media, I tallied only seven from the sciences. Most specialize in English, literature, the arts, humanities, cultural studies, women’s studies, media studies, and “quotidian imperialisms, intermetropolitan geography [and] detournement.” A course called “Fantastic Resistances” was described as a “training dojo for aspiring ‘social justice warriors’ ” that focuses on “power asymmetries.”
If you teach students to be warriors against all power asymmetries, don’t be surprised when they turn on their professors and administrators. This is what happens when you separate facts from values, empiricism from morality, science from the humanities.
August 1, 2017
Are We All Racists?

Novelists often offer deep insights into the human psyche that take psychologists years to test. In his 1864 Notes from Underground, for example, Russian novelist Fyodor Dostoyevsky observed: “Every man has reminiscences which he would not tell to everyone, but only to his friends. He has other matters in his mind which he would not reveal even to his friends, but only to himself, and that in secret. But there are other things which a man is afraid to tell even to himself, and every decent man has a number of such things stored away in his mind.”
Intuitively, the observation rings true, but is it true experimentally? Twenty years ago social psychologists Anthony Greenwald, Mahzarin Banaji and Brian Nosek developed an instrument called the Implicit Association Test (IAT) that, they claimed, can read the innermost thoughts that you are afraid to tell even yourself. And those thoughts appear to be dark and prejudiced: we favor white over black, young over old, thin over fat, straight over gay, able over disabled, and more.
I took the test myself, as can you (Google “Project Implicit”). The race task first asks you to separate black and white faces into one of two categories: White people and Black people. Simple. Next you are asked to sort a list of words (joy, terrible, love, agony, peace, horrible, wonderful, nasty, and so on) into either Good or Bad buckets. Easy. Then the words and the black and white faces appear on the screen one at a time for you to sort into either Black people/Good or White people/Bad. The word “joy,” for example, would go into the first category, whereas a white face would go into the second category. This sorting becomes noticeably slower. Finally, you are tasked with sorting the words and faces into the categories White people/Good or Black people/Bad. Distressingly, I was much quicker to associate words like joy, love and pleasure with White people/Good than I was with Black people/Good.
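To make the timing comparison concrete, here is a minimal Python sketch of the scoring logic, assuming simplified inputs (two lists of reaction times, one per pairing block): the gap in average response time between the blocks, scaled by the overall variability. The reaction times are hypothetical, and the published scoring algorithm adds error penalties and trial exclusions omitted here.

```python
# Minimal sketch of the IAT's timing logic, with hypothetical data.
# The published D-score algorithm also penalizes errors and excludes
# extreme latencies; none of that is modeled here.
from statistics import mean, stdev

def iat_score(congruent_ms, incongruent_ms):
    """Mean reaction-time gap between blocks, scaled by the pooled SD.
    Larger positive values = faster sorting in the 'congruent' block."""
    return (mean(incongruent_ms) - mean(congruent_ms)) / stdev(congruent_ms + incongruent_ms)

white_good_block = [620, 580, 650, 600, 590]  # hypothetical ms, White/Good pairing
black_good_block = [780, 810, 760, 820, 790]  # hypothetical ms, Black/Good pairing
print(round(iat_score(white_good_block, black_good_block), 2))
```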
The test’s assessment of me was not heartening: “Your data suggest a strong automatic preference for White people over Black people. Your result is described as ‘automatic preference for Black people over White people’ if you were faster responding when Black people and Good are assigned to the same response key than when White people and Good were classified with the same key. Your score is described as an ‘automatic preference for White people over Black people’ if the opposite occurred.” Does this mean I’m a closeted racist? And because most people, including African-Americans, score similarly to me on the IAT, does this mean we are all racists? The Project Implicit website suggests that it does: “Implicit biases can predict behavior. If we want to treat people in a way that reflects our values, then it is critical to be mindful of hidden biases that may influence our actions.”
I’m skeptical. First, unconscious states of mind are notoriously difficult to discern and require subtle experimental protocols to elicit. Second, associations between words and categories may simply be measuring familiar cultural or linguistic affiliations—associating “blue” and “sky” faster than “blue” and “doughnuts” does not mean I unconsciously harbor a pastry prejudice. Third, negative words have more emotional salience than positive words, so the IAT may be tapping into the negativity bias instead of prejudice. Fourth, IAT researchers have been unable to produce any interventions that can reduce the alleged prejudicial associations. A preprint of a 2016 meta-analysis by psychologist Patrick Forscher and his colleagues, made available on the Open Science Framework, examined 426 studies on 72,063 subjects and “found little evidence that changes in implicit bias mediate changes in explicit bias or behavior.” Fifth, the IAT does not predict prejudicial behavior. A 2013 meta-analysis by psychologist Frederick Oswald and his associates in the Journal of Personality and Social Psychology concluded that “the IAT provides little insight into who will discriminate against whom.”
For centuries the arc of the moral universe has been bending toward justice as a result of changing people’s explicit behaviors and beliefs, not of ferreting out implicit prejudicial witches through the spectral evidence of unconscious associations. Although bias and prejudice still exist, they are not remotely as bad as a mere half a century ago, much less half a millennium ago. We ought to acknowledge such progress and put our energies into figuring out what we have been doing right and do more of it.
July 1, 2017
Who Are You?

The Discovery is a 2017 Netflix film in which Robert Redford plays a scientist who proves that the afterlife is real. “Once the body dies, some part of our consciousness leaves us and travels to a new plane,” the scientist explains, evidenced by his machine that measures, as another character puts it, “brain wavelengths on a subatomic level leaving the body after death.”
This idea is not too far afield from a real theory called quantum consciousness, proffered by a wide range of people, from physicist Roger Penrose to physician Deepak Chopra. Some versions hold that our mind is not strictly the product of our brain and that consciousness exists separately from material substance, so the death of your physical body is not the end of your conscious existence. Because this is the topic of my next book, Heavens on Earth: The Scientific Search for the Afterlife, Immortality, and Utopia (Henry Holt, 2018), the film triggered a number of problems I have identified with all such concepts, both scientific and religious.
First, there is the assumption that our identity is located in our memories, which are presumed to be permanently recorded in the brain: if they could be copied and pasted into a computer or duplicated and implanted into a resurrected body or soul, we would be restored. But that is not how memory works. Memory is not like a DVR that can play back the past on a screen in your mind. Memory is a continually edited and fluid process that utterly depends on the neurons in your brain being functional. It is true that when you go to sleep and wake up the next morning or go under anesthesia for surgery and come back hours later, your memories return, as they do even after so-called profound hypothermia and circulatory arrest. Under this procedure, a patient’s brain is cooled to as low as 50 degrees Fahrenheit, which causes electrical activity in neurons to stop—suggesting that long-term memories are stored statically. But that cannot happen if your brain dies. That is why CPR has to be done so soon after a heart attack or drowning: if the brain is starved of oxygen-rich blood, the neurons die, along with the memories stored therein.
Second, there is the supposition that copying your brain’s connectome—the diagram of its neural connections—uploading it into a computer (as some scientists suggest) or resurrecting your physical self in an afterlife (as many religions envision) will result in you waking up as if from a long sleep either in a lab or in heaven. But a copy of your memories, your mind or even your soul is not you. It is a copy of you, no different than a twin, and no twin looks at his or her sibling and thinks, “There I am.” Neither duplication nor resurrection can instantiate you in another plane of existence.
Third, your unique identity is more than just your intact memories; it is also your personal point of view. Neuroscientist Kenneth Hayworth, a senior scientist at the Howard Hughes Medical Institute and president of the Brain Preservation Foundation, divided this entity into the MEMself and the POVself. He believes that if a complete MEMself is transferred into a computer (or, presumably, resurrected in heaven), the POVself will awaken. I disagree. If this were done without the death of the person, there would be two memory selves, each with its own POVself looking out at the world through its unique eyes. At that moment, each would take a different path in life, thereby recording different memories based on different experiences. “You” would not suddenly have two POVs. If you died, there is no known mechanism by which your POVself would be transported from your brain into a computer (or a resurrected body). A POV depends entirely on the continuity of self from one moment to the next, even if that continuity is broken by sleep or anesthesia. Death is a permanent break in continuity, and your personal POV cannot be moved from your brain into some other medium, here or in the hereafter.
If this sounds dispiriting, it is just the opposite. Awareness of our mortality is uplifting because it means that every moment, every day and every relationship matters. Engaging deeply with the world and with other sentient beings brings meaning and purpose. We are each of us unique in the world and in history, geographically and chronologically. Our genomes and connectomes cannot be duplicated, so we are individuals vouchsafed with awareness of our mortality and self-awareness of what that means. What does it mean? Life is not some temporary staging before the big show hereafter—it is our personal proscenium in the drama of the cosmos here and now.
June 1, 2017
Romance of the Vanished Past

Graham Hancock is an audacious autodidact who believes that long before ancient Mesopotamia, Babylonia and Egypt there existed an even more glorious civilization. One so thoroughly wiped out by a comet strike around 12,000 years ago that nearly all evidence of its existence vanished, leaving only the faintest of traces, including, Hancock thinks, a cryptic warning that such a celestial catastrophe could happen to us. All this is woven into a narrative entitled Magicians of the Gods (Thomas Dunne Books, 2015). I listened to the audio edition read by the author, whose British accent and breathless, revelatory storytelling style are confessedly compelling. But is it true? I’m skeptical.
First, no matter how devastating an extraterrestrial impact might be, are we to believe that after centuries of flourishing every last tool, potsherd, article of clothing, and, presumably from an advanced civilization, writing, metallurgy and other technologies—not to mention trash—was erased? Inconceivable.
Second, Hancock’s impact hypothesis comes from scientists who first proposed it in 2007 as an explanation for the North American megafaunal extinction around that time and has been the subject of vigorous scientific debate. It has not fared well. In addition to the lack of any impact craters determined to have occurred around that time anywhere in the world, the radiocarbon dates of the layer of carbon, soot, charcoal, nanodiamonds, microspherules and iridium, asserted to have been the result of this catastrophic event, vary widely before and after the megafaunal extinction, anywhere from 14,000 to 10,000 years ago. Further, although 37 mammal genera went extinct in North America (while most other species survived and flourished), at the same time 52 mammal genera went extinct in South America, presumably not caused by the impact. These extinctions, in fact, were timed with human arrival, thereby supporting the more widely accepted overhunting hypothesis.
Third, Hancock grounds his case primarily in the argument from ignorance (because scientists cannot explain X, then Y is a legitimate theory) or the argument from personal incredulity (because I cannot explain X, then my Y theory is valid). This is the type of “God of the gaps” reasoning that creationists employ, only in Hancock’s case the gods are the “magicians” who brought us civilization. The problem here is twofold: (1) scientists do have good explanations for Hancock’s X’s (for example, the pyramids, the Great Sphinx), even if they are not in total agreement, and (2) ultimately one’s theory must rest on positive evidence in favor of it, not just negative evidence against accepted theories.
Hancock’s biggest X is Göbekli Tepe in Turkey, with its megalithic, T-shaped seven- to 10-ton stone pillars cut and hauled from limestone quarries and dated to around 11,000 years ago, when humans lived as hunter-gatherers without, presumably, the know-how, skills and labor to produce them. Ergo, Hancock concludes, “at the very least it would mean that some as yet unknown and unidentified people somewhere in the world, had already mastered all the arts and attributes of a high civilization more than twelve thousand years ago in the depths of the last Ice Age and had sent out emissaries around the world to spread the benefits of their knowledge.” This sounds romantic, but it is the bigotry of low expectations. Who is to say what hunter-gatherers are or are not capable of doing? Plus, Göbekli Tepe was a ceremonial religious site, not a city—there is no evidence that anyone lived there. Moreover, there are no domesticated animal bones, no metal tools, no inscriptions or writing, and not even pottery—all products that much later “high civilizations” produced.
Fourth, Hancock has spent decades in his vision quest to find the sages who brought us civilization. Yet decades of searching have failed to produce enough evidence to convince archaeologists that the standard timeline of human history needs major revision. Hancock’s plaint is that mainstream science is stuck in a uniformitarian model of slow, gradual change and so cannot accept a catastrophic explanation.
Not true. From the origin of the universe (big bang), to the origin of the moon (big collision), to the origin of lunar craters (meteor strikes), to the demise of the dinosaurs (asteroid impact), to the numerous sudden downfalls of civilizations documented by Jared Diamond in his 2005 book Collapse, catastrophism is alive and well in mainstream science. The real magicians are the scientists who have worked this all out.
