Michael Shermer's Blog
June 1, 2016
Death Wish

Between December 7, 1982, and February 16, 2016, the state of Texas executed 534 inmates, 417 of whom issued a last statement. This January in the journal Frontiers in Psychology, psychologists Sarah Hirschmüller and Boris Egloff, both at Johannes Gutenberg University Mainz in Germany, published the results of their evaluation of most of the statements, which they put through a computerized text-analysis program called the Linguistic Inquiry and Word Count. The biggest finding was a statistically significant difference between the average percentage of positive emotion words (9.64) and negative ones (2.65). Is that a lot?
To find out, the psychologists compared this dataset with a broad spectrum of written sources, including scientific articles, novels, blogs and diaries, consisting of more than 168 million words composed by 23,173 people. The mean of 2.74 percent positive emotion words for each entry was statistically significantly lower than that of the prisoners. In fact, these death-row inmates were more positive than students asked to contemplate their own death and write down their thoughts and even more positive than people who attempted or completed suicide and left notes. What does this mean?
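The word-count approach behind a program like LIWC can be illustrated with a minimal sketch. The category lexicons below are tiny stand-ins invented for illustration, not the actual (proprietary) LIWC dictionaries, which contain hundreds of entries per category.

```python
import re

# Toy stand-ins for emotion-word dictionaries -- illustrative only,
# not the real LIWC category lists.
POSITIVE = {"love", "thank", "peace", "happy", "hope", "forgive"}
NEGATIVE = {"hate", "fear", "pain", "sorry", "guilt", "hurt"}

def emotion_percentages(text):
    """Return (positive %, negative %) of all words in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100 * pos / len(words), 100 * neg / len(words)

pos_pct, neg_pct = emotion_percentages(
    "I love you all, thank you for the love and support."
)
```

Run over a corpus of statements, percentages like these are what get averaged and compared across groups.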
Hirschmüller and Egloff contend that their data support terror management theory (TMT), which asserts that the realization of our mortality leads to unconscious terror, and “an increased use of positive emotion words serves as a way to protect and defend against mortality salience of one’s own contemplated death.” But if that were so, then why the difference between the inmates’ statements and those of suicide attempters and completers? Surely those about to kill themselves would be equally terrorized by the prospect of their imminent self-demise.
Context is key here. “Change the context slightly, and one often gets very different results in research on human behavior,” University of California, Berkeley, psychologist Frank J. Sulloway told me by e-mail when I queried him about TMT. “The really tricky thing with theories like this is not what to do with statistical refutations but rather what to do with supposed statistical confirmations. This problem previously arose in connection with psychoanalysis, and [German-born psychologist] Hans Eysenck and others later wrote books showing that those zealous psychoanalytic devotees testing their psychoanalytic claims systematically failed to consider what other theories, besides the one researchers thought they were testing, would also be confirmed by the same evidence.”
An alternative to TMT is one we might call emotional priority theory (EPT). Facing death focuses one’s mind on the most important emotions in life, two of which are love and forgiveness. Love is an emotional feature of human nature so potent it can be tracked with neurochemical correlates such as oxytocin and dopamine. In fact, as Rutgers University anthropologist Helen Fisher argues in the revised edition of Anatomy of Love (W. W. Norton, 2016), love is so powerful an emotion it can be addictive, like chocolate and cocaine.
In this alternative context of EPT, I conducted my own content analysis of all 417 death-row final statements. I found that 44 percent either apologized for their crimes or asked for forgiveness from the families present at the execution and that 70 percent included effusive love language. For example:
To my family, to my mom, I love you.
I appreciate everybody for their love and support. You all keep strong, thank you for showing me love and teaching me how to love.
I want to tell my sons I love them; I have always loved them.
I would like to extend my love to my family members and my relatives for all of the love and support you have showed me.
As the ocean always returns to itself, love always returns to itself.
Not only were these men not terrorized at the prospect of death, 40 percent of them said they were looking forward to the next life in expressions like “going home,” “going to a better place” and “I’ll be there waiting for you.” TMT proponents counter that the terror is unconscious, revealed by expressions of positive emotions and afterlife beliefs. But is it not more prudent to presume that people say what they truly feel and believe in the seconds before their death and then prioritize those emotions and thoughts by what matters most? What would you say?
May 1, 2016
Malthusian Menace

If by fiat I had to identify the most consequential ideas in the history of science, good and bad, in the top 10 would be the 1798 treatise An Essay on the Principle of Population by English political economist Thomas Robert Malthus. On the positive side of the ledger, it inspired Charles Darwin and Alfred Russel Wallace to work out the mechanics of natural selection based on Malthus’s observation that populations tend to increase geometrically (2, 4, 8, 16…), whereas food reserves grow arithmetically (2, 3, 4, 5…), leading to competition for scarce resources and differential reproductive success, the driver of evolution.
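Malthus's arithmetic is easy to verify: a quantity that doubles outruns one that grows by a fixed increment almost immediately. A quick illustration, using the sequences quoted above (the starting units are arbitrary):

```python
# Population doubles each generation; food adds a fixed increment.
population, food = 2, 2
generation = 0
while population <= food:
    generation += 1
    population *= 2   # geometric: 2, 4, 8, 16, ...
    food += 1         # arithmetic: 2, 3, 4, 5, ...

# By the first generation, population (4) already exceeds food (3),
# and the gap widens without bound thereafter.
```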
On the negative side of the ledger are the policies derived from the belief in the inevitability of a Malthusian collapse. “The power of population is so superior to the power of the earth to produce subsistence for man, that premature death must in some shape or other visit the human race,” Malthus gloomily predicted. His scenario influenced policy makers to embrace social Darwinism and eugenics, resulting in draconian measures to restrict particular populations’ family size, including forced sterilizations.
In his book The Evolution of Everything (Harper, 2015), evolutionary biologist and journalist Matt Ridley sums up the policy succinctly: “Better to be cruel to be kind.” The belief that “those in power knew best what was good for the vulnerable and weak” led directly to legal actions based on questionable Malthusian science. For example, the English Poor Law implemented by Queen Elizabeth I in 1601 to provide food to the poor was severely curtailed by the Poor Law Amendment Act of 1834, based on Malthusian reasoning that helping the poor only encourages them to have more children and thereby exacerbate poverty. The British government had a similar Malthusian attitude during the Irish potato famine of the 1840s, Ridley notes, reasoning that famine, in the words of Assistant Secretary to the Treasury Charles Trevelyan, was an “effective mechanism for reducing surplus population.” A few decades later Francis Galton advocated marriage between the fittest individuals (“What nature does blindly, slowly, and ruthlessly man may do providently, quickly and kindly”), followed by a number of prominent socialists such as Sidney and Beatrice Webb, George Bernard Shaw, Havelock Ellis and H. G. Wells, who openly championed eugenics as a tool of social engineering.
We think of eugenics and forced sterilization as a right-wing Nazi program implemented in 1930s Germany. Yet as Princeton University economist Thomas Leonard documents in his book Illiberal Reformers (Princeton University Press, 2016) and former New York Times editor Adam Cohen reminds us in his book Imbeciles (Penguin, 2016), eugenics fever swept America in the early 20th century, culminating in the 1927 Supreme Court case Buck v. Bell, in which the justices legalized sterilization of “undesirable” citizens. The court included prominent progressives Louis Brandeis and Oliver Wendell Holmes, Jr., the latter of whom famously ruled, “Three generations of imbeciles are enough.” The result: sterilization of some 70,000 Americans.
Science writer Ronald Bailey tracks neo-Malthusians in his book The End of Doom (St. Martin’s Press, 2015), starting with Paul Ehrlich’s 1968 best seller The Population Bomb, which proclaimed that “the battle to feed all of humanity is over.” Many doomsayers followed. Worldwatch Institute founder Lester Brown, for example, declared in 1995, “Humanity’s greatest challenge may soon be just making it to the next harvest.” In a 2009 Scientific American article he affirmed his rhetorical question, “Could food shortages bring down civilization?” In a 2013 conference at the University of Vermont, Ehrlich assessed our chances of avoiding civilizational collapse at only 10 percent.
The problem with Malthusians, Bailey writes, is that they “cannot let go of the simple but clearly wrong idea that human beings are no different than a herd of deer when it comes to reproduction.” Humans are thinking animals. We find solutions—think Norman Borlaug and the Green Revolution. The result is the opposite of what Malthus predicted: the wealthiest nations with the greatest food security have the lowest fertility rates, whereas the most food insecure countries have the highest fertility rates.
The solution to overpopulation is not to force people to have fewer children. China’s one-child policy showed the futility of that experiment. It is to raise the poorest nations out of poverty through democratic governance, free trade, access to birth control, and the education and economic empowerment of women.
April 1, 2016
Hooey. Drivel. Baloney…

Babble, bafflegab, balderdash, bilge, blabber, blarney, blather, bollocks, bosh, bunkum. These are a few of the synonyms (from just the b’s) for the demotic descriptor BS (as commonly abbreviated). The Oxford English Dictionary equates it with “nonsense.” In his best-selling 2005 book on the subject, Princeton University philosopher Harry Frankfurt famously distinguished BS from lying: “It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.” BS may or may not be true, but its “truthiness” (in comedian Stephen Colbert’s famous neologism) is meant to impress through obfuscation—that is, by saying something that sounds profound but may be nonsense.
Example: “Attention and intention are the mechanics of manifestation.” This is an actual tweet composed by Deepak Chopra, as quoted by University of Waterloo psychologist Gordon Pennycook and his colleagues in a paper published in the November 2015 issue of Judgment and Decision Making. The scientists set out to determine “the factors that predispose one to become or to resist becoming” a victim of what they called “pseudo-profound” BS, or language “constructed to impress upon the reader some sense of profundity at the expense of a clear exposition of meaning or truth.” I was cited in the paper for describing Chopra’s language as “woo-woo nonsense.” For instance, in a 2010 debate we had at the California Institute of Technology, televised on ABC’s Nightline, Chopra defined consciousness during the audience Q&A as “a superposition of possibilities,” to which physicist Leonard Mlodinow replied: “I know what each of those words mean. I still don’t think I know….”
Chopra’s definition of consciousness certainly sounds like pseudo-profundity, but I have since gotten to know him and can assure readers that he doesn’t create such phrases to intentionally obscure meaning. He believes that quantum physics explains consciousness, so invoking terms from that field makes sense in his mind, even though to those not so inclined, much of what he says sounds like, well, BS.
These are examples of what cognitive psychologist Dan Sperber meant when he wrote in “The Guru Effect,” a 2010 article in the Review of Philosophy and Psychology: “All too often, what readers do is judge profound what they have failed to grasp.” To find out if some people are more or less inclined to accept BS as legit based on their ability (or lack thereof) to grasp language, Pennycook et al. began by distinguishing two types of thinking: one, intuitive—rapid and automatic cognition—and, two, reflective—slower and effortful cognition. Type 1 thinking makes us vulnerable to BS because it takes time and effort to think (and say), “I know what each of those words mean. I still don’t think I know….” Pennycook and his team tested the hypothesis that higher intelligence and a superior analytical cognitive style (analyticity) lead to a greater capacity to detect and reject pretentious BS. Employing standard measures of intelligence (for example, the Wordsum test) and analyticity (for example, the Cognitive Reflection Test), the psychologists presented subjects with a number of meaningless statements produced by the New Age Bullshit Generator, such as “We are in the midst of a self-aware blossoming of being that will align us with the nexus itself” and “Today, science tells us that the essence of nature is joy.”
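Generators of this kind work by filling grammatical templates with randomly drawn buzzwords, so the output is always syntactically well formed but semantically empty. A minimal sketch of the idea (the vocabulary here is invented for illustration, not the actual generator's word lists):

```python
import random

# Illustrative buzzword pools -- not the real generator's vocabulary.
NOUNS = ["consciousness", "the nexus", "being", "joy", "the cosmos"]
ADJECTIVES = ["self-aware", "infinite", "quantum", "vibrational"]
VERBS = ["aligns us with", "transcends", "unfolds into", "awakens"]

def pseudo_profound(rng=random):
    """Assemble a grammatical but meaning-free sentence."""
    return (f"The {rng.choice(ADJECTIVES)} blossoming of "
            f"{rng.choice(NOUNS)} {rng.choice(VERBS)} "
            f"{rng.choice(NOUNS)}.")

# Seeded for reproducibility.
sentence = pseudo_profound(random.Random(0))
```

Because the templates guarantee fluency, any perceived profundity must come from the reader, which is exactly the effect the study measured.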
In four studies on more than 800 subjects, the authors found that the higher the intelligence and analyticity of subjects, the less likely they were to rate such statements as profound. Conversely, and revealingly, they concluded that those most receptive to pseudo-profound BS are also more prone to “conspiratorial ideation, are more likely to hold religious and paranormal beliefs, and are more likely to endorse complementary and alternative medicine.” Apropos of one of this column’s skeptical leitmotifs, detecting BS, according to the authors, “is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive sounding claims.”
Skepticism should never be indiscriminate and should always be discerning of a claim’s verisimilitude based on evidence and logic, regardless of language. But language matters, so it is incumbent on us all to transduce our neuro-phonemic excitatory action potentials into laconic phonological resonances unencumbered by extraneous and obfuscating utterances. And that’s no BS.
March 1, 2016
Left Behind

In the past couple of years imbroglios erupted on college campuses across the U.S. over trigger warnings (for example, alerting students to scenes of abuse and violence in The Great Gatsby before assigning it), microaggressions (saying “I believe the most qualified person should get the job”), cultural appropriation (a white woman wearing her hair in cornrows), speaker disinvitations (Brandeis University canceling plans to award Ayaan Hirsi Ali an honorary degree because of her criticism of Islam’s treatment of women), safe spaces (such as rooms where students can go after a talk that upset them), and social justice advocates competing to signal their moral outrage over such issues as Halloween costumes (for example, at Yale University). Why such unrest in the most liberal institutions in the country?
Although there are many proximate causes, there is but one ultimate cause—lack of political diversity to provide checks on protests going too far. A 2014 study conducted by the University of California, Los Angeles, Higher Education Research Institute found that 59.8 percent of all undergraduate faculty nationwide identify as far left or liberal, compared with only 12.8 percent as far right or conservative. The asymmetry is much worse in the social sciences. A 2015 study by psychologist José Duarte, then at Arizona State University, and his colleagues in Behavioral and Brain Sciences, entitled “Political Diversity Will Improve Social Psychological Science,” found that 58 to 66 percent of social scientists are liberal and only 5 to 8 percent conservative and that there are eight Democrats for every Republican. And the problem is most relevant to the study of areas “related to the political concerns of the Left—areas such as race, gender, stereotyping, environmentalism, power, and inequality.” These are the very things the students are protesting.
How does this political asymmetry corrupt social science? It begins with what subjects are studied and the descriptive language employed. Consider a 2003 paper by social psychologist John Jost, now at New York University, and his colleagues, entitled “Political Conservatism as Motivated Social Cognition.” Conservatives are described as having “uncertainty avoidance,” “needs for order, structure, and closure,” and “dogmatism and intolerance of ambiguity,” as if these constitute a mental disease that leads to “resistance to change” and “endorsement of inequality.” Yet one could just as easily characterize liberals as suffering from a host of equally malfunctioning cognitive states: a lack of moral compass that leads to an inability to make clear ethical choices, a pathological fear of clarity that leads to indecisiveness, a naive belief that all people are equally talented, and a blind adherence in the teeth of contradictory evidence from behavior genetics that culture and environment exclusively determine one’s lot in life.
Duarte et al. find similar distortive language across the social sciences, where, for instance, certain words are used to suggest pernicious motives when confronting contradictory evidence—“deny,” “legitimize,” “rationalize,” “justify,” “defend,” “trivialize”—with conservatives as examples, as if liberals are always objective and rational. In one test item, for example, the “endorsement of the efficacy of hard work” was interpreted as an example of “rationalization of inequality.” Imagine a study in which conservative values were assumed to be scientific facts and disagreement with them was treated as irrational, the authors conjecture counterfactually. “In this field, scholars might regularly publish studies on … ‘the denial of the benefits of a strong military’ or ‘the denial of the benefits of church attendance.’ ” The authors present evidence that “embedding any type of ideological values into measures is dangerous to science” and is “much more likely to happen—and to go unchallenged by dissenters—in a politically homogeneous field.”
Political bias also twists how data are interpreted. For instance, Duarte’s study discusses a paper in which subjects scoring high in “right-wing authoritarianism” were found to be “more likely to go along with the unethical decisions of leaders.” Example: “not formally taking a female colleague’s side in her sexual harassment complaint against her subordinate (given little information about the case).” Maybe what this finding really means is that conservatives believe in examining evidence first, instead of prejudging by gender. Call it “left-wing authoritarianism.”
The authors’ solution to the political bias problem is right out of the liberal playbook: diversity. Not just ethnic, racial and gender diversity but viewpoint diversity. All of us are biased, and few of us can see it in ourselves, so we depend on others to challenge us. As John Stuart Mill noted in that greatest defense of free speech, On Liberty, “He who knows only his own side of the case, knows little of that.”
February 1, 2016
Afterlife for Atheists

The soul is the pattern of information that represents you—your thoughts, memories and personality—your self. There is no scientific evidence that something like soul stuff exists beyond the brain’s own hardwiring, so I was curious to visit the laboratories of 21st Century Medicine in Fontana, Calif., to see for myself an attempt to preserve a brain’s connectome—the comprehensive diagram of all neural synaptic connections.
This medical research company specializes in the cryopreservation of human organs and tissues using cryoprotectants (antifreeze). In 2009, for example, the facility’s chief research scientist Gregory M. Fahy published a paper in the peer-reviewed journal Organogenesis, documenting how his team successfully transplanted a rewarmed rabbit kidney after it had been cryoprotected and frozen to -135 degrees Celsius through the process of vitrification, “in which the liquids in a living system are converted into the glassy state at low temperatures.”
Can brains be so preserved? Fahy and his colleague Robert L. McIntyre are now developing techniques that they hope will win the Brain Preservation Technology Prize, the brainchild of neuroscientist Kenneth Hayworth (I’m on its advisory board as the advocatus diaboli). As I write this, the prize is valued at more than $106,000; the first 25 percent of the award will be for the complete preservation of the synaptic structure of a whole mouse brain, and the other 75 percent will go to the first team “to successfully preserve a whole large animal brain in a manner that could also be adopted for humans in a hospital or hospice setting immediately upon clinical death.”
I witnessed the infusion of a rabbit brain through its carotid arteries with a fixative agent called glutaraldehyde, which binds proteins together into a solid gel. The brain was then removed and saturated in ethylene glycol, a cryoprotective agent eliminating ice formation and allowing safe storage at –130 degrees C as a glasslike, inert solid. At that temperature, chemical reactions are so attenuated that it could be stored for millennia. If successful, would it be proof of concept?
Think of a book in epoxy resin hardened into a solid block of plastic, McIntyre told me. “You’re never going to open the book again, but if you can prove that the epoxy doesn’t dissolve the ink the book is written with, you can demonstrate that all the words in the book must still be there … and you might be able to carefully slice it apart, scan in all the pages, and print/bind a new book with the same words.” Hayworth tells me that the rabbit brain circuitry he examined through a 3-D scanning electron microscope “looks well preserved, undamaged, and it is easy to trace the synaptic connections between the neurons.”
This sounds promising, but I have my doubts. Is a connectome precisely analogous to a program that can be uploaded in machine-readable format into a computer? Would a connectome so preserved and uploaded into a computer be the same as awakening after sleep or unconsciousness? Plus, there are around 86 billion neurons in a human brain with often 1,000 or more synaptic connections for each one, for a total of 100 trillion connections to be accurately preserved and replicated. Staggering complexity. And this doesn’t include the rest of the nervous system outside the brain, which is also part of your self that you might want resurrected.
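The scale involved is easy to put in storage terms. Assuming, purely for illustration, about 10 bytes per synapse to record its two endpoints and a weight (a real connectomic reconstruction would need far more than this):

```python
NEURONS = 86_000_000_000        # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1_000     # often 1,000 or more connections each
connections = NEURONS * SYNAPSES_PER_NEURON   # ~8.6e13 synapses

# Illustrative assumption: 10 bytes per synapse (two neuron IDs
# plus a connection weight).
BYTES_PER_SYNAPSE = 10
petabytes = connections * BYTES_PER_SYNAPSE / 1e15
```

Even under this bare-minimum encoding the map runs to nearly a petabyte, before counting any of the molecular or peripheral detail a faithful "self" might require.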
Hayworth admitted to me that a “future of uploaded posthumans is probably centuries away.” Nevertheless, he adds, “as an atheist and unabashed materialist neuroscientist, I am virtually certain that mind uploading is possible.” Why? Because “our best neuroscience models say that all these perceptual and sensorimotor memories are stored as static changes in the synapses between neurons,” which is what connectomics is designed to record and preserve, allowing us to “‘hit pause’ for a few centuries if we need to.” Imagine a world in which “the fear of death, disease and aging would have been mostly removed,” he says.
It sounds utopian, but there’s something deeply moving in this meliorism. “I refuse to accept that the human race will stop technological and scientific progress,” Hayworth told me. “We are destined to eventually replace our biological bodies and minds with optimally designed synthetic ones. And the result will be a far healthier, smarter and happier race of posthumans poised to explore and colonize the universe.”
Per audacia ad astra.
January 1, 2016
Murder in the Cave

“Fossil First: Ancient Human Relative May Have Buried Its Dead” (Reuters). “Why Did Homo naledi Bury Its Dead?” (PBS). These are just two of the many hyped headlines that appeared last September in response to a paper purporting the discovery, in a cave in South Africa, of a new species, Homo naledi, by paleoanthropologist Lee R. Berger of the University of the Witwatersrand and his colleagues. There were reasons for skepticism from the get-go.
The age of the fossils is undetermined, so it is impossible to conclude where in the hominin lineage the fossils fit. Their hands, wrists and feet are similar to those of small modern humans, though slightly modified for an arboreal existence, and their brain volume is closer to that of the small-brained australopithecines, like Lucy, so it is not clear whether this combination constitutes a new species or a variation on an existing one. Instead of publishing in a prestigious journal such as Science or Nature, for which the peer-review process can be lengthy, the authors opted for a fast-track publication in eLife (elifesciences.org/content/4/e09561), an open-access online journal. And instead of meticulously sorting through the 1,550 fossils (belonging to 15 individuals) for many years, as is common in paleoanthropology, the analysis was concluded a mere year and a half after their discovery in November 2013 and March 2014.
What triggered my skepticism, however, was the scientists’ conjecture that the site represents the earliest example of “deliberate body disposal,” which, as the media read between the lines, implies an intentional burial procedure. This, they concluded, was the likeliest explanation compared with four other hypotheses.
Occupation. There is no debris in the chamber, which is so dark that habitation would have required artificial light, for which there is no evidence, and the cave is nearly inaccessible and appears never to have had easy access.
Water transport. Caves that have been inundated show sedimentological layers of coarse-grained material, which is lacking in the Dinaledi Chamber where the specimens were uncovered.
Predators. There are no signs of predation on the skeletal remains and no fossils from predators.
Death trap. The sedimentary remains indicate that the fossils were deposited over a span of time, so that rules out a single calamitous event, and the near unreachability of the chamber makes attritional individual entry and death unlikely.
Finally, the ages of the 13 individuals so identified—three infants, three young juveniles, one old juvenile, one subadult, four young adults and one old adult—are unlike those of other cave deposits for which cause of death and deposition have been determined. It’s a riddle, wrapped in sediment, inside a grotto.
I believe the authors are downplaying an all too common cause of death in our ancestors—homicide in the form of war, murder or sacrifice. In his 1996 book War Before Civilization, for example, archaeologist Lawrence Keeley estimates that as many as 20 to 30 percent of ancestral men died violently. In his 2003 book Constant Battles, archaeologist Steven LeBlanc reports that nearly every ancient human site shows signs of either armed conflict between groups, homicide between individuals within a group or cannibalism. In his 2011 book The Better Angels of Our Nature, Steven Pinker aggregates a data set of 21 archaeological sites to show a violent death rate of 20 percent. In a 2013 paper in the journal Science, Douglas Fry and Patrik Söderberg dispute the theory that war was prevalent in ancient humans by claiming that of the 148 episodes of violence in 21 mobile foraging bands, more than half “were perpetrated by lone individuals, and almost two-thirds resulted from accidents, interfamilial disputes, within-group executions, or interpersonal motives such as competition over a particular woman.”
Whatever you call it—war or murder—it is violent death nonetheless, and further examination of the Homo naledi fossils should consider violence (war or murder for the adults, sacrifice for the juveniles) as a plausible cause of death and deposition in the cave. Recall that after 5,000-year-old Ötzi the Iceman was discovered in a melting glacier in the Tyrolean Alps in 1991, it took a decade before archaeologists determined that he died violently, after he killed at least two other people in what appears to have been a clash between hunting parties. It’s a side of our nature we are reluctant to admit, but consider it we must when confronted with dead bodies in dark places.
December 1, 2015
Consilience and Consensus

At some point in the history of all scientific theories, only a minority of scientists—or even just one—supported them, before evidence accumulated to the point of general acceptance. The Copernican model, germ theory, the vaccination principle, evolutionary theory, plate tectonics and the big bang theory were all once heretical ideas that became consensus science. How did this happen?
An answer may be found in what 19th-century philosopher of science William Whewell called a “consilience of inductions.” For a theory to be accepted, Whewell argued, it must be based on more than one induction—or a single generalization drawn from specific facts. It must have multiple inductions that converge on one another, independently but in conjunction. “Accordingly the cases in which inductions from classes of facts altogether different have thus jumped together,” he wrote in his 1840 book The Philosophy of the Inductive Sciences, “belong only to the best established theories which the history of science contains.” Call it a “convergence of evidence.”
Consensus science is a phrase often heard today in conjunction with anthropogenic global warming (AGW). Is there a consensus on AGW? There is. The tens of thousands of scientists who belong to the American Association for the Advancement of Science, the American Chemical Society, the American Geophysical Union, the American Medical Association, the American Meteorological Society, the American Physical Society, the Geological Society of America, the U.S. National Academy of Sciences and, most notably, the Intergovernmental Panel on Climate Change all concur that AGW is in fact real. Why?
It is not because of the sheer number of scientists. After all, science is not conducted by poll. As Albert Einstein said in response to a 1931 book skeptical of relativity theory entitled 100 Authors against Einstein, “Why 100? If I were wrong, one would have been enough.” The answer is that there is a convergence of evidence from multiple lines of inquiry—pollen, tree rings, ice cores, corals, glacial and polar ice-cap melt, sea-level rise, ecological shifts, carbon dioxide increases, the unprecedented rate of temperature increase—that all converge to a singular conclusion. AGW doubters point to the occasional anomaly in a particular data set, as if one incongruity gainsays all the other lines of evidence. But that is not how consilience science works. For AGW skeptics to overturn the consensus, they would need to find flaws with all the lines of supportive evidence and show a consistent convergence of evidence toward a different theory that explains the data. (Creationists have the same problem overturning evolutionary theory.) This they have not done.
A 2013 study published in Environmental Research Letters by Australian researchers John Cook, Dana Nuccitelli and their colleagues examined 11,944 climate paper abstracts published from 1991 to 2011. Of those papers that stated a position on AGW, about 97 percent concluded that climate change is real and caused by humans. What about the remaining 3 percent or so of studies? What if they’re right? In a 2015 paper published in Theoretical and Applied Climatology, Rasmus Benestad of the Norwegian Meteorological Institute, Nuccitelli and their colleagues examined the 3 percent and found “a number of methodological flaws and a pattern of common mistakes.” That is, instead of the 3 percent of papers converging to a better explanation than that provided by the 97 percent, they failed to converge to anything.
“There is no cohesive, consistent alternative theory to human-caused global warming,” Nuccitelli concluded in an August 25, 2015, commentary in the Guardian. “Some blame global warming on the sun, others on orbital cycles of other planets, others on ocean cycles, and so on. There is a 97% expert consensus on a cohesive theory that’s overwhelmingly supported by the scientific evidence, but the 2–3% of papers that reject that consensus are all over the map, even contradicting each other. The one thing they seem to have in common is methodological flaws like cherry picking, curve fitting, ignoring inconvenient data, and disregarding known physics.” For example, one skeptical paper attributed climate change to lunar or solar cycles, but to make these models work for the 4,000-year period that the authors considered, they had to throw out 6,000 years’ worth of earlier data.
Such practices are deceptive, and they fail to further climate science when exposed by skeptical scrutiny, an integral element of the scientific process.
November 1, 2015
Perception Deception

One of the deepest problems in epistemology is how we know the nature of reality. Over the millennia philosophers have offered many theories, from solipsism (only one’s mind is known to exist) to the theory that natural selection shaped our senses to give us an accurate, or veridical, model of the world. Now a new theory by University of California, Irvine, cognitive scientist Donald Hoffman is garnering attention. (Google his scholarly papers and TED talk with more than 1.4 million views.) Grounded in evolutionary psychology, it is called the Interface Theory of Perception (ITP) and argues that percepts act as a species-specific user interface that directs behavior toward survival and reproduction, not truth.
Hoffman’s computer analogy is that physical space is like the desktop and that objects in it are like desktop icons, which are produced by the graphical user interface (GUI). Our senses, he says, form a biological user interface—a gooey GUI—between our brain and the outside world, transducing physical stimuli such as photons of light into neural impulses processed by the visual cortex as things in the environment. GUIs are useful because you don’t need to know what is inside computers and brains. You just need to know how to interact with the interface well enough to accomplish your task. Adaptive function, not veridical perception, is what is important.
Hoffman’s holotype is the Australian jewel beetle Julodimorpha bakewelli. Females are large, shiny, brown and dimpled. So, too, are discarded beer bottles dubbed “stubbies,” and males will mount them until they die by heat, starvation, or ants. The species was on the brink of extinction because its senses and brain were designed by natural selection not to perceive reality (it’s a beer bottle, you idiot!) but to mate with anything big, brown, shiny, and dimply.
To test his theory, Hoffman ran thousands of evolutionary computer simulations in which digital organisms whose perceptual systems are tuned exclusively for truth are outcompeted by those tuned solely for fitness. Because natural selection depends only on expected fitness, evolution shaped our sensory systems toward fitter behavior, not truthful representation.
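The fitness-beats-truth idea in those simulations can be illustrated with a minimal toy sketch (my own illustration, not Hoffman’s actual model): if the payoff of a resource is non-monotonic in its true quantity, an agent that perceives only payoff outcompetes one that perceives the true quantity and prefers more of it.

```python
import random

def payoff(quantity):
    # Assumed non-monotonic fitness: too little or too much resource is bad;
    # payoff peaks at an intermediate quantity of 0.5.
    return max(0.0, 1.0 - abs(quantity - 0.5) * 2)

def choose_truth(territories):
    # "Truth" strategy: perceives true quantities and picks the largest.
    return max(territories)

def choose_fitness(territories):
    # "Fitness" strategy: perceives only payoff and picks the highest payoff.
    return max(territories, key=payoff)

random.seed(1)
truth_total = fitness_total = 0.0
for _ in range(10_000):
    # Three territories with random true resource quantities in [0, 1).
    territories = [random.random() for _ in range(3)]
    truth_total += payoff(choose_truth(territories))
    fitness_total += payoff(choose_fitness(territories))

print(fitness_total > truth_total)
```

Over many trials the fitness-tuned chooser accumulates more payoff, because tracking the truth (more is better) systematically overshoots the payoff peak. The sketch only shows that such an outcome is possible under a non-monotonic payoff, which is the kind of assumption these simulations rely on.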
ITP is well worth serious consideration and testing, but I have my doubts. First, how could a more accurate perception of reality not be adaptive? Hoffman’s answer is that evolution gave us an interface to hide the underlying reality because, for example, you don’t need to know how neurons create images of snakes; you just need to jump out of the way of the snake icon. But how did the icon come to look like a snake in the first place? Natural selection. And why did some nonpoisonous snakes evolve to mimic poisonous species? Because predators avoid real poisonous snakes. Mimicry works only if there is an objective reality to mimic.
Hoffman has claimed that “a rock is an interface icon, not a constituent of objective reality.” But a real rock chipped into an arrow point and thrown at a four-legged meal works even if you don’t know physics and calculus. Is that not veridical perception with adaptive significance?
As for jewel beetles, stubbies are what ethologists call supernormal stimuli, which mimic objects that organisms evolved to respond to and elicit a stronger response in doing so, such as (for some people) silicone breast implants in women and testosterone-enhanced bodybuilding in men. Supernormal stimuli operate only because evolution designed us to respond to normal stimuli, which must be accurately portrayed by our senses to our brain to work.
Hoffman says that perception is species-specific and that we should take predators seriously but not literally. Yes, a dolphin’s icon for “shark” no doubt looks different from a human’s, but there really are sharks, and they really do have powerful tails on one end and a mouthful of teeth on the other end, and that is true no matter how your sensory system works.
Also, computer simulations are useful for modeling how evolution might have happened, but a real-world test of ITP would be to determine if most biological sensory interfaces create icons that resemble reality or distort it. I’m betting on reality. Data will tell.
Finally, why present this problem as an either-or choice between fitness and truth? Adaptations depend in large part on a relatively accurate model of reality. The fact that science progresses toward, say, eradicating diseases and landing spacecraft on Mars must mean that our perceptions of reality are growing ever closer to the truth, even if it is with a small “t.”
October 1, 2015
The Electric Universe Acid Test

Newton was wrong. Einstein was wrong. Black holes do not exist. The big bang never happened. Dark energy and dark matter are unsubstantiated conjectures. Stars are electrically charged plasma masses. Venus was once a comet. The massive Valles Marineris canyon on Mars was carved out in a few minutes by a giant electric arc sweeping across the Red Planet. The “thunderbolt” icons found in ancient art and petroglyphs are not the iconography of imagined gods but realistic representations of spectacular electrical activity in space.
These are just a few of the things I learned at the Electric Universe conference (EU2015) in June in Phoenix. The Electric Universe community is a loose confederation of people who, according to the host organization’s website, believe that “a new way of seeing the physical universe is emerging. The new vantage point emphasizes the role of electricity in space and shows the negligible contribution of gravity in cosmic events.” This includes everything from comets, moons and planets to stars, galaxies and galactic clusters.
I was invited to speak on the difference between science and pseudoscience. The most common theme I gleaned from the conference is that one should be skeptical of all things mainstream: cosmology, physics, history, psychology and even government (I was told that World Trade Center Building 7 was brought down by controlled demolition on 9/11 and that “chemtrails”—the contrails in the sky trailing jets—are evidence of a government climate-engineering experiment).
The acid test of a scientific claim, I explained, is prediction and falsification. My friends at the NASA Jet Propulsion Laboratory, for example, tell me they use both Newtonian mechanics and Einstein’s relativity theory in computing highly accurate spacecraft trajectories to the planets. If Newton and Einstein are wrong, I inquired of EU proponent Wallace Thornhill, can you generate spacecraft flight paths that are more accurate than those based on gravitational theory? No, he replied. GPS satellites in orbit around Earth are also dependent on relativity theory, so I asked the conference host David Talbott if EU theory offers anything like the practical applications that theoretical physics has given us. No. Then what does EU theory add? A deeper understanding of nature, I was told. Oh.
Conventional psychology was challenged by Gary Schwartz of the University of Arizona, who, in keeping with the electrical themes of the day, explained that the brain is like a television set and consciousness is like the signals coming into the brain. You need a brain to be conscious, but consciousness exists elsewhere. But TV studios generate and broadcast signals. Where, I inquired, is the consciousness equivalent to such production facilities? No answer.
A self-taught mathematician named Stephen Crothers riffled through dozens of PowerPoint slides chock-full of equations related to Einstein’s general theory of relativity, which he characterized as “numerology.” Einstein’s errors, Crothers proclaimed, led to the mistaken belief in black holes and the big bang. I understood none of what he was saying, but I am confident he’s wrong given that for a century thousands of physicists have challenged Einstein, and still he stands as Time’s Person of the Century. It’s not impossible that they are all wrong and that this part-time amateur scientist sleuth is right, but it is about as likely as Einstein’s equations failing to describe, to many digits past the decimal place, the relativistic effects on those GPS satellite orbits.
The EU folks I met were unfailingly polite, unquestionably smart and steadfastly unwavering in their belief that they have made one of the most important discoveries in the history of science. Have they? Probably not. The problem was articulated in a comment Thornhill made when I asked for their peer-reviewed papers: “In an interdisciplinary science like the Electric Universe, you could say we have no peers, so peer review is not available.” Without peer review or the requisite training in each discipline, how are we to know the difference between mainstream and alternative theories, of which there are many?
In his book The Electric Kool-Aid Acid Test, Tom Wolfe quotes Merry Prankster Ken Kesey: “You’re either on the bus or off the bus.” It’s not that EUers are wrong; they’re not even on the bus.