Michael Shermer's Blog

May 17, 2018

Free to Inquire

The Evolution-Creationism Controversy as a Test Case in Equal Time and Free Speech


A book chapter for The Palgrave Handbook of Philosophy and Public Policy (December 26, 2018), edited by David Boonin.



During the second week of March, 1837, barely a year and a half after circumnavigating the globe in the H.M.S. Beagle, Charles Darwin met with the eminent ornithologist John Gould, who had been studying Darwin’s Galápagos bird specimens. With access to museum ornithological collections from areas of South America that Darwin had not visited, Gould corrected a number of taxonomic errors Darwin had made, such as labeling two finch species a “Wren” and “Icterus”, and pointed out to him that although the land birds in the Galápagos were endemic to the islands, they were notably South American in character.



According to the historian of science Frank J. Sulloway, who carefully reconstructed Darwin’s intellectual voyage to the discovery of the theory of evolution by means of natural selection, Darwin left the meeting with Gould convinced “beyond a doubt that transmutation must be responsible for the presence of similar but distinct species on the different islands of the Galápagos group. The supposedly immutable ‘species barrier’ had finally been broken, at least in Darwin’s own mind.”1 That July Darwin opened his first notebook on Transmutation of Species. By 1844 he was confident enough to write in a letter to his botanist friend and colleague Joseph Hooker: “I was so struck with distribution of Galapagos organisms &c &c, & with the character of the American fossil mammifers &c &c, that I determined to collect blindly every sort of fact which cd bear any way on what are species.” Five years at sea and nine years at home poring through “heaps” of books led Darwin to admit: “At last gleams of light have come, & I am almost convinced, (quite contrary to opinion I started with) that species are not (it is like confessing a murder) immutable.”2



Like confessing a murder. How could a solution to a technical problem in biology, namely the immutability of species, generate such angst in its discoverer? The answer is obvious: if new species are created naturally instead of supernaturally, there’s no place for a creator God. No wonder Darwin waited twenty years before publishing his theory, and he would have waited even longer had he not rushed into print for priority’s sake because the naturalist Alfred Russel Wallace had sent Darwin his own theory of evolution in 1858, the year before Darwin published On the Origin of Species.3 And no wonder it took some time for Darwin to convince others of the theory’s veracity. The geologist Charles Lyell, a close friend and colleague of Darwin who groomed him into the world of British science and whose geological works Darwin read on the Beagle, withheld his support for a full nine years, and even then pulled back from fully embracing naturalism, leaving room for providential design underlying the entire natural system. The astronomer John Herschel sniffed at natural selection, calling it the “law of higgledy-piggledy.” In a review in the popular Macmillan’s Magazine, the statesman and economist Henry Fawcett spoke of a great divide created by Darwin’s book: “No scientific work that has been published within this century has excited so much general curiosity as the treatise of Mr. Darwin. It has for a time divided the scientific world with two great contending sections. A Darwinite and an anti-Darwinite are now the badges of opposed scientific parties.”4



Darwinites and anti-Darwinites. After a century and a half there is now overwhelming consensus within the scientific community that evolution happened and that natural selection is the driving force behind it. Among scientists, there are only Darwinites. Publicly, however, the picture is disturbingly divided, especially along political and religious lines, where the anti-Darwinites have captured a sizable portion of the populace. A 2005 Pew Research Center poll, for example, found 42 percent of Americans holding strict creationist views that “living things have existed in their present form since the beginning of time.” The survey also found that 64 percent said they were open to the idea of teaching creationism in addition to evolution in public schools, while 38 percent said they think evolution should be completely replaced by creationism in biology classrooms. Most alarmingly, a sizable 41 percent believe that parents, rather than scientists (28 percent) or school boards (21 percent), should be responsible for teaching children about the origin and evolution of life.5 More recent polling data found similar percentages of belief in creationism and skepticism about evolution. In a 2014 Gallup poll 42 percent of Americans said that “God created humans in present form” while 31 percent said “Humans evolved, with God guiding.” The 19 percent of Americans who agreed that “Humans evolved, but God had no part in the process” represented only a slight uptick, but at least a significant gain from the paltry 9 percent in 1982.



None of this polling data should matter. Truth in science is not determined vox populi. It shouldn’t matter how many people support one or another position. As Einstein said in response to a 1931 book skeptical of relativity theory titled A Hundred Authors Against Einstein, “Why one hundred? If I were wrong, one would have been enough.”6 A theory stands or falls on evidence, and there are few theories in science that are more robust than the theory of evolution. Arguably the most culturally jarring theory in the history of science, the Darwinian revolution changed both science and culture in at least five ways:




The static creationist model of species as fixed types was replaced with a fluid evolutionary model of species as ever-changing entities.


The theory of top-down intelligent design through a supernatural force was replaced with the theory of bottom-up natural design through natural forces.


The anthropocentric view of humans as special creations above all others was replaced with the view of humans as just another animal species.


The view of life and the cosmos as having design, direction, and purpose from above was replaced with the view of the world as the product of bottom-up design through necessitating laws of nature and contingent events of history.


The view that human nature is infinitely malleable and primarily good, or born in original sin and inherently evil, was replaced with the view of a constraining human nature in which we are both good and evil.7



When he first heard Darwin’s theory, the man who would earn the moniker “Darwin’s Bulldog” for his fierce defense of evolution, Thomas Henry Huxley, called Darwin’s On the Origin of Species “the most potent instrument for the extension of the realm of knowledge which has come into man’s hands since Newton’s Principia.”8 A century later the Harvard evolutionary biologist Ernst Mayr opined, “it would be difficult to refute the claim that the Darwinian revolution was the greatest of all intellectual revolutions in the history of mankind.”9 And in the memorable and oft-quoted observation by the evolutionary theorist Theodosius Dobzhansky, “Nothing in biology makes sense except in the light of evolution.”10 If the theory of evolution is so proven and profound, why doesn’t everyone accept it as true?



Why People Do Not Accept Evolution



It is evident that there are a number of extra-scientific variables that factor into the beliefs people hold about scientific theories, and in this case additional polling data show who is more or less likely to accept evolution based on their religious and political attitudes. In the 2014 Gallup Poll mentioned above, 69 percent of Americans who attend religious services weekly embrace creationism over evolution, compared to only 23 percent of those who seldom or never attend religious services.11 A 2013 Pew Research Center survey found that 64 percent of white evangelical Protestants believe that humans have existed in their present form since the beginning of time, compared to half of black Protestants and only 15 percent of white mainline Protestants.12 A 2017 Gallup Poll found that 57 percent of those with no religious preference agreed with the statement “Humans evolved, God had no part in process” compared to only 6 percent of Protestants and 11 percent of Catholics, and only 1 percent of those who attend church weekly agreed with this statement, compared to 35 percent who rarely attend church.13



The foundation of this religion-based skepticism of evolution may be traced back to the early 20th century, when anti-evolution legislation was sweeping southern states, most famously Tennessee. At the climax of the 1925 Scopes “monkey” Trial in Dayton, William Jennings Bryan, testifying on behalf of the prosecution against a young biology teacher named John T. Scopes, prepared a final statement summarizing what he understood to be really at stake in the trial. The judge determined that Bryan’s speech was irrelevant to the case—the same ruling he made against the defense when they called on evolutionary biologists as expert witnesses—so it was published posthumously (Bryan died two days after the trial ended) as Bryan’s Last Speech: The Most Powerful Argument Against Evolution Ever Made.14 The most telling summation of the anti-evolution position in Bryan’s view was as follows: “The real attack of evolution, it will be seen, is not upon orthodox Christianity or even upon Christianity, but upon religion—the most basic fact in man’s existence and the most practical thing in life. If taken seriously and made the basis of a philosophy of life, it would eliminate love and carry man back to a struggle of tooth and claw.” This is what troubles people about evolutionary theory and leads them to doubt its verisimilitude, not the technical details of the science. The syllogistic reasoning goes like this:





The theory of evolution implies that there is no God.


Without a belief in God there can be no morality or meaning.


Without morality and meaning there is no basis for a civil society.


Without a civil society we will be reduced to living like brute animals.




This sentiment was expressed by the Intelligent Design theory supporter Nancy Pearcey in a briefing before a House Judiciary Committee of the United States Congress, when she quoted from a popular song urging “you and me, baby, ain’t nothing but mammals so let’s do it like they do on the Discovery Channel.” She went on to claim that since the U.S. legal system is based on moral principles, the only way to generate ultimate moral grounding is for the law to have an “unjudged judge,” an “uncreated creator.”15 The neo-conservative social commentator Irving Kristol was even more bleak in a 1991 statement: “If there is one indisputable fact about the human condition it is that no community can survive if it is persuaded—or even if it suspects—that its members are leading meaningless lives in a meaningless universe.”16



In an attempt to distance themselves from “scientific creationists,” Intelligent Design theorists have emphasized that they are only interested in doing science. According to the prominent ID proponent William Dembski, for example, “scientific creationism has prior religious commitments whereas intelligent design does not.”17 This is disingenuous. On February 6, 2000, Dembski told the National Religious Broadcasters at their annual conference in Anaheim, California: “Intelligent Design opens the whole possibility of us being created in the image of a benevolent God…. The job of apologetics is to clear the ground, to clear obstacles that prevent people from coming to the knowledge of Christ. … And if there’s anything that I think has blocked the growth of Christ as the free reign of the Spirit and people accepting the Scripture and Jesus Christ, it is the Darwinian naturalistic view.”18 In a feature article in the Christian magazine Touchstone, Dembski was even more succinct: “Intelligent design is just the Logos theology of John’s Gospel restated in the idiom of information theory.”19



The sentiment was echoed by one of the fountainheads of the modern Intelligent Design movement, Phillip Johnson, at the same National Religious Broadcasters meeting at which Dembski spoke: “Christians in the twentieth century have been playing defense. They’ve been fighting a defensive war to defend what they have, to defend as much of it as they can. It never turns the tide. What we’re trying to do is something entirely different. We’re trying to go into enemy territory, their very center, and blow up the ammunition dump. What is their ammunition dump in this metaphor? It is their version of creation.”20 Johnson was even blunter in 1996: “This isn’t really, and never has been, a debate about science…. It’s about religion and philosophy.”21 In his book The Wedge of Truth, Johnson explained: “The Wedge of my title is an informal movement of like-minded thinkers in which I have taken a leading role. Our strategy is to drive the thin end of our Wedge into the cracks in the log of naturalism by bringing long-neglected questions to the surface and introducing them to public debate.” This is not just an attack on naturalism—it is a religious war against all of science. “It is time to set out more fully how the Wedge program fits into the specific Christian gospel (as distinguished from generic theism), and how and where questions of biblical authority enter the picture. As Christians develop a more thorough understanding of these questions, they will begin to see more clearly how ordinary people—specifically people who are not scientists or professional scholars—can more effectively engage the secular world on behalf of the gospel.”22 The new creationism may differ in the details from the old creationism, but their ultimate goals run parallel. The veneer of science in the guise of Intelligent Design theory is there to cover up the deeper religious agenda.



Equal Time and Free Speech



This volume relates to public policy. Engrained in the American public psyche is the sense of fair play for all ideas and free speech for everyone. What’s wrong with giving equal time to evolution and creationism and letting the people decide for themselves? This is, in fact, what has become known as the “equal time” argument proffered by proponents of “creation science” in the 1980s and “intelligent design” in the 1990s. It’s an argument that appeals to fair-minded people, but that cannot be put into practice in public schools, which is where the evolution-creation battles have been fought. The problem is that there are at least ten different positions one might take on the creation-evolution continuum, including:




Flat Earthers, who believe that the shape of the earth is flat and round like a coin, which some believers contend has a biblical basis.



Geocentrists, who believe that the earth is spherical but that the planets and sun revolve around it, also believed to be grounded in Genesis scriptures.



Young-Earth Creationists, who believe that the earth and all life on it were created within the last ten thousand years.



Old Earth Creationists, who believe that the earth is ancient and microevolution may alter organisms into different varieties of species, but that all life was created by God, and that species cannot evolve into new species.



Gap Creationists, who believe that there was a large temporal gap between Genesis 1:1 and 1:2, in which a pre-Adam creation was destroyed, after which God recreated the world in six days; the time gap between the two separate creations allows for an accommodation of an old Earth with the special creation.



Day-Age Creationists, who believe that each of the six days of creation represents a geological epoch, and that the Genesis sequence of creation roughly parallels the sequence of evolution.



Progressive Creationists, who accept most scientific findings about the age of the universe but believe that God created “kinds” of animals sequentially; the fossil record is an accurate representation of history because different animals and plants appeared at different times rather than having been created all at once.



Intelligent Design Creationists, who believe that the order, purpose, and design found in the world is proof of an intelligent designer.



Evolutionary Creationists, who believe that God used evolution to bring about life according to his foreordained plan from the beginning.



Theistic Evolutionists, who believe that God used evolution to bring about life, but intervenes at critical intervals during the history of life.23




If equal time were granted to all of these positions, along with the many other creation myths from diverse cultures around the world, when would students have time to learn science? Given limited time and resources, and the ever-expanding body of scientific knowledge that students in a 21st century society simply must learn for our nation to stay relevant economically, such ideas have no place in science classrooms, where curricula are determined by the consensus science of the field, not polls on what the public believes. The place for introducing these ideas is in courses on history, cultural studies, comparative mythology, and world religions. In any case, as far as public policy is concerned, creationists have lost all major court cases of the past half-century—most notably Epperson v. Arkansas in 1968, McLean v. Arkansas Board of Education in 1982, Edwards v. Aguillard in 1987, and Kitzmiller et al. v. Dover in 2005—so legal precedent means that the chances of creationists or Intelligent Design proponents gaining access to public school science classrooms through legislation are nil.24 Consensus science cannot be legislated by fiat from the top down. In the 1920s, when evolutionary theory was not widely accepted and politically connected religious groups were successful in passing anti-evolution legislation making it a crime to teach Darwin’s theory in public schools, the noted attorney and civil liberties defender Clarence Darrow made this case against the censorship of knowledge in the Scopes case:




If today you can take a thing like evolution and make it a crime to teach it in the public school, tomorrow you can make it a crime to teach it in the private schools, and the next year you can make it a crime to teach it in the hustings or in the church. At the next session you may ban books and the newspapers. Soon you may set Catholic against Protestant and Protestant against Protestant, and try to foist your own religion upon the minds of men. If you can do one you can do the other. Ignorance and fanaticism is ever busy and needs feeding. Always it is feeding and gloating for more. Today it is the public school teachers, tomorrow the private. The next day the preachers and the lecturers, the magazines, the books, the newspapers. After awhile, your honor, it is the setting of man against man and creed against creed until with flying banners and beating drums we are marching backward to the glorious ages of the sixteenth century when bigots lighted fagots to burn the men who dared to bring any intelligence and enlightenment and culture to the human mind.25




In America, the First Amendment protects the right of citizens to express their opinions on anything they like, no matter how crazy, conniving, evil, or extreme. You are free to doubt the single-bullet theory in the JFK assassination, the real cause of the death of Princess Diana, the Apollo moon landing, the existence of God, the divinity of Jesus, the authenticity of the Quran, the prophetic nature of Moses or Muhammad, al Qaeda’s role in 9/11, and even the President’s birthplace. No matter how much one may dislike someone else’s opinion—even if it is something as disturbing or potentially disruptive as denying that the Holocaust happened or that some people may not be as successful because of innate racial or gender differences—that opinion is protected by the First Amendment. Not everyone thinks such freedom is good for a safe civil society. In particular, and paradoxically, given that the free speech movement began at U.C. Berkeley in the 1960s, the past several years have seen campuses around the country erupt over these charged issues, with schools issuing lists of micro-aggressions that might offend people, trigger warnings about books that might upset readers, safe spaces for protection from dangerous ideas, and dis-invitations of speakers who might espouse ideas different from those of the majority of the audience.26 Shouldn’t we protect people from speech that might be hateful and thus harmful? No. Here are seven reasons why.




Who decides which speech is acceptable and which is unacceptable? You? Me? The majority? The control of speech is how dictatorships and autocracies rule. We must resist the urge to control what other people say and think.


What criteria are used to censor certain speech? Ideas that I disagree with? Thoughts that differ from your thoughts? Anything that the majority determines is unacceptable? That’s another form of tyranny, a tyranny of the majority.


We might be completely right but still learn something new.


We might be partially right and partially wrong, and by listening to other viewpoints we might stand corrected and refine and improve our beliefs.


We might be completely wrong, so hearing criticism or counterpoint gives us the opportunity to change our minds and improve our thinking. No one is infallible. The only way to find out if you’ve gone off the rails is to get feedback on your beliefs, opinions, and even your facts.


Whether right or wrong, by listening to the opinions of others we have the opportunity to develop stronger arguments and build better facts for our positions.


My freedom to speak and dissent is inextricably tied to your freedom to speak and dissent. If I censor you, why shouldn’t you censor me? If you silence me, why shouldn’t I silence you? Once customs and laws are in place to silence someone on one topic, what’s to stop people from silencing anyone on any topic that deviates from the accepted canon?



There are exceptions to the purely civil libertarian case for free speech, of course, most famously Justice Oliver Wendell Holmes Jr.’s dictum about falsely shouting fire in a crowded theater, which was wrongly applied to the ill-conceived idea that “hate speech” might incite people to violence, applied as it was to a group of Yiddish-speaking pacifists who objected to America’s involvement in the First World War. And, of course, you are not free to spread lies about someone that damage their reputation, safety, or income. But never in history have a people been so free to speak their mind, and from that freedom emerges the truth, for the only way to know if your idea is wrong is to allow others to critique it.



Science and Society



That principle—the freedom to participate in the dialogue that the philosopher Karl Popper called “conjecture and refutation”—is at the heart of the scientific method.27 The reason we need critical feedback from others is that our brains come equipped with a set of cognitive heuristics—rules of thumb, or shortcuts—that help us navigate through the buzzing, blurring confusion of information coming in through our senses. These heuristics are also known as cognitive biases because they often distort our percepts to fit preconceived concepts. These cognitive biases are part of a larger process called “motivated reasoning,” in which whatever belief system is in place—religious, political, economic, or social—shapes how we interpret information that comes through our senses and motivates us to reason our way to finding the world to be precisely the way we wish it were. As I argue in The Believing Brain, our beliefs are formed for a variety of subjective, emotional, psychological, and social reasons, and then are reinforced through these belief confirmation heuristics and justified and explained with rational reasons.28 The confirmation bias, the hindsight bias, the self-justification bias, the status quo bias, the sunk-cost bias, the availability bias, the representative bias, the believability bias, the authority bias, and the consistency bias are just a few of the many ways we distort the world.



It is not so much that scientists are trained to avoid these cognitive biases as it is that science itself is designed to force you to ferret out your errors and prejudices because if you don’t, someone else will, often with great glee in a public forum, from peer-review commentary to social media (where all pretensions to civil discourse are stripped away). Science is a competitive enterprise that is not for the thin-skinned or faint of heart. Most ideas that people come up with are wrong. That is why science is so cautious about tossing aside old ideas that have already survived the competitive marketplace, and why scientists tend to dismiss out of hand new ideas that threaten a tried-and-true research paradigm, especially before the revolutionary theory has been properly vetted by professionals in the field. That process of generating new ideas and introducing them to your peers and the public where they can be skeptically scrutinized in the bright light of other minds is the only way to find out if you’ve come up with something true and important or if you’ve been immersed in self-deception. Evolutionary scientists have gone through this rigorous process for over a century and a half and the theory has emerged all the stronger for it. Creationists, by contrast, have actively avoided this scientific scrutiny and as a result have been marginalized to the point of irrelevance.



Such is the fate of most ideas—many are called, few are chosen. Science works because it is premised on debate and disputation, conjecture and refutation, and especially free and open inquiry, which together override our many cognitive biases that blind us individually to our errors, but collectively allow us to progress to an ever deeper and broader understanding of nature. As the physicist and former scientific director of the Manhattan Project that built the first atomic bombs, J. Robert Oppenheimer, reflected on the freedom essential to science: “There must be no barriers to freedom of inquiry. The scientist is free, and must be free to ask any question, to doubt any assertion, to seek for any evidence, to correct any errors.” Reflecting on the history of science and extrapolating to wider spheres, Oppenheimer added: “Our political life is also predicated on openness. We know that the only way to avoid error is to detect it and that the only way to detect it is to be free to inquire. And we know that as long as men are free to ask what they must, free to say what they think, free to think what they will, freedom can never be lost, and science can never regress.”29




References




Frank Sulloway, “The Legend of Darwin’s Finches,” Nature, 303 (1983): 372.


Letter to Joseph Hooker dated January 14, 1844, quoted in Janet Browne, Voyaging: Charles Darwin. A Biography. (New York: Knopf. 1995), 452.


For a detailed account of the “priority dispute” between Darwin and Wallace, see: Michael Shermer, In Darwin’s Shadow: The Life and Science of Alfred Russel Wallace. (New York: Oxford University Press, 2002).


All quotes on the reaction to Darwin’s theory in: Kenneth Korey, The Essential Darwin: Selections and Commentary. (Boston: Little, Brown, 1984).


Pew Research Center. “Religion a Strength and Weakness for Both Parties. Public Divided on Origins of Life.” (2005): http://bit.ly/2kFVHu6


Hans Israel, Erich Ruckhaber, and Rudolf Weinmann (Eds.), Hundert Autoren gegen Einstein. (Leipzig: Voigtländer, 1931).


Adopted and paraphrased from: Ernst Mayr, The Growth of Biological Thought. (Cambridge: Harvard University Press, 1982), 501.


Thomas H. Huxley, “The Origin of Species” (review). Westminster Review 17 (1860): 541–570.


Ernst Mayr, Toward a New Philosophy of Biology. (Cambridge: Harvard University Press. 1988), 161.


Theodosius Dobzhansky, “Nothing in Biology Makes Sense Except in the Light of Evolution.” American Biology Teacher, 35 (1973): 125–129.


Pew, 2005.


Pew Research Center. “Public’s Views on Human Evolution.” (2013): http://pewrsr.ch/19BIvfh


Gallup Poll. “In U.S., Belief in Creationist View of Humans at New Low.” (2017): http://bit.ly/2CJ7Hm4


William Jennings Bryan, Bryan’s Last Speech: The Most Powerful Argument Against Evolution Ever Made. (Sunlight Publishing Society, 1925).


The three-hour briefing was held on May 10, 2000. Quoted in D. Wald, “Intelligent Design Meets Congressional Designers.” Skeptic. Vol. 8, No. 2 (2000), 16–17.


Quoted in Ron Bailey, “Origin of the Specious.” Reason. July (1997).


William Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design. (Downers Grove, IL: InterVarsity Press, 2004), 41.


Quoted in: Steve Benen. “Science Test.” Church & State, July/August, (2000): http://bit.ly/2z5Uf9u


William Dembski. “Signs of Intelligence: A Primer on the Discernment of Intelligent Design.” (Touchstone, 1999), 84.


Benen, 2000.


Quoted in Jay Grelen, “Witnesses for the Prosecution.” World, November 30, (1996): http://bit.ly/2B7Z3Ns


Phillip Johnson, The Wedge of Truth: Splitting the Foundations of Naturalism. (Downers Grove, IL: InterVarsity Press, 2000).


These and other variations on creationism are discussed on the web site of the National Center for Science Education: https://ncse.com/


Molleen Matsumura and Louise Mead. “Ten Major Court Cases about Evolution and Creationism.” (National Center for Science Education): http://bit.ly/29XQZpy


Quoted in: The World’s Most Famous Court Trial. Tennessee Evolution Case: A Complete Stenographic Report of the Famous Court Test of the Tennessee Anti-Evolution Act, at Dayton, July 10–21, 1925, Including Speeches and Arguments of Attorneys. (Clark, NJ: The Lawbook Exchange, Ltd. Google eBook), 87: http://bit.ly/1MMj2SZ


Greg Lukianoff, Freedom From Speech. (New York: Encounter Books, 2014).


Karl Popper, Conjectures and Refutations: The Growth of Scientific Knowledge. (New York: Harper & Row, 1963).


Michael Shermer, The Believing Brain. (New York: Henry Holt, 2011).


J. Robert Oppenheimer, quoted in Lincoln Barnett, “J. Robert Oppenheimer,” Life, Vol. 7, No. 9 (1949): 58.

May 12, 2018

Frequent Infrequencies

Do anomalies prove the existence of God?



This op-ed was originally published on Slate.com as part of a Big Ideas series on the question “What is the Future of Religion” in 2015.



For a quarter century I have investigated and attempted to explain anomalous events that people report experiencing, and I have written about a few of my own, such as being abducted by aliens (caused by extreme fatigue and sleep deprivation), hallucinating inside a sensory deprivation tank, and having an out-of-body experience while my temporal lobes were stimulated with electromagnetic fields. Most people interpret such experiences as evidence for the supernatural, the afterlife, or even God, but since mine all had clear and obvious natural explanations, few readers took them to be evidentiary.



In my October, 2014 column in Scientific American entitled “Infrequencies” however, I wrote about an anomalous experience for which I have no explanation. In brief, my fiancée, Jennifer Graf, moved to Southern California from Köln, Germany, bringing with her a 1978 Philips 070 transistor radio that belonged to her late grandfather Walter, a surrogate father figure as she was raised by a single mom. She had fond memories of listening to music with him through that radio so I did my best to resurrect it, without success. With new batteries and the power switch left in the “on” position, we gave up and tossed it in a desk drawer where it lay dormant for months. During a quiet moment after our vows at a small wedding ceremony at our home, Jennifer was feeling sad being so far from home and wishing she had some connection to loved ones—most notably her mother and her grandfather—with whom to share this special occasion. We left my family to find a quiet moment alone elsewhere in the house when we heard music emanating from the bedroom, which turned out to be a love song playing on that radio in the desk drawer. It was a spine-tingling experience. The radio played for the rest of the evening but went quiescent the next day. It’s been silent ever since, despite repeated attempts to revive it.



Ever since the column appeared in Scientific American I’ve been deluged with letters. A few grumpy skeptics chided me for lowering my skeptical shields, most notably for my closing line: “And if we are to take seriously the scientific credo to keep an open mind and remain agnostic when the evidence is indecisive or the riddle unsolved, we should not shut the doors of perception when they may be opened to us to marvel in the mysterious.” I was simply trying to be a little poetic in my interpretation, which I qualified by noting “The emotional interpretations of such anomalous events grant them significance regardless of their causal account.”



A few cranky believers were dismissive of my openness, one insisting “that no human being, nor any living thing, is only their body. Also, no inanimate object is only that object. The dead do not die, and the living are not free but bound and enslaved each to his or her own ignorance—a condition which you work to maintain. Shame on you, sir.” Above her signature she signed off: “With kind intentions.”



Friendlier believers sent encouraging notes, not all of which I understand, such as this sentiment from a psychologist: “The central importance of latent, neglected shared spiritual capabilities was indeed a wedding blessing, eloquently and vividly enacted, resulting in very valuable sharing for a world culture remarkably crippled in appreciation of actual multidimensional reality.” Does 3D count? A neurophysiologist imagined what the implications would be if no natural explanation were forthcoming for my anomalous event. “Should consciousness survive the death of the brain, there are exciting implications for the role of consciousness in the living brain.” Indeed there are, but a lack of causal explanation for my story does not imply this.



A geologist wrote to suggest that “There are many explanations that can be posited; I would favor solar flares or the geoparticles of Holub and Smrz [authors of a paper that some claim proves that nanoparticles between neurons may allow for quantum fields to influence other brains], but rather than seek one, this coincidental occurrence should be enjoyed in the supernatural or paranormal vein as it was meant to be…simply a blessing for a long and happy union.” I agree, but without the supernatural or paranormal vein in the rock.



Another correspondent said he would be convinced of the miraculous nature of the event if the radio played for the next 20 years with no power source. That would impress me too, and maybe Elon Musk is working on such technology for his next generation of Tesla cars.



Most of the correspondence I received, however, was from people recounting their own anomalous experiences that had deep personal meaning for them, some pages long in rich detail. One woman told me the story of her rare blue opal pendant that she wore 24/7 for 15 years, until her ex-husband swiped it out of spite during their divorce. (So I guess this would be a case of negative emotions influencing events at a distance.) She felt so bad that while on vacation in Bali she had a jeweler create a simulacrum of it, which led to a successful jewelry business. One day 15 years later, a woman named Lucy came into her store and they got to talking about the lost opal pendant, which Lucy suddenly realized that she now owned. “In 1990 her best friend was dating a guy who was going through a divorce and he had given it to her. Her friend never felt comfortable wearing it so she offered it to Lucy. Lucy accepted, and wore it the following weekend on her wedding day. Soon after, she discovered her new husband had a girlfriend, and she never wore the opal again, thinking it might be bad luck. It remained in her drawer for 15 years. When I asked why she hadn’t sold it (it was now extremely valuable), she said ‘I tried to—every time I went to get it out of the drawer to have it appraised, something happened to distract me. Phone calls, dogs fighting, package deliveries—I tried many times, but never succeeded. Now I know why—it wanted to come back to you!’” This woman’s sister, whom she characterized as a “medical intuitive and remote healer,” called this story “Epic Synchronicity.” She described it as “fantastic and statistically improbable, but it is explainable.”



I agree, but what is the explanation for this, or for any of such highly improbable events? And what do they mean? For Jennifer and me, it was the propitious timing of the radio’s revival—at the moment she was thinking about family—that made it such an emotionally salient event, enabling her to feel as if her beloved grandfather was there with us, sharing in our commitment. Is it proof of life after death? No. As I wrote (and many readers apparently chose to overlook) in Scientific American, “such anecdotes do not constitute scientific evidence that the dead survive or that they can communicate with us via electronic equipment.”



The reason is that in science it isn’t enough to just compile anecdotes in support of a preferred belief. After all, who wouldn’t want to know that we survive bodily death and live for eternity elsewhere? We are all subject to the confirmation bias in which we look for and find confirming evidence and ignore disconfirming evidence. We remember one-off highly unusual coincidences that have deep meaning for us, and forget all the countless meaningless coincidences that flow past our senses every day. Then there is the law of large numbers: with seven billion people having, say, 10 experiences a day of any kind, even million-to-one odds will happen 70,000 times a day. It would be a miracle if at least a few of those events did not get remembered, recounted, reported, and recorded somewhere, leaving us with a legacy of frequent infrequencies. Add to this the hindsight bias, in which we are impressed by the improbability of an event after-the-fact, but in science we should only be impressed by events whose occurrence was predicted in advance. And don’t forget the recall bias, in which we remember things that happened differently depending on what we now believe, retrieving from memory circumstances that favor the preferred interpretation of the event in question. Then there is the matter of what didn’t happen that would have been equally spine-tingling in emotional impact on that day, or some other important day, and in my case I can’t think of any because they didn’t happen. Finally, just because I can’t explain something doesn’t mean it is inexplicable by natural means. The argument from personal incredulity doesn’t hold water on the skeptical seas.
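For readers who like to see the arithmetic spelled out, here is a minimal back-of-the-envelope sketch of that law-of-large-numbers calculation. The population, events-per-day, and odds figures are the illustrative assumptions used in the paragraph above, not measured data:

```python
# Back-of-the-envelope sketch of the "law of large numbers" point above.
# All figures are illustrative assumptions from the text, not measurements.
population = 7_000_000_000          # roughly seven billion people
experiences_per_day = 10            # assume ~10 noticeable experiences per person per day
odds_against = 1_000_000            # a "million-to-one" event

total_events_per_day = population * experiences_per_day
expected_rare_events = total_events_per_day / odds_against

print(expected_rare_events)         # 70000.0 -> about 70,000 million-to-one events every day
```

On those assumptions, tens of thousands of "miraculous" coincidences are expected somewhere in the world every single day, which is the essay's point about why a few of them will always be remembered and retold.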



As for plausible explanations, one correspondent suggested “that the on-off switch contacts were probably heavily oxidized and that the radio itself was turned on and then stay, as you have inserted the new batteries. By heating and cooling and vibration or small metal parts in a typical 1970s transistor suddenly corrode and make contact. The timing of this process…well, that is just simply remarkable.” A physicist and engineer from Athens, Greece, thought perhaps after my “percussive” technique of smacking the radio on a hard surface, “A critical capacitor at the flow of the current, maybe at the power stage, or at the receiving stage, or at the final amplifier’s stage may had been left in a just quasi-stable soldering state and by the aid of the ambient EM fields may had reach a charging state (leave an empty capacitor for some days out in the yard and you’ll get it almost fully charged) that by the presence of the supply voltage at the soldering spot could have bridged the possible gap of the old or disturbed soldering contact and then sustained this conduction for some hours until by a simple sock may had fully discharged.”



I’m not sure what this means, exactly, because my attempts to resuscitate the radio happened months before, but I can well imagine some electrical glitch, a particle of dust, an EM (electromagnetic) fluctuation from the batteries—something in the natural world—caused the radio to come to life. Why it would happen at that particular moment, and be perfectly tuned to a station playing love songs, and be loud enough to hear out of the desk drawer, is what made the event stand out for us. Which reminds me of an account I read of witchcraft and magic among the Azande, a traditional society in the Southern Sudan in Africa, by the anthropologist E. E. Evans-Pritchard. He explained that the Zande employ natural causes when they are readily available. When an old granary collapses, for example, the Zande understand that termites are the probable cause. But when the granary crumples with people inside who are thereby injured, the Zande wonder, in Evans-Pritchard’s words, “why should these particular people have been sitting under this particular granary at the particular moment when it collapsed? That it should collapse is easily intelligible, but why should it have collapsed at the particular moment when these particular people were sitting beneath it?” That timing is explained by magic.



Deepak Chopra suggested something similar to us when he wrote “The radio coming on and off almost certainly has a mechanical explanation (a change in humidity, a speck of dust falling off a rusty wire, etc.). What is uncanny is the timing and emotional significance to those participating in the experience. The two of you falling in love is part of the synchronicity!” The Azande magical explanation is not too dissimilar to Deepak’s synchronicity, which he enumerated thusly: “(1) Synchronicity is a conspiracy of improbabilities (the events break the boundaries of statistical probability). (2) The improbable events conspiring to create the synchronistic event are acausally related to each other. (3) Synchronistic events are orchestrated in the non-local domain. … (9) Synchronistic events are messages from our non local self and are clues to the essential unity of our inner world of thoughts, feelings, memories, fantasies, desires, and intentions, and our outer world of space time events.” From this, and my many debates with Deepak, I take him to mean that consciousness exists separately from substance and can interact with it, the interactions governed by strong emotions like love, which can apparently act across space and time to cause effects meaningful to associated participants.



A psychologist named Michael Jawer would seem to agree in his explanation to me “that strong and underlying feelings are central to anomalous happenings.” His approach “doesn’t rely on barely-understood quantum woo,” he cautioned, “but assesses the way feelings work within our biology and physiology and the way emotions knit human beings together.” That certainly sounds reasonable, although how emotional energy could be transmitted from inside a body (or from the other side) into, say, a radio, is not clear. But I appreciated the close of his letter in which he quoted the late physicist John Wheeler: “In any field, find the strangest thing and then explore it.” 



That is precisely what the eminent Caltech physicist Kip Thorne did in the blockbuster film Interstellar, for which he was the scientific consultant. In order to save humanity from imminent extinction Matthew McConaughey’s character has to find a suitable planet by passing through a wormhole to another galaxy. In order to return, however, he must slingshot around a black hole, thereby causing a massive time dilation relative to his daughter back home on Earth (one hour near the black hole equals seven years on Earth), such that by the time he returns she is much older than he. In the interim, in order to get the humans off Earth he needs to transmit information to his now adult scientist daughter on quantum fluctuations from the singularity inside of the black hole. To do so he uses an extra-dimensional “tesseract” in which time appears as a spatial dimension that includes portals into the daughter’s childhood bedroom at a moment when (earlier in the film) she thought she experienced ghosts and poltergeists, which turned out to be her father from the future reaching back in time through extra-dimensions via gravitational waves (which he uses to send the critical data via Morse code dots and dashes on the second hand of the watch he left her). It’s a farfetched plot, but according to Thorne in his companion book to the film, it’s all grounded in natural law and forces.



This is another way of saying—as I have often—that there is no such thing as the supernatural or the paranormal. There is just the natural and the normal and mysteries we have yet to solve with natural and normal explanations. If it turns out, say, that Walter exists in a 5th dimensional tesseract and is using gravitational waves to turn on his old radio for his granddaughter, that would be fully explicable by physical laws and forces as we understand them. It would not be ESP or Psi or anything of the paranormal or supernatural sort; it would just be a deeper understanding of physics.



The same applies to God. As I’ve also said (in what I facetiously call Shermer’s Last Law), “any sufficiently advanced extra-terrestrial intelligence is indistinguishable from God.” By this I mean that if we ever did encounter an ETI, the chances are that they would be vastly ahead of us on a technological time scale, given the odds against another intelligent species evolving at precisely the same rate as us on another planet. At the rate of change today we have advanced more in the past century than in all previous centuries combined. Think of the progress in computing that has been made in just the last 50 years, and then imagine where we will be in, say, 50,000 years or 50 million years, and we get some sense of just how far advanced an ETI could be. The intelligent beings who created the wormhole in Kip Thorne’s fictional universe would almost assuredly seem to us as Gods if we did not understand the science and technologies they used. Imagine an ETI millions of years more advanced than us who could engineer the creation of planets and stars by manipulating clouds of interstellar gas, or even create new universes out of collapsing black holes. If that’s not God-like I don’t know what is, but it’s just advanced science and technology and nothing more.



Until such time as science can explain even the most spectacularly unlikely events, what should we do with such stories? Enjoy them. Appreciate their emotional significance. But we do not need to fill in the explanatory gaps with gods or any such preternatural forces. We can’t explain everything, and it’s always okay to say “I don’t know” and leave it at that until a natural explanation presents itself. Until then, revel in the mystery and drink in the unknown. It is where science and wonder meet.


May 1, 2018

You Kant Be Serious

Utilitarianism and its discontents

Scientific American (cover)


Would you cut off your own leg if it was the only way to save another person’s life? Would you torture someone if you thought it would result in information that would prevent a bomb from exploding and killing hundreds of people? Would you politically oppress a people for a limited time if it increased the overall well-being of the citizenry? If you answered in the affirmative to these questions, then you might be a utilitarian, the moral system founded by English philosopher Jeremy Bentham (1748–1832) and encapsulated in the principle of “the greatest good for the greatest number.”



Modern utilitarianism is instantiated in the famous trolley thought experiment: You are standing at a fork in a trolley track, next to a switch; a trolley car is about to kill five workers unless you throw the switch and divert the trolley down a side track, where it will kill one worker. Most people say that they would throw the switch—kill one to save five. The problem with utilitarianism is evidenced in another thought experiment: You are a physician with five dying patients and one healthy person in the waiting room. Would you harvest the organs of the one to save the five? If you answered yes, you might be a psychopathic murderer.



In a paper published online in December 2017 in the journal Psychological Review entitled “Beyond Sacrificial Harm,” University of Oxford scholars Guy Kahane, Jim A. C. Everett and their colleagues aim to rehabilitate the dark side of utilitarianism by separating its two dimensions: (1) “instrumental harm,” in which it is permissible to sacrifice the few to benefit the many, and (2) “impartial beneficence,” in which one would agree that “it is morally wrong to keep money that one doesn’t really need if one can donate it to causes that provide effective help to those who will benefit a great deal.” You can find out what type you are by answering the nine questions in the authors’ Oxford Utilitarianism Scale. I scored a 17 out of a possible 63, which was at the time described as meaning “You’re not very utilitarian at all. You Kant be convinced that maximising happiness is all that matters.”



The cheeky reference to Immanuel Kant sets up a counter to utilitarianism in the form of the German philosopher’s “categorical imperative,” in which we can determine right and wrong by asking if we would want to universalize an act. For example, lying in even limited cases is wrong because we would not want to universalize it into lying in all instances, which would destroy all personal relations and social contracts. In the physician scenario, we would not want to live in a world in which you could be plucked off the street at any moment and sacrificed in the name of someone’s idea of a collective good. Historically the application of a utilitarian calculus is what drove witch hunters to torch women they believed caused disease, plagues, crop failures and accidents—better to incinerate the few to protect the village. More recently, the 1:5 utilitarian ratio has too readily been ratcheted up to killing one million to save five million (Jews:“Aryan” Germans; Tutsi:Hutu), the justification of genocidal murderers.



Yet if you live in Syria and a band of ISIS thugs knocks on your door demanding to know if you are hiding any homosexuals they can murder in the mistaken belief that this fulfills the word of God—and you are—few moralists would object to your lying to save them.



In this case, both utilitarianism and Kantian ethics are trumped by natural-rights theory, which dictates that you are born with the right to life and liberty of both body and mind, rights that must not be violated, not even to serve the greater good or to fulfill a universal rule. This is why, in particular, we have a Bill of Rights to protect us from the tyranny of the majority and why, in general, moral progress has been the result of the idea that individual sentient beings have natural rights that override the moral claims of groups, tribes, races, nations and religions. Still, if we can decouple the sacrificial side of utilitarianism from its more beneficent prescriptions, moral progress may gain some momentum. Better still would be the inculcation into all our moral considerations of beneficence as an internal good rather than an ethical calculation. Be good for goodness’ sake.


April 29, 2018

Moral Philosophy and its Discontents


A response to Massimo Pigliucci’s critique of my Scientific American column on utilitarianism, deontology, and rights. (Illustration above by Izhar Cohen.)



My May 2018 column in Scientific American was titled “You Kant be Serious: Utilitarianism and its Discontents”, a cheeky nod to the German philosopher that I gleaned from the creators of the Oxford Utilitarianism Scale, whose official description for those of us who score low on the scale read: “You’re not very utilitarian at all. You Kant be convinced that maximizing happiness is all that matters.” The online version of my column carries the title (which I have no control over): “Does the Philosophy of ‘the Greatest Good for the Greatest Number’ Have Any Merit?” The answer by any reasonable person would be “of course it does!” And I’m a reasonable person, so what’s all the fuss about? Why was I jumped on by professional philosophers on social media, such as Justin Weinberg of the University of South Carolina on Twitter @DailyNousEditor, who fired a fusillade of tweets, starting with this broadside:




Disappointing that @sciam is contributing to our era's ever-frequent disrespect of expertise by publishing this ill-informed & confused @michaelshermer column on moral philosophy. (1/12) https://t.co/ETDQoHGF5s


— Daily Nous (@DailyNousEditor) April 17, 2018





I sent a private email to Justin inviting him to write a letter to the editor of Scientific American that I could then respond to—given that Twitter may not be the best medium for a discussion of important philosophical issues—but I never received a reply.



Social media responses were followed by a critical review from the noted scientist and philosopher (and fellow skeptic) Massimo Pigliucci (“Michael Shermer on utilitarianism, deontology, and ‘natural rights’” in his blog Footnotes to Plato) that was 2.5 times the length of the original column. Because I respect Massimo (he and I have been friends since the mid-1990s) and I always appreciate it when people take my writings seriously enough to respond, allow me to explain what I was trying to do in this column (and all my columns) in general, address Massimo’s specific comments in particular, and then consider the larger issues in these competing ethical systems on the moral landscape.



1. Limits



For each of my @SciAm columns I try to find an interesting and important topic, considered within a larger theoretical framework, sparked by some new survey, study, article, or book, that includes my opinion (these columns are in the “Opinion” department of Scientific American), and is written in a manner engaging enough to hold the attention of busy readers. I have one page, or about 710 words, to do this. The column was triggered by a new paper titled “Beyond Sacrificial Harm: A Two-Dimensional Model of Utilitarian Psychology”, by University of Oxford philosophers Guy Kahane, Jim A. C. Everett, Brian D. Earp, Lucius Caviola, Nadira S. Faber, Molly J. Crockett, and Julian Savulescu, published in December 2017 in the prestigious journal Psychological Review. It is a 35-page, 32,000-word in-depth, complex, scholarly article that was difficult to summarize in a few paragraphs and still meet my other column criteria. So the accusation that I am oversimplifying is necessarily true.



2. Greatest Good for Who?



Massimo objects to my use of this example of utilitarianism: “Would you politically oppress a people for a limited time if it increased the overall well-being of the citizenry?” He says this example, representing the principle of “the greatest good for the greatest number,” embodies just “one of many versions of utilitarianism, and it was immediately abandoned, by none other than John Stuart Mill,” adding that today philosophers distinguish between act utilitarianism, “where we must evaluate the morality of each act, a la Bentham,” and rule utilitarianism, “where we conform to rules that have shown overall to bring about the greatest amount of good, a la Mill.” Massimo then adds: “More generally, utilitarianism has a long history, and nowadays it is actually best thought of as a particular type of consequentialist philosophy. I could be wrong, but Shermer seems unaware of these distinctions.” In point of fact, this example comes straight from the Oxford Utilitarianism Scale (on a 7-point scale from Strongly Disagree to Strongly Agree):




If the only way to ensure the overall well-being and happiness of the people is through the use of political oppression for a short, limited period, then political oppression should be used.




So Massimo can take up with Kahane, et al. the question of whether or not this example (and the other questions on the scale) properly represents modern utilitarianism and its corresponding “greatest good” principle. And yes, I am familiar with act and rule utilitarianism, and since credentials came up a lot in these online responses, let me add that while I am not a professional philosopher, I am not philosophically naïve: I took two undergraduate philosophy courses (Intro and Ethics), studied the philosophy of science for my Ph.D. in the history of science, have taken several of the Teaching Company’s Great Courses in philosophy, read everything the highly regarded philosopher Daniel Dennett has written (and consider him both a friend and philosophical mentor), teach an honors course at Chapman University on “Evolution, Ethics, and Morality,” and wrote two related books: The Science of Good and Evil (2004) and The Moral Arc (2015).





3. Trolleyology



Massimo says that my use of the famous trolley problems as an example of utilitarian thinking “is just flat out wrong.” Again, he can take this up with Kahane, et al. as they state in the first sentence of the abstract of their paper:




Recent research has relied on trolley-type sacrificial moral dilemmas to study utilitarian versus nonutilitarian modes of moral decision-making.




And:




The main approach in this research has been to study responses to ‘sacrificial’ moral dilemmas (such as the famous ‘trolley’ scenario and its various permutations; see Foot, 1967) which present a choice between sacrificing one innocent person to save a greater number of people, or doing nothing and letting them die. In analyzing these responses and relating them to other variables, such as individual difference scores on personality measures or patterns of brain activity, researchers have tried to uncover the psychological and even neural underpinnings of the dispute between utilitarians and their opponents—such as defenders of deontological, rights-based views of the kind associated with Immanuel Kant.




What Kahane, et al. want to do is separate the sacrificial from the beneficial sides of utilitarianism, which is the focus of their paper, as they write in their discussion of trolleyology research:




Thus, although sacrificial dilemmas were an important first step in studying utilitarian decision-making, and have already yielded valuable findings about attitudes in favor of and against instrumental harm, they need to be supplemented with further tools that allow us to study utilitarian decision-making along both its dimensions….




Thus, one might argue that trolley dilemmas represent only one form of utilitarianism (sacrificial), or that utilitarians would be well advised to focus on the beneficial side of their philosophy, but it is inaccurate to simply assert that trolley problems have nothing to do with utilitarianism. But then, Massimo says he didn’t even read the Kahane, et al. paper (“so I will not comment on it”), which is too bad as that was the central focus of my column. More importantly, Kahane, et al. leave readers with an actionable conclusion that “drawing public attention to the negative side of utilitarianism—one upshot of the widespread identification of utilitarianism with sacrificial solutions to trolley dilemmas in current moral psychology—may do little for, and even get in the way of, promoting greater moral impartiality.” In the context of discussing Peter Singer’s efforts to expand the moral sphere to include other sentient animals, Kahane et al. note:




Singer’s session on effective altruism at Victoria University drew those who were excited by the idea of impartial beneficence—but also a group of outraged protestors repelled by instrumental harm. To the extent that the positive aim of utilitarianism has greater moral priority, utilitarians would be advised to downplay the negative component of their doctrine and may even find a surprisingly pliant audience in the religious population.




4. Utilitarian Psychology



Massimo is nearly apoplectic about this observation of mine in the column, about which he says I veer “from simplistic to nonsensical”:




Historically, the application of a utilitarian calculus is what drove witch hunters to torch women they believed caused disease, plagues, crop failures, and accidents—better to incinerate the few in order to protect the village. More recently, the 1:5 utilitarian ratio has too readily been ratcheted up to killing one million to save five million (Jews:Germans; Tutsi:Hutu), the justification of genocidal murderers.




In response Massimo writes:




What?? No, absolutely not. Setting aside the obvious observation that utilitarianism (the philosophy) did not exist until way after the Middle Ages, no, witch hunts were the result of fear, ignorance and superstition, not of a Bentham- or Mill-style calculus. And this is the first time I heard that Hitler or the Hutu of Rwanda had articulated a utilitarian rationale for their ghastly actions. Again, they were driven by fear, ignorance, superstition, and—in the case of Nazi Germany—a cynical calculation that power could be achieved and maintained in a nation marred by economic chaos by means of the time-tested stratagem of scapegoating.




From our point of view, witch hunters and genocidal dictators were ignorant and superstitious and acted out of fear, but they certainly didn’t think of themselves that way. To understand evil we must consider the point of view of the evil doers. What were these people thinking? Of course, it is easier to target the weak and defenseless, but why were they targeting anyone in the first place? The answer may be found in what I called in The Moral Arc “the witch theory of causality”:




It is evident that most of what we think of as our medieval ancestors’ barbaric practices were based on mistaken beliefs about how the laws of nature actually operate. If you—and everyone around you including ecclesiastical and political authorities—truly believe that witches cause disease, crop failures, sickness, catastrophes, and accidents, then it is not only a rational act to burn witches, it is a moral duty.




Referencing the trolley problem, I then note how easy it is to get modern people to throw a switch to kill one in order to save five, and therefore…




We should not be surprised, then, that our medieval ancestors performed the same kind of moral calculation in the case of witches. Medieval witch-burners torched women primarily out of a utilitarian calculus—better to kill the few to save the many. Other motives were present as well, of course, including scapegoating, the settling of personal scores, revenge against enemies, property confiscation, the elimination of marginalized and powerless people, and misogyny and gender politics. But these were secondary incentives grafted on to a system already in place that was based on a faulty understanding of causality.




My focus in that chapter was on the importance of science and reason to bending the moral arc by debunking incorrect theories of causality (e.g., witches), but here let me clarify to anyone who thinks I can’t even get my centuries straight that I’m not arguing Torquemada sat down with Pope Sixtus IV to compute the greater good sacrifice of 10,000 Jews in order to save 50,000 Catholics; instead I am aiming to understand the underlying psychological forces behind witch hunts and genocides, noting that in addition to the many other motives I listed (human behavior is almost never mono-causal), the utilitarian psychology of sacrificing the few to benefit the many is a major driver. Hitler and many of his German followers appear to really have believed the “stab in the back” conspiracy theory for why Germany lost the First World War: Jews, Marxists, Bolsheviks, and other “November Criminals” defeated the country from within. Yes, anti-Semitism had been rampant throughout Europe for centuries (note Martin Luther’s 1543 book On the Jews and Their Lies), but from 1933 to 1945 that prejudice was put into service in the utilitarian calculus that sacrificing the Jews to save the Germans would serve the greatest good for the greatest number. In my chapter on evil in The Moral Arc, here is how I considered competing ethical systems in the context of moral conflicts:




Moral conflicts may also arise between prescriptions (what we ought to do) that bring rewards for action (pride from within, praise from without) and proscriptions (what we ought not to do) that bring punishments for violations (shame from within, shunning from without). (Eight of the Ten Commandments in the Decalogue, for example, are proscriptions.) As in the limbic system with its neural networks for emotions, approach-avoidance moral conflicts have neural circuitry called the behavioral activation system (BAS) and the behavioral inhibition system (BIS) that drive an organism forward or back, as in the case of the rat vacillating between approaching and avoiding the goal region…. These activation and inhibition systems can be measured in experimental settings in which subjects are presented with different scenarios in which they then offer their moral judgment (giving money to a homeless person as prescriptive vs. wearing a sexually suggestive dress to a funeral as proscriptive).




So, for example, under such conditions researchers have found that the BAS is affiliated with prescriptions but not proscriptions, whereas the BIS is affiliated with proscriptions but not prescriptions. I then demonstrate how certain emotions, such as disgust, can drive an organism away from a noxious stimulus because in the environment of our evolutionary ancestry noxiousness was an informational cue that a stimulus could kill you through poisoning (tainted food), or disease (through fecal matter, vomit, and other bodily effluvia). By contrast, anger drives an organism toward an offensive stimulus, such as another organism that attacks it. In an approach-avoidance system in a human social context, if you believe that Jews (or blacks, natives, homosexuals, Tutsis, etc.) are bacilli poisoning your tribe or nation, you naturally avoid them with disgust as you would any noxious stimulus; by contrast, if you believe that Jews (or blacks, natives, homosexuals, Tutsis, etc.) are dangerous enemies attacking your tribe or nation, you naturally approach them with anger as you would any assaulter.



This approach-avoidance conflict model in moral dilemmas shines a different light on such classic philosophical dilemmas as pitting a deontological (duty- or rule-bound) principle, such as the prohibition against murder, against a utilitarian (greatest good) principle, such as the trolley experiment in which most people agree that it is acceptable to sacrifice one person in order to save five. Which is right? Thou shalt not kill, or thou shalt kill one to save five? Such conflicts cause much cognitive dissonance and vacillation—as in the approach-avoidance scenario—and moral philosophers have many work-arounds, as in the distinction between act utilitarianism and rule utilitarianism. The latter is further refined into Weak Rule Utilitarianism (WRU), which degrades into act utilitarianism when enough exceptions to the rule are made (it’s wrong to lie, except…) and the rules acquire sub-rules and sub-sub-rules, and Strong Rule Utilitarianism (SRU), which asserts that moral rules should be obeyed at all places and times, unless of course it’s a Nazi at the door demanding to know the whereabouts of Anne Frank.



To this end Massimo cites a 2010 paper by Helga Varden in the Journal of Social Philosophy, aptly titled “Kant and Lying to the Murderer at the Door…One More Time: Kant’s Legal Philosophy and Lies to Murderers and Nazis”, in which she argues that the Nazis “did not represent a public authority on Kant’s view and consequently there is no duty to abstain from lying to Nazis.” There is much more to her analysis of Kant, but it seems to me that in this example lying to Nazis is both a utilitarian/consequentialist decision, because not lying would result in the death of an innocent, and a rule/rights decision that explains why we should care about the innocent in the first place: because of, say, Kant’s rule about never treating people merely as a means to an end but always as ends in themselves, or because all people have a right to their own life.



5. Rights



This brings me to the final point on rights, which Massimo calls “true nonsense”, quoting Bentham’s famous assessment that rights are nonsense and natural rights “nonsense on stilts.” Massimo is here talking about finding rights in nature, but he misses the foundation of natural rights in his own throw-away line, “Yeah, we all prefer to be alive rather than dead, other things being equal.” And we all prefer to be free rather than enslaved, rich rather than poor, healthy rather than sick, safe rather than endangered, happy rather than sad, and the rest that comes with being alive. Here is how I open Chapter 1 of The Moral Arc:




Morality involves how we think and act toward other moral agents in terms of whether our thoughts and actions are right or wrong with regard to their survival and flourishing. By survival I mean the instinct to live, and by flourishing I mean having adequate sustenance, safety, shelter, bonding and social relations for physical and mental health. Any organism subject to natural selection—which includes all organisms on this planet and most likely on any other planet as well—will by necessity have this drive to survive and flourish, for if they didn’t they would not live long enough to reproduce and would therefore no longer be subject to natural selection.




Thus, I argue, the survival and flourishing of sentient beings is my moral starting point, and it is grounded in principles that are themselves based on nature’s laws and on human nature—principles that can be tested both in the laboratory and in the real world. I emphasize the individual because it is individual sentient beings who perceive, emote, respond, love, feel, and suffer—not populations, races, genders, groups, or nations. In fact, the Rights Revolutions were grounded in the freedom and autonomy of persons, not groups. Individuals vote, not races or genders. Individuals want to be treated equally, not races. Rights protect individuals, not groups; in fact, most rights (such as those enumerated in the Bill of Rights of the U.S. Constitution) protect individuals from being discriminated against as members of a group. The singular and separate organism is to biology and society what the atom is to physics—a fundamental unit of nature.



Although moral truths are not measurable in the same sense as physical phenomena—such as the mass of a particle or the gravitational force of a star—there are abstract Platonic truths that most scientists agree exist, such as those in mathematics, a point made by the Harvard psychologist Steven Pinker in a 2008 article in the New York Times magazine (“The Moral Instinct”):




[W]e are born with a rudimentary concept of number, but as soon as we build on it with formal mathematical reasoning, the nature of mathematical reality forces us to discover some truths and not others. (No one who understands the concept of two, the concept of four and the concept of addition can come to any conclusion but that 2 + 2 = 4.) Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others.




Take cooperation. Over billions of years of natural history and thousands of years of human history, there has been an increasing tendency toward the playing of cooperative “nonzero” games between organisms. This tendency has allowed more nonzero gamers to survive. Thus, natural selection favored those who cooperated by playing nonzero games, thereby passing on their genes for cooperative behavior. In time, reasoning moral agents would conclude that both should cooperate toward mutual benefit rather than compete to either a zero-sum outcome in which one gains and the other loses, or both lose in a defection cascade. Pinker draws out the implications for moral realism:




If I appeal to you to do something that affects me then I can’t do it in a way that privileges my interests over yours if I want you to take me seriously. I have to state my case in a way that would force me to treat you in kind. I can’t act as if my interests are special just because I’m me and you’re not, any more than I can persuade you that the spot I am standing on is a special place in the universe just because I happen to be standing on it.
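
To make the “nonzero” logic of the cooperation passage above concrete, here is a toy payoff matrix in Python. It is a minimal sketch with invented payoff values (a standard prisoner's-dilemma-style setup), not anything drawn from Pinker's text or from the column itself:

# Toy nonzero-sum game: payoffs are (player A, player B); values invented for illustration.
# In a zero-sum game one player's gain is the other's loss; in a nonzero-sum game
# mutual cooperation can leave both players better off than mutual defection.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),   # both gain: a positive-sum outcome
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),   # the defection cascade: both lose
}

for (a, b), (pay_a, pay_b) in payoffs.items():
    print(f"A {a:9} / B {b:9} -> joint payoff {pay_a + pay_b}")
# The joint payoff is highest (6) when both cooperate and lowest (2) when both defect,
# which is the sense in which selection can favor strategies that sustain cooperation.
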




From here we can build an ethical system based on human nature and natural rights, by which I mean rights that are universal and inalienable and thus not contingent only upon the laws and customs of a particular culture or government. It is in this sense that I am a moral realist. I believe that there are real moral values. Abraham Lincoln, for example, was a moral realist when he famously said:




If slavery is not wrong, then nothing is wrong.




Since I wrote a book about the Holocaust, Denying History, I would add:




If the Holocaust is not wrong, then nothing is wrong.




How do we know that slavery and the Holocaust are wrong—really wrong? Really as in reality, as in the nature of things. Since I’m grounding morality in science let’s start with the most basic of sciences, the physical sciences. It is my hypothesis that in the same way that Galileo and Newton discovered physical laws and principles about the natural world that really are out there, so too have social scientists discovered moral laws and principles about human nature and society that really do exist. Where? In our nature.



Is there anyone (other than slave holders and Nazis) who would argue that slavery and the Holocaust are not really wrong, absolutely wrong, objectively wrong, naturally wrong?

Published on April 29, 2018 18:02

April 1, 2018

Silent No More

The rise of the atheists

Scientific American (cover)


In recent years much has been written about the rise of the “nones”—people who check the box for “none” on surveys of religious affiliation. A 2013 Harris Poll of 2,250 American adults, for example, found that 23 percent of all Americans have forsaken religion altogether. A 2015 Pew Research Center poll reported that 34 to 36 percent of millennials (those born after 1980) are nones and corroborated the 23 percent figure, adding that this was a dramatic increase from 2007, when only 16 percent of Americans said they were affiliated with no religion. In raw numbers, this translates to an increase from 36.6 million to 55.8 million nones. Though lagging far behind the 71 percent of Americans who identified as Christian in the Pew poll, they are still a significant voting bloc, far larger than Jews (4.7 million), Muslims (2.2 million) and Buddhists (1.7 million) combined (8.6 million) and comparable to politically powerful Christian sects such as Evangelical (25.4 percent) and Catholic (20.8 percent).
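
As a rough check on how those raw numbers follow from the percentages just quoted, here is a back-of-the-envelope calculation in Python; the implied adult-population figures are derived from the numbers above, not taken from the polls themselves:

# Back-of-the-envelope check of the "nones" figures quoted above.
# The implied adult populations are inferred from the stated counts and shares;
# they are approximations for illustration, not census data.
nones_2007, share_2007 = 36.6e6, 0.16   # 36.6 million nones at 16 percent
nones_2015, share_2015 = 55.8e6, 0.23   # 55.8 million nones at 23 percent

print(f"Implied adult population, 2007: {nones_2007 / share_2007 / 1e6:.0f} million")
print(f"Implied adult population, 2015: {nones_2015 / share_2015 / 1e6:.0f} million")
print(f"Increase in nones: {(nones_2015 - nones_2007) / 1e6:.1f} million")
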



This shift away from the dominance of any one religion is good for a secular society whose government is structured to discourage catch basins of power from building up and spilling over into people’s private lives. But it is important to note that these nones are not necessarily atheists. Many have moved from mainstream religions into New Age spiritual movements, as evidenced in a 2017 Pew poll that found an increase from 19 percent in 2012 to 27 percent in 2017 of those who reported being “spiritual but not religious.” Among this cohort, only 37 percent described their religious identity as atheist, agnostic or “nothing in particular.”



Even among atheists and agnostics, belief in things usually associated with religious faith can worm its way through fissures in the materialist dam. A 2014 survey conducted by the Austin Institute for the Study of Family and Culture on 15,738 Americans, for example, found that of the 13.2 percent who called themselves atheist or agnostic, 32 percent answered in the affirmative to the question “Do you think there is life, or some sort of conscious existence, after death?” Huh? Even more incongruent, 6 percent of these atheists and agnostics also said that they believed in the bodily resurrection of the dead. You know, like Jesus.



What’s going on here? The surveys didn’t ask, but I strongly suspect a lot of these nonbelievers adopt either New Age notions of the continuation of consciousness without brains via some kind of “morphic resonance” or quantum field (or some such) or are holding out hope that science will soon master cloning, cryonics, mind uploading or the transhumanist ability to morph us into cyber-human hybrids. As I explicate in my book Heavens on Earth, I’m skeptical of all these ideas, but I understand the pull. And that gravitational well will grow ever deeper as science progresses in these areas—and especially if the number of atheists increases.



In a paper in the January 2018 issue of the journal Social Psychological and Personality Science entitled “How Many Atheists Are There?”, Will M. Gervais and Maxine B. Najle, both psychologists at the University of Kentucky, contend that there may be far more atheists than pollsters report because “social pressures favoring religiosity, coupled with stigma against religious disbelief…, might cause people who privately disbelieve in God to nonetheless self-present as believers, even in anonymous questionnaires.”



To work around this problem of self-reported data, the psychologists employed what is called an unmatched count technique, which has been previously validated for estimating the size of other underreported cohorts, such as the LGBTQ community. They contracted with YouGov to conduct two surveys of 2,000 American adults each, for a total of 4,000 subjects, asking participants to indicate how many innocuous versus sensitive statements on a list were true for them. The researchers then applied a Bayesian probability estimation to compare their results with similar Gallup and Pew polls of 2,000 American adults each. From this analysis, they estimated, with 93 percent certainty, that somewhere between 17 and 35 percent of Americans are atheists, with a “most credible indirect estimate” of 26 percent.
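
For readers curious how an unmatched count (or “list experiment”) design yields an estimate without any respondent directly admitting disbelief, here is a minimal sketch in Python. The group sizes and response counts are invented for illustration; they are not Gervais and Najle's data:

# Unmatched count technique (list experiment), illustrative numbers only.
# A control group sees K innocuous statements; a treatment group sees the same K
# statements plus one sensitive item (e.g., "I do not believe in God").
# Respondents report only HOW MANY statements are true for them, never which ones,
# so no individual ever reveals the sensitive belief directly.
control_counts   = [2, 3, 1, 2, 3, 2, 1, 3, 2, 2]   # hypothetical responses
treatment_counts = [3, 3, 2, 2, 4, 2, 2, 3, 2, 1]   # hypothetical responses

mean_control   = sum(control_counts) / len(control_counts)
mean_treatment = sum(treatment_counts) / len(treatment_counts)

# The difference in mean counts estimates the prevalence of the sensitive item.
estimated_prevalence = mean_treatment - mean_control
print(f"Estimated share affirming the sensitive item: {estimated_prevalence:.0%}")

Gervais and Najle layered a Bayesian model on top of estimates of this kind to arrive at their credible range; the sketch above shows only the core difference-in-means idea.
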



If true, this means that there are more than 64 million American atheists, a staggering number that no politician can afford to ignore. Moreover, if these trends continue, we should be thinking about the deeper implications for how people will find meaning as the traditional source of it wanes in influence. And we should continue working on grounding our morals and values on viable secular sources such as reason and science.



Published on April 01, 2018 12:00

March 1, 2018

Factiness

Are we living in a post-truth world?

Scientific American (cover)


In 2005 the American Dialect Society’s word of the year was “truthiness,” popularized by Stephen Colbert on his news show satire The Colbert Report, meaning “the truth we want to exist.” In 2016 the Oxford Dictionaries nominated as its word of the year “post-truth,” which it characterized as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” In 2017 “fake news” increased in usage by 365 percent, earning the top spot on the “word of the year shortlist” of the Collins English Dictionary, which defined it as “false, often sensational, information disseminated under the guise of news reporting.”



Are we living in a post-truth world of truthiness, fake news and alternative facts? Has all the progress we have made since the scientific revolution in understanding the world and ourselves been obliterated by a fusillade of social media postings and tweets? No. As Harvard University psychologist Steven Pinker observes in his resplendent new book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (Viking, 2018), “mendacity, truth-shading, conspiracy theories, extraordinary popular delusions, and the madness of crowds are as old as our species, but so is the conviction that some ideas are right and others are wrong.”



Even as pundits pronounced the end of veracity and politicians played loose with the truth, the competitive marketplace of ideas stepped up with a new tool of the Internet age: real-time fact-checking. As politicos spin-doctored reality in speeches, factcheckers at Snopes.com, FactCheck.org, and OpenSecrets.org rated them on their verisimilitude, with PolitiFact.com waggishly ranking statements as True, Mostly True, Half True, Mostly False, False, and Pants on Fire. Political fact-checking has even become clickbait (runner-up for the Oxford Dictionaries’ 2014 word of the year), as PolitiFact’s editor Angie Drobnic Holan explained in a 2015 article: “Journalists regularly tell me their media organizations have started highlighting fact-checking in their reporting because so many people click on fact-checking stories after a debate or high-profile news event.”



Far from lurching backward, Pinker notes, today’s fact-checking ethic “would have served us well in earlier decades when false rumors regularly set off pogroms, riots, lynchings, and wars (including the Spanish-American War in 1898, the escalation of the Vietnam War in 1964, the Iraq invasion of 2003, and many others).” And contrary to our medieval ancestors, he says, “few influential people today believe in werewolves, unicorns, witches, alchemy, astrology, bloodletting, miasmas, animal sacrifice, the divine right of kings, or supernatural omens in rainbows and eclipses.”



Ours is called the Age of Science for a reason, and that reason is reason itself, which in recent decades has come under fire by cognitive psychologists and behavioral economists who assert that humans are irrational by nature and by postmodernists who aver that reason is a hegemonic weapon of patriarchal oppression. Balderdash! Call it “factiness,” the quality of seeming to be factual when it is not. All such declarations are self-refuting, inasmuch as “if humans were incapable of rationality, we could never have discovered the ways in which they were irrational, because we would have no benchmark of rationality against which to assess human judgment, and no way to carry out the assessment,” Pinker explains. “The human brain is capable of reason, given the right circumstances; the problem is to identify those circumstances and put them more firmly in place.”



Despite the backfire effect, in which people double down on their core beliefs when confronted with contrary facts to reduce cognitive dissonance, an “affective tipping point” may be reached when the counterevidence is overwhelming and especially when the contrary belief becomes accepted by others in one’s tribe. This process is helped along by “debiasing” programs in which people are introduced to the numerous cognitive biases that plague our species, such as the confirmation bias and the availability heuristic, and the many ways not to argue: appeals to authority, circular reasoning, ad hominem and especially ad Hitlerem. Teaching students to think critically about issues by having them discuss and debate all sides, especially by articulating their own and another’s position, is essential, as is asking, “What would it take for you to change your mind?” This is an effective thinking tool employed by Portland State University philosopher Peter Boghossian.



“However long it takes,” Pinker concludes, “we must not let the existence of cognitive and emotional biases or the spasms of irrationality in the political arena discourage us from the Enlightenment ideal of relentlessly pursuing reason and truth.” That’s a fact.



Published on March 01, 2018 12:00


February 28, 2018

Realizing Rawls’ Just Society

Though declinists in both parties may bemoan our miserable lives, Americans are healthier, wealthier, safer and living longer than ever.



It’s Better Than It Looks: Reasons for Optimism in an Age of Fear (book cover)


In his 1971 book A Theory of Justice, the Harvard philosopher John Rawls argued that in the “original position” of a society we are all shrouded in a “veil of ignorance” of how we will be born—male or female, black or white, rich or poor, healthy or sick, slave or free—so society should be structured in such a way that laws do not privilege any one group because we do not know which category we will ultimately find ourselves in.



Writing during a time when civil unrest over centuries of injustice was spilling out into the streets in marches and riots, Rawls’ work was as much prescriptive as it was descriptive. But 45 years later, at a 2016 speech in Athens, Greece, President Barack Obama affirmed that a Rawlsian society was becoming a reality: “If you had to choose a moment in history to be born, and you did not know ahead of time who you would be—you didn’t know whether you were going to be born into a wealthy family or a poor family, what country you’d be born in, whether you were going to be a man or a woman—if you had to choose blindly what moment you’d want to be born you’d choose now.” As Obama explained to a German audience earlier that year: “We’re fortunate to be living in the most peaceful, most prosperous, most progressive era in human history,” adding “that it’s been decades since the last war between major powers. More people live in democracies. We’re wealthier and healthier and better educated, with a global economy that has lifted up more than a billion people from extreme poverty.”



Data supporting this observation is now readily available through such sites as Hans Rosling’s Gapminder.org, Max Roser’s ourworldindata.org, and Marian Tupy’s humanprogress.org, and in books such as Steven Pinker’s Enlightenment Now (2018), Johan Norberg’s Progress (2016), my own The Moral Arc (2015), Peter Diamandis’ and Steven Kotler’s Abundance (2012), Matt Ridley’s The Rational Optimist (2011), and others. Apparently it’s not enough, as pessimism is as prominent as it ever was, if not more so, during the recent uptick of identity politics and economic nationalism.



Thus, Gregg Easterbrook’s masterful and comprehensive exposition on why we should be optimistic in an age of pessimism, It’s Better Than it Looks, comes at a propitious moment. Easterbrook backs his sanguine perspective with copious data, and at the same time he demonstrates how a pessimistic perspective can not only lead people to despair but also nudge voters to elect a man who growled that our economy “is always bad, down, down, down” even as it was climbing up, up, up out of the gravity well of the 2008/2009 recession. Since emotions trump information, apocalyptic political rhetoric crowds out data dumps of positive trends in the spaces of our mind’s decision tree. On average, Easterbrook shows in this rich narrative packed with statistics, while the declinists were bemoaning our miserable lives during the last election, “at no juncture in American history were people better off than they were in 2016: living standards, per-capita income, buying power, health, safety, liberty, and longevity were at their highest, while women, minorities, and gays were free in ways they’d never been before. There had been no juncture in history at which the typical member of the global population was better off either.”



A potent counter to today’s unwarranted pessimism, the author claims, is not just the evidence that can be seen (rising employment, wages, wealth, health, lifespans and so on) but what has not been seen. Granaries, for instance, are not empty: The many predictions made since the 1960s that billions would die of starvation have not come true. “Instead, by 2015, the United Nations reported global malnutrition had declined to the lowest level in history. Nearly all malnutrition that persists is caused by distribution failures or by government corruption, not by lack of supply.” In fact, obesity is rapidly becoming a global problem.



Similarly, even though there are occasional panics, “resources have not been depleted despite the incredible proliferation of people, vehicles, aircraft, and construction.” Instead of oil and gas running out by the year 2000, as some in the 1970s predicted, both “are in worldwide oversupply” along with minerals and ores. Likewise, there are no runaway plagues. “Unstoppable outbreaks of super-viruses and mutations were said to menace a growing world; instead, nearly all disease rates are in decline, including the rates of most cancers.” Western nations are also no longer choking on pollution. Smog in major cities like Los Angeles, for example, is in free fall as measured by the number of air-quality alerts. Sulfur dioxide, the main source of acid rain, is down by 81% in the U.S. since 1990, and forests in Appalachia “are in the best condition they have been in since the eighteenth century.”



In America as well as the rest of the world, crime and violence are getting less, not more, frequent, Mr. Easterbrook points out. Homicide rates have plummeted since their post-World War II high in 1993, while “the frequency and intensity of combat have gone down worldwide.” And despite worries about rising authoritarianism, the dictators aren’t winning. In the 1980s dictators ravaged countries on nearly every continent; today, the Kim family’s lock on North Korea stands out as an aberration.



Easterbrook’s aim in this important book is to prove that life is more auspicious than most people believe, to show why life did not deteriorate as predicted, to identify what we’ve been doing right so we can do more of it, and to consider what we can do about the still-pressing problems we face, most notably the “impossible” challenges of inequality and climate change, along with other problems that social commentators kvetch about: marriage, social security, health care, artificial intelligence, poverty, and nuclear weapons, all soluble if we make the effort. Easterbrook reminds us that while it is easy to see (and remember) bad things that happen, it is impossible to see what hasn’t happened (as predicted in previous decades): resources are not exhausted, there are no runaway plagues, Western nations are not choking on pollution, the economy keeps functioning, crime and war are not getting worse, and dictators (what few are left) are not winning.



This salubrious turn of events in human history was the result of human action and problem solving, not of historical tides on which we helplessly ride. “History is not deterministic, teleological, or controlled in any manner,” Easterbrook concludes. Instead, each of the many areas of progress that he documents was the result of individuals and organizations—both private and public—deciding to solve particular problems, as President Franklin Roosevelt prophesied in 1938 when the world was much darker than it is today: “We observe a world of great opportunities disguised as insoluble problems.” It is a fitting quote, Easterbrook notes with some irony, since it was early 20th century progressives who were the optimists who envisioned an America the Beautiful in which “alabaster cities gleam undimmed by human tears.” Today’s progressives take an opposite tack of gloomy pessimism, matched by the Right’s nostalgia for the “Good Ole Days”—you know, when life was Hobbesian: nasty, brutish, and short. Easterbrook wants to make optimism intellectually respectable again, and he has done so with cogent arguments and bountiful numbers, showing that “history has an arrow, and the arrow of history points forever upward.”

Published on February 28, 2018 12:00

February 23, 2018

Reason (and Science) for Hope


A review of Steven Pinker’s Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (Viking, 2018, ISBN 978-0525427575). This review appeared in Science in February 2018.





How much better can you imagine the world being than it is right now? How much worse can you imagine the world being than it is right now?



For most of us, it is easier to imagine the world going to hell in a handbasket than it is to picture some rosy future, which explains why there are far more dystopian and apocalyptic books and films than there are utopian. We can readily conjure up such incremental improvements as increased Internet bandwidth, improved automobile navigation systems, or another year added to our average lifespan. But what really gets imaginations roiling are the images of nuclear Armageddon, AI robots run amok, or terrorists mowing down pedestrians in trucks.



The reason for this asymmetry is an evolved feature of human cognition called the negativity bias, explored in depth by the Harvard psychologist and linguist Steven Pinker in his magisterial new book Enlightenment Now, an estimable sequel to his The Better Angels of Our Nature, which Bill Gates called “the most inspiring book I’ve ever read.” This is not hyperbole. Enlightenment Now is the most uplifting work of science I’ve ever read.



Pinker begins with the Enlightenment because the scientists and scholars who drove that movement took the methods of reason and science developed in the Scientific Revolution and applied them to solving problems in all fields of knowledge: physical, biological, and social. “Dare to know” was Immanuel Kant’s oft-quoted one-line summary of the age he helped launch, and with knowledge comes power over nature, starting with the Second Law of Thermodynamics and entropy, which Pinker fingers as the cause of our natural-born pessimism. In the world in which our ancestors evolved the cognition and emotions we inherited, entropy dictated that there were more ways for things to go bad than good, so our psychology remains tuned to a world that was more dangerous than the one we live in today. Your life depends on all systems working, so the good news of experiencing another pain-free day goes unnoticed, whereas painful catastrophic failures can spell the end of your existence, so we focus on the latter more than the former.



“The Law of Entropy is widely acknowledged in everyday life in sayings such as ‘Things fall apart,’ ‘Rust never sleeps,’ ‘Shit happens,’ ‘Whatever can go wrong will go wrong,’” Pinker writes (p. 16). But instead of interpreting misfortunes like accidents, plagues, famine, and disease as the result of angry gods, vengeful demons, or bewitching women like our medieval ancestors did, we know that they’re just entropy taking its course. We don’t need an explanation for poverty, for example, because that is what you get if you do nothing to manipulate your environment to produce wealth. The application of knowledge to solving problems of survival that result from entropy is what propelled us to unimaginable levels of progress, which Pinker documents in 75 charts and graphs and thousands of statistics in 14 chapters covering life, health, sustenance, wealth, inequality, the environment, peace, safety, terrorism, democracy, equal rights, knowledge, quality of life, and happiness.
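
Pinker's entropy point, that failure states vastly outnumber success states, can be illustrated with a toy count; the ten-part system below is a made-up example, not anything from the book:

# Toy illustration of the entropy argument: when many parts must all work,
# the configurations in which something is broken vastly outnumber the one
# configuration in which everything works. The 10-part system is invented.
n_parts = 10
total_states = 2 ** n_parts          # each part either works or fails: 1,024 states
all_working  = 1                     # exactly one state has every part working
broken_states = total_states - all_working

print(f"States with something broken: {broken_states} of {total_states}")
# 1,023 of 1,024 -- which is why "things fall apart" needs no special explanation,
# while sustained order (health, wealth, a working machine) does.
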



On average, since the time of the Enlightenment more people in more places more of the time live longer, healthier, happier, and more meaningful lives filled with enriching works of art, music, literature, science, technology, and medicine, not to mention food, drink, clothes, cars, houses, international travel, and instant and free access to all the world’s knowledge. Exceptions are no counter to Pinker’s massive data set. Follow the trend lines, not the headlines. “War between countries is obsolescent, and war within countries is absent from five-sixths of the world’s surface,” (p. 322) Pinker notes in just one of dozens of areas in which life has improved. “Genocides, once common, have become rare. In most times and places, homicides kill far more people than wars, and homicide rates have been falling as well.” (p. 323). And we are safer than ever. “Over the course of the 20th century, Americans became 96 percent less likely to be killed in a car accident, 88 percent less likely to be mowed down on the sidewalk, 99 percent less likely to die in a plane crash, 59 percent less likely to fall to their deaths, 92 percent less likely to die by fire, 90 percent less likely to drown, 92 percent less likely to be asphyxiated, and 95 percent less likely to be killed on the job.” (p. 323)



Each area of progress has specific causes that Pinker carefully identifies, but he attributes the overall progressive picture to Enlightenment humanism, the worldview that encompasses science and reason. It is a heroic journey, Pinker concludes with rhetorical flair. “It is glorious. It is uplifting. It is even, I daresay, spiritual.” How? “We are born into a pitiless universe, facing steep odds against life-enabling order and in constant jeopardy of falling apart.” Nevertheless, our species has faced entropy like no other. “Yet human nature has also been blessed with resources that open a space for a kind of redemption. We are endowed with the power to combine ideas recursively, to have thoughts about our thoughts. We have an instinct for language, allowing us to share the fruits of our experience and ingenuity. We are deepened with the capacity for sympathy—for pity, imagination, compassion, commiseration.” (p. 452) This is our story, not vouchsafed to any one tribe but to all humanity, “to any sentient creature with the power of reason and the urge to persist in its being. For it requires only the convictions that life is better than death, health is better than sickness, abundance is better than want, freedom is better than coercion, happiness is better than suffering, and knowledge is better than superstition and ignorance.” (p. 453)



That’s a fact that offers us reason (and science) for hope.

Published on February 23, 2018 12:00

February 6, 2018

Finding Freedom in a Determined Universe

Foreword to Free Will Explained: How Science and Philosophy Converged to Produce a Beautiful Illusion, by Dan Barker (Sterling, 2018. ISBN 9781454927358).



Free Will Explained: How Science and Philosophy Converged to Produce a Beautiful Illusion (book cover)

In 1985, the physiologist Benjamin Libet conducted a series of experiments in which he took EEG readings of subjects’ brains while they performed a task that required them to press a button at random intervals whenever they felt like it during the session. Results: a few hundred milliseconds before the “decision” was consciously made by the subject, the brain’s motor cortex was activated.1 The neuroscientist John-Dylan Haynes employed fMRI brain scans in a 2011 study in which subjects inside the scanner were instructed to press one of two buttons whenever they wanted while observing a series of random letters. The subjects were told to verbally report which letter was on the screen when they “decided” to press the button. Results: the time between brain activation and conscious awareness of a “choice” was several seconds, and in some cases a full seven seconds.2


In these studies, and others, scientists measuring subjects’ brains knew which decision the subjects would make before the subjects themselves knew it! That is spooky, and if these results don’t bother you then you’re not thinking hard enough about them. What they imply is that we are not free to choose in the way we think we are. We feel free, but that’s just what our conscious self believes because it doesn’t know about the inputs feeding into it from below that have already made the choice. As the neuroscientist Sam Harris articulated it in his widely read book Free Will, “Our wills are simply not of our own making. Thoughts and intentions emerge from background causes of which we are unaware and over which we exert no conscious control. We do not have the freedom we think we have.”3


The principle of determinism holds that every event in the universe has a prior cause. If all effects have causes, including human thoughts and actions, then where in the causal chain does the act of choice enter? Even if there were a Mini-Me up there calling the shots, his little brain would have to be just as determined as my big brain, so for Mini-Me to have free will he would have to have a miniMini-Me inside of him pulling his strings, and miniMini-Me would himself need an itty-bitty miniMini-Me inside of his brain…ad infinitum. And if you believe in souls, this fails in the same way as Mini-Me does. A soul inside of you pulling your strings does not grant you freedom; it just means the soul is in control. And having a soul would mean that there’s a mini-soul inside the soul directing its actions, and so forth. It would seem that if determinism is true then we do not have free will. And yet…we feel free. We feel like we make choices.


Herein lies the problem, which helps explain the results of a 2009 survey of 3,226 philosophy professors and grad students asked to weigh in on 30 different subjects of concern in their field, from a priori knowledge, aesthetic value, and God to knowledge, mind, and moral realism.4 On the topic of “free will: compatibilism, libertarianism, or no free will,” the survey found the following results:





Accept or lean toward        %
Compatibilism             59.1
Other                     14.9
Libertarianism            13.7
No free will              12.2



By far, the majority of professional philosophers hold the position that free will and determinism are compatible.


Now, from a scientific perspective it shouldn’t matter how many people support one or another position. Only the quality of the evidence and arguments should matter. As Einstein said in response to a 1931 book skeptical of relativity theory titled A Hundred Authors Against Einstein, “Why one hundred? If I were wrong, one would have been enough.”5 But there is something revealing about these figures, and that is this: if the most qualified people to assess a problem are not in agreement on an answer—and the free-will/determinism problem has been around for thousands of years—it may be that it is an insoluble one. For example, is it really reasonable for the 12.2 percent of philosophers who deny free will to conclude that 59.1 percent of their professional colleagues are simply wrong in taking the compatibilist position? Isn’t it more likely that the issue comes down to language and what is meant by the terms “free will” and “determinism”?


This is what I strongly suspect, and in my book The Moral Arc I worked out how accepting a determined universe does not preclude retaining free will and moral responsibility through four compatibilist workarounds: (1) modular mind—even though a brain consists of many neural networks, in which one network may make a choice that another network finds out about later, they are all still operating in a single brain; (2) free won’t—vetoing competing impulses and choosing one thought or action over another; (3) choice as part of the causal net—wherein our volitional acts are part of the determined universe but are still our choices; (4) degrees of moral freedom—a range of choice options varying by degrees of complexity and the number of intervening variables.6


1. Modular Mind

If a subcortical region of my brain sends a signal to a cortical region of my brain to inform it of a preference, it is still my brain making the choice. It is still me—an autonomous volitional being—making choices, regardless of which part of me is actually making the decision. In his book Why Everyone (Else) Is a Hypocrite, the evolutionary psychologist Robert Kurzban shows how the brain evolved as a modular multitasking problem-solving organ—a Swiss Army Knife of practical tools in the old metaphor, or an app-loaded iPhone in Kurzban’s upgrade.7 There is no unified “self” that generates internally consistent and seamlessly coherent beliefs devoid of conflict. Instead, we are a collection of distinct but interacting modules that are often at odds with one another, and the decision-making process often happens unconsciously, so it seems as if choices are being made for us from we know not where. But the brain scan studies reveal the source and process of neural decision making, allowing us to build volition back into our brains. There is, after all, a Mini-Me—lots of them in fact—all of them with preferences, many of them in competition with one another, and all of them inside of a single brain.


2. Free Won’t

If we define free will as the power to do otherwise, a useful approach is to conceptualize “free will” as “free won’t”—i.e., as the power to veto one impulse in favor of another. Free won’t is the capacity to reject a particular action arising from the unconscious neural network, such that any decision to act one way instead of another way is an authentic choice. We have limitations, it’s true—we cannot just do anything we choose—but for the most part we have veto power; we have the capacity to say “no”; we can act this way instead of that way, and that is a real choice.


Support for this hypothesis can be found in a 2007 study conducted by the neuroscientists Marcel Brass and Patrick Haggard, who used fMRI brain scans while subjects made choices but could, at the last moment, change their minds and override their initial decision to press a button. When they chose to veto their initial decision, the scientists discovered that a specific area of the brain lit up—the left dorsal frontomedian cortex (dFMC)—an area that is normally active during decision-making behavior, especially during the intentional inhibition of a choice. Tellingly, there were no differences in the brain regions active in preparation of a voluntary action and those involved in inhibiting such actions. “Our results suggest that the human brain network for intentional action includes a control structure for self-initiated inhibition or withholding of intended actions.”8 That is free won’t.


Even Benjamin Libet himself—the instigator of this line of research that has led so many neuroscientists to abandon belief in free will—in the end came down in favor of human nature containing a volitional element: “The role of conscious free will would be, then, not to initiate a voluntary act, but rather to control whether the act takes place. We may view the unconscious initiatives for voluntary actions as ‘bubbling up’ in the brain. The conscious-will then selects which of these initiatives may go forward to an action or which ones to veto and abort.”9


What this research implies is that the neural architecture of choice can be modified by experience—in other words, training and practice—which means that in the long run, with better neuroscience and technology, we could not only teach people how to impede their maladaptive impulses to, say, eat unhealthy foods or take dangerous drugs, but also, in principle, we could train criminals to learn to veto their early and dangerous choices in order to make more socially acceptable decisions. And the choice is real in this way: regardless of which part of our brain makes our choices, they are still our choices, and even the apparently subconscious ones can be overridden by conscious effort.


3. Free Choice as Part of the Deterministic Causal Net

The immensity, intricacy, and ultimate unknowability of the causal net of the universe lead us to feel as if we are acting freely. But it is more than a feeling. As with our capacity for free won’t and for consciously choosing to override desires bubbling up from the unconscious, our choices are genuine neural processes. As Daniel Dennett argues in his book Freedom Evolves,10 our ancestors made behavioral decisions with real consequences for survival and reproduction in our evolutionary history, and this led to the evolution of a neural architecture for behavioral choice.11 Dennett argues that free will arises from a number of cognitive characteristics: a sense of being self-aware and of being aware that others are self-aware; symbolic language that allows us to communicate the fact that we are aware and self-aware; complex neural circuitry that allows for many behavioral options arising out of numerous neural impulses; a theory of mind that enables us to think about what others are thinking; and evolved moral emotions about right and wrong choices. And because we can communicate complex ideas through language, we have the power to reason about these moral choices. Out of this collection of cognitive characteristics comes free will, because we can, and do, weigh the consequences of the many courses of action available to us at any given moment.


4. Moral Degrees of Freedom

A final way to understand volition in a deterministic system is through the concept of “degrees of freedom”—the range of options available to an organism as a result of its complexity and the number of intervening variables acting upon it. Insects, for example, have very few degrees of freedom and are guided mostly by fixed instincts. Reptiles and birds have more degrees of freedom, enabled by modifiable instincts that are subject to environmental triggers during critical periods, with subsequent life experience allowing for learned responses to changing environments. Mammals, especially the great apes, have many more degrees of freedom thanks to considerable neural plasticity and learning. And humans have the most degrees of freedom of all, with our massive cortex and highly developed culture. Within our own species, some people—psychopaths, the brain damaged, the severely depressed, or the chemically addicted—have fewer degrees of freedom than others, and the law adjusts for their diminished capacity when assessing legal and moral culpability. But we still hold them accountable for their actions to the extent that they retain control over their choices, especially the capacity to, say, veto their criminal impulses.


The law also recognizes degrees of freedom by distinguishing between grades of murder, classified according to circumstance and intent. First-degree murder is the unlawful killing of one human being by another with malice aforethought—a killing that is both intentional and premeditated. Second-degree murder is also the unlawful killing of one human being by another with malice aforethought, but without premeditation—intentional, yet not planned in advance. Voluntary manslaughter is the unlawful killing of one human being by another without prior intent to kill, committed under circumstances that would “cause a reasonable person to become emotionally or mentally disturbed,” as in a crime of passion. Involuntary manslaughter is neither deliberate nor premeditated and is reserved for fatal accidents due to negligence, for example deaths caused by drunk driving. And, as we shall see below, there are murders involving mitigating factors such as tumors, PTSD, depression, and so on—factors that are assumed to restrict the autonomy of the accused and are thus taken into account during the sentencing phase of a trial. Finally, there are lawful killings, such as those that occur in war, in self-defense, or through capital punishment by the state. All of these ways in which a human life is cut short, lawfully or unlawfully, by individuals or by the state, take into account circumstances, intent, and moral degrees of freedom.12



Dan Barker’s poignantly argued and beautifully presented case for harmonic freedom and social free will gels well with my own and others’ case for how we can be free in a determined universe. Barker is one of our finest minds and most effective activists for atheism, and as a former preacher and theist he knows the necessity of making the case for moral freedom as well, if we are to rebut the calumny that atheists have no morality because we supposedly don’t believe in moral responsibility (the charge being that if we’re not free, we can’t be held responsible for our actions). Barker’s harmonic free will, so succinctly presented in this tightly reasoned book, does not deny the scientific reality of determinism, but it does engage us as humans in the social process of being responsible for our choices while interacting with other humans. In so doing, Barker has essentially eliminated the problem of free will and determinism. There is really nothing to resolve.


As an example, consider the following thought experiment. John Doe is an exceptionally moral person who is happily married to Jane Doe. The chances of John ever cheating on Jane are close to zero. But the odds are not zero, because John is human, so let’s say—for the sake of argument—that John has a one-night stand while on the road and Jane finds out. How does John account for his actions? Does he, following the standard deterministic explanation for human behavior, say something like this to Jane?



Honey, my will is simply not of my own making. My thoughts and intentions emerged from background causes of which I am unaware and over which I exert no conscious control. I do not have the freedom you think I have. I could not have done otherwise…


Could John even finish the thought before the stinging slap of Jane’s hand across his face terminated the rationalization?


If free will is the power to do otherwise, both John and Jane know that, of course, he could have done otherwise, and she reminds him that should those circumstances arise again he damn well better make the right choice…or else. That act of choosing to do the right thing…or the wrong thing…is what most of us mean by free will. In this sense I strongly suspect that deep down most determinists are compatibilists when it comes to actually living their lives instead of running thought experiments. And except for extreme cases of mental illness, chemical addiction, or brain damage, we all have this type of freedom. Our choices may be part of the determined causal net of the universe, but they are still our choices, and we should be held accountable for them.



References





Libet, Benjamin. 1985. “Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action.” Behavioral and Brain Sciences, 8, 529–66.


Haynes, J. D. 2011. “Decoding and Predicting Intentions.” Annals of the New York Academy of Sciences, 1224(1), 9–21.


Harris, Sam. 2012. Free Will. New York: Free Press, 5.


http://philpapers.org/surveys/results.pl


http://bit.ly/1Sh4YUC


Shermer, Michael. 2015. The Moral Arc. New York: Henry Holt.


Kurzban, Robert. 2012. Why Everyone (Else) Is a Hypocrite. Princeton: Princeton University Press.


Brass, Marcel and Patrick Haggard. 2007. “To Do or Not to Do: The Neural Signature of Self-Control.” The Journal of Neuroscience, 27(34), 9141–9145.


Libet, Benjamin. 1999. “Do We Have Free Will?” Journal of Consciousness Studies, 6(8–9), 47–57, at 54.


Dennett, Daniel. 2003. Freedom Evolves. New York: Viking.


For a discussion of how the brain operates to make economic decisions that feel “free” to the decision maker, see: Glimcher, P. W. 2003. Decisions, Uncertainty, and the Brain: The Science of Neuroeconomics. Cambridge: MIT Press. See also Steven Pinker’s excellent discussion on free will and determinism in: Pinker, Steven. 2002. The Blank Slate: The Modern Denial of Human Nature. New York: Viking, 175.


Scheb, John M. and John M. Scheb II. 2010. Criminal Law and Procedure. 7th edition. Cengage Learning.