Sam Harris's Blog

January 3, 2012

Everything and Nothing






Lawrence M. Krauss is a renowned cosmologist, popularizer of science, and director of the Origins Project at Arizona State University. He is the author of more than 300 scientific publications and 8 books, including the bestselling The Physics of Star Trek. His interests include the early universe, the nature of dark matter, general relativity, and neutrino astrophysics. He is also a friend and an advisor to my nonprofit foundation, Project Reason. Lawrence generously took time to answer a few questions about his new book, A Universe from Nothing.

***



One of the most common justifications for religious faith is the idea that the universe must have had a creator. You've just written a book alleging that a universe can arise from "nothing." What do you mean by "nothing" and how fully does your thesis contradict a belief in a Creator God?



Indeed, the question "Why is there something rather than nothing?", which forms the subtitle of the book, is often used by the faithful as an unassailable argument for the existence of God, because of the famous claim, "out of nothing, nothing comes." While the chief point of my book is to describe for the interested layperson the remarkable revolutions that have taken place in our understanding of the universe over the past 50 years—revolutions that should be celebrated as pinnacles of our intellectual experience—its second goal is to point out that this long-held theological claim is spurious. Modern science has made the something-from-nothing debate irrelevant. It has completely changed our conception of the very words "something" and "nothing." Empirical discoveries continue to tell us that the Universe is the way it is, whether we like it or not, and "something" and "nothing" are physical concepts, and therefore properly the domain of science, not theology or philosophy. (Indeed, religion and philosophy have added nothing to our understanding of these ideas in millennia.) I spend a great deal of time in the book detailing precisely how physics has changed our notions of "nothing." The old idea that "nothing" might be empty space, devoid of mass, energy, or anything material, has now been replaced by a boiling, bubbling brew of virtual particles, popping in and out of existence in times so short that we cannot detect them directly. I then go on to explain how other versions of "nothing"—beyond merely empty space—including the absence of space itself, and even the absence of physical laws, can morph into "something." Indeed, in modern parlance, "nothing" is most often unstable. Not only can something arise from nothing, but the laws of physics often require that it does.
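
A schematic way to see why such virtual particles evade direct detection (standard quantum mechanics, added here as an illustration rather than a formulation from the interview) is the energy–time uncertainty relation:

\Delta E \, \Delta t \gtrsim \frac{\hbar}{2}

A fluctuation carrying energy \Delta E can persist only for a time of order \hbar / \Delta E, so the more energetic the virtual pair, the more fleeting its existence.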



Now, having said this, my point in the book is not to suggest that modern science is incompatible with at least the Deistic notion that perhaps there is some purpose to the Universe (even though no such purpose is manifest on the basis of any of our current knowledge, and there is no logical connection between any possible "creator" and the personal God of the world's major religions, who cares about humanity's destiny). Rather, what I find remarkable is that the discoveries of modern particle physics and cosmology over the past half century not only allow for the possibility that the Universe arose from nothing, but make this possibility increasingly plausible. Everything we have measured about the universe is consistent with a universe that came from nothing (and it didn't have to turn out this way!), and each new piece of evidence makes this possibility more likely. Darwin demonstrated how the remarkable diversity of life on Earth, and its apparent design, which had been claimed as evidence for a caring God, could instead be arrived at by natural causes involving purely physical processes of mutation and natural selection. I want to show something similar about the Universe. We may never prove by science that a Creator is impossible, but, as Steven Weinberg has emphasized, science admits (and, for many of us, suggests) a universe in which one is not necessary.



I cannot hide my own intellectual bias here. As I state in the first sentence of the book, I have never been sympathetic to the notion that creation requires a creator. And like our late friend Christopher Hitchens, I find the prospect of living in a universe that was not created for my existence, in which my actions and thoughts need not bend to the whims of a creator, far more enriching and meaningful than the alternative. In that sense, I view myself as an anti-theist rather than an atheist.





I'd like to linger on the concept of "nothing" for a moment, because I find it interesting. You have described three gradations of nothing—empty space, the absence of space, and the absence of physical laws. It seems to me that this last condition—the absence of any laws that might have caused or constrained the emergence of matter and space-time—really is a case of "nothing" in the strictest sense. It strikes me as genuinely incomprehensible that anything—laws, energy, etc.—could spring out of it. I don't mean to suggest that conceivability is a guide to possibility—there may be many things that happen, or might happen, which we are not cognitively equipped to understand. But the emergence of something from nothing (in this final sense) does strike me as a frank violation of the categories of human thought (akin to asserting that the universe is a round square), or the mere declaration of a miracle. Is there any physical reason to believe that such nothing was ever the case? Might it not be easier to think about the laws of physics as having always existed?



That's a very good question, and it strikes at the heart of one of the things I wanted to stress most in the book. A frank violation of the categories of human thought is precisely what the Universe does all the time. Quantum mechanics, which governs the behavior of our Universe on very small scales, is full of such craziness and defies common sense at every turn. So squares are sometimes round, so to speak: systems can be in many different states at the same time, including ones which are mutually exclusive! Crazy, I know, but true… That is the heart of why the quantum universe is so weird. So, yes, it would be easier to think about the laws of physics as always having existed, but "easy" does not always coincide with "true." Once again, my mantra: The Universe is the way it is, whether we like it or not.
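
Schematically, in the standard notation of quantum mechanics (an illustration added for concreteness, not Krauss's own formulation), a system can occupy two mutually exclusive states at once:

|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

where |0\rangle and |1\rangle are the exclusive alternatives, and |\alpha|^2 and |\beta|^2 give the probabilities of finding each upon measurement.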



Now to hit the second part of your question… do we have any reason to suppose the laws themselves came into existence along with our universe?  Yes… current ideas coming from particle physics allow a number of possibilities for multiple universes, in each of which some of the laws of physics, at least, would be unique to that universe.  Now, do we have any models where all the laws (including even, say, quantum mechanics?) came into being along with the universe?  No.  But we know so little about the possibilities that this certainly remains one of them. 



But even more germane to your question, perhaps: do we have any physical reason to believe that such nothing was ever the case? Absolutely, because we are talking about our universe, and nothing precludes our universe having arisen from precisely nothing, embedded in a perhaps infinite space, or an infinite collection of spaces, or spaces-to-be, some of which existed before ours came into being, and some of which are only now coming into, or going out of, existence. In this sense the multiverse, as it has become known, could be eternal, which certainly addresses one nagging aspect of the issue of First Cause.





I want to keep following this line, because it seems to me that we rarely do it—and I think many people will be interested to learn how a physicist like yourself views the foundations of science. As you know, in every branch of science apart from physics we stand upon an inherited set of concepts and laws that explain the whole enterprise. In neuroscience, for instance, we inherit the principles of chemistry and physics, and these explain everything from the behavior of neurons to the operation of our imaging tools. As one moves "up" in science, the problems become more complex (and for this reason the science inevitably gets "softer"), and we find very little reason to contemplate the epistemological underpinnings of science itself. So I'd like you to briefly tell us how you and your colleagues view the fact that certain descriptions of reality might be true, and testable, but impossible to understand. I had thought, for instance, that most physicists were unsatisfied with the strangeness of QM and still held out hope that a more fundamental theory would put things right, yielding a picture of reality that we could truly grasp, rather than merely accede to. Is that not true?



Another deep and difficult question, Sam! A full answer would probably take more room than we have here, and I have tried to address this issue to some extent both in A Universe from Nothing and in my earlier books Fear of Physics and Hiding in the Mirror. First of all, let me address the issue of "understanding." There are aspects of the universe, such as the fact that three-dimensional space can be curved, which cannot be "understood" in an intuitive sense because we are three-dimensional beings. Just like the two-dimensional beings in the famous book Flatland, who had no idea how to truly picture a sphere, we cannot visualize a three-dimensional closed universe, for example. This does not stop us, however, from developing mathematics that completely describes such a universe. So our mathematics can model such a universe and allow us to make predictions we can test, and therefore provide an "explanation" of the universe that is comprehensible, even if not intuitively understandable.



But there is something even more profound about the nature of "scientific truth" that has arisen in physics, which I don't think is generally appreciated. It is the simple fact that none of our theories are "true" in the sense of adequately describing nature on all scales. All of our physical theories, as we now understand them, have limited domains of validity, which we can actually quantify in an accurate way. Even Quantum Electrodynamics, the best-tested theory in nature, which allows us to predict the energy levels of atoms to better than 1 part in a billion, gets subsumed in a more general theory, the Electroweak theory, when it is applied to the interactions of quarks and electrons on scales 100 times smaller than the size of protons. Now, as Richard Feynman emphasized, we have no idea whether this process will continue, whether we will keep peeling back the layers of reality like an onion without end, or whether we will eventually arrive at a truly fundamental theory that allows us to extrapolate our understanding to all scales. As he pointed out, it doesn't really matter, because what we scientists want to do is learn how the universe works, and at each stage we learn something new. We may hope the universe has some fundamental explanation, but, as I keep emphasizing, the universe is the way it is, whether we like it or not, and our job is to be brave enough to keep trying to understand it better, and to accept the reality that nature imposes upon us.



It is true that some physicists find the strangeness of Quantum Mechanics unsatisfying and suspect that it might be embedded in a more fundamental theory that seems less crazy. But hope and reality are not the same thing. Similarly, it may be intellectually unsatisfying to imagine that time began with our universe (so that asking what came before is not a sensible question), or to imagine an eternal multiverse which itself was never created, or to accept that we may never be able to empirically address whether the laws of nature arose spontaneously along with the universe. But we have to keep plugging away regardless, motivated by the remarkable fact that nature has surprises in store for us that we never would have imagined!



Finally, it is the "how" question that is really most important, as I emphasize in the new book. Whenever we ask "why?" we generally mean "how?", because "why" implies a sense of purpose that we have no reason to believe actually exists. When we ask "Why are there 8 planets orbiting the Sun?" we really mean "How are there 8 planets?"—namely, how did the evolution of the solar system allow the formation and stable evolution of 8 large bodies orbiting the Sun? And thus, as I also emphasize, while we may never be able to discern whether there is some underlying universal purpose to the universe (there is absolutely no scientific evidence of such purpose at this point), what is really important to understanding ourselves and our place in the universe is not parsing vague philosophical questions about something and nothing, but trying to understand, operationally, how our universe evolved and what the future might bring. Progress in physics in the past century has taken us to the threshold of addressing questions we might never have thought were approachable within the domain of science. We may never fully resolve them, but the very fact that we can plausibly address them is worth celebrating. That is the purpose of my book. And it is this intellectual quest that I find so very exciting, and which I want to share more broadly, because it represents to me the very best of what it means to be human.






 


December 29, 2011

FREE WILL

[Free Will book cover]





In this elegant and provocative book, Sam Harris demonstrates—with great intellectual ferocity and panache—that free will is an inherently flawed and incoherent concept, even in subjective terms. If he is right, the book will radically change the way we view ourselves as human beings.



V.S. Ramachandran, Director of the Center for Brain and Cognition, UCSD, and author of The Tell-Tale Brain.




Brilliant and witty—and never less than incisive—Free Will shows that Sam Harris can say more in 13,000 words than most people do in 100,000.



Oliver Sacks




Free will is an illusion so convincing that people simply refuse to believe that we don't have it. In Free Will, Sam Harris combines neuroscience and psychology to lay this illusion to rest at last. Like all of Harris's books, this one will not only unsettle you but make you think deeply. Read it: you have no choice.



Jerry A. Coyne, Professor of Ecology and Evolution at The University of Chicago, and author of Why Evolution Is True.




Many say that believing that there is no free will is impossible—or, if possible, will cause nihilism and despair. In this feisty and personal essay, Harris offers himself as an example of a heart made less self-absorbed, and more morally sensitive and creative, because this particular wicked witch is dead.



Owen Flanagan, Professor of Philosophy, Duke University, and author of The Really Hard Problem: Meaning in the Material World.




If you believe in free will, or know someone who does, here is the perfect antidote. In this smart, engaging, and extremely readable little book, Sam Harris argues that free will doesn't exist, that we're better off knowing that it doesn't exist, and that—once we think about it in the right way—we can appreciate from our own experience that it doesn't exist. This is a delightful discussion by one of the sharpest scholars around.



Paul Bloom, Professor of Psychology, Yale University, and author of How Pleasure Works.



December 18, 2011

Hitch

[Photo: Christopher Hitchens]



The moment it was announced that Christopher Hitchens was sick with cancer, eulogies began spilling into print and from the podium. No one wanted to deny the possibility that he would recover, of course, but neither could we let the admiration we felt for him go unexpressed. It is a cliché to say that he was one of a kind and none can fill his shoes—but Hitch was and none can. In his case not even the most effusive tributes ring hollow. There was simply no one like him.



One of the joys of living in a world filled with stupidity and hypocrisy was to see Hitch respond. That pleasure is now denied us. The problems that drew his attention remain—and so does the record of his brilliance, courage, erudition, and good humor in the face of outrage. But his absence will leave an enormous void in the years to come. Hitch lived an extraordinarily large life. (Read his memoir, Hitch-22, and marvel.) It was too short, to be sure—and one can only imagine what another two decades might have brought out of him—but Hitch produced more fine work, read more books, met more interesting people, and won more arguments than most of us could in several centuries.

I first met Hitch at a dinner at the end of April 2007, just before the release of his remarkable book god is not Great. After a long evening, my wife and I left him standing on the sidewalk in front of his hotel. His book tour was just beginning, and he was scheduled to debate on a panel the next morning. It was well after midnight, but it was evident from his demeanor that his clock had a few hours left to run. I had heard the stories about his ability to burn the candle at both ends, but staggering there alongside him in the glare of a street lamp, I made a mental note of what struck me as a fact of nature—tomorrow's panel would be a disaster.



I rolled out of bed the following morning, feeling quite wrecked, to see Hitch holding forth on C-SPAN's Book TV, dressed in the same suit he had been wearing the night before. Needless to say, he was effortlessly lucid and witty—and taking no prisoners. There should be a name for the peculiar cocktail of emotion I then enjoyed: one part astonishment, one part relief, two parts envy; stir. It would not be the last time I drank it in his honor.



Since that first dinner, I have felt immensely lucky to count Hitch as a friend and colleague—and very unlucky indeed not to have met him sooner. Before he became ill, I had expected to have many more years in which to take his company for granted. But our last meeting was in February of this year, in Los Angeles, where we shared the stage with two rabbis. His illness was grave enough at that point to make the subject of our debate—Is there an afterlife?—seem a touch morbid. It also made traveling difficult for him. I was amazed that he had made the trip at all.



The evening before the event, we met for dinner, and I was aware that it might be our last meal together. I was also startled to realize that it was our first meal alone. I remember thinking what a shame it was—for me—that our lives had not better coincided. I had much to learn from him.



I have been privileged to witness the gratitude that so many people feel for Hitch's life and work—for, wherever I speak, I meet his fans. On my last book tour, those who attended my lectures could not contain their delight at the mere mention of his name—and many of them came up to get their books signed primarily to request that I pass along their best wishes to him.  It was wonderful to see how much Hitch was loved and admired—and to be able to share this with him before the end.



I will miss you, brother.





December 7, 2011

COMING MARCH 6th

[Free Will book cover]


(Cover by David Drummond)



 


COMING FEBRUARY 28th




(Cover by David Drummond)






November 29, 2011

Thinking about Thinking




Daniel Kahneman is an extraordinarily interesting thinker. As a psychologist, he received the 2002 Nobel Prize in Economics for his work with Amos Tversky on decision-making. Here is what Steven Pinker, my previous interview subject, recently wrote about him:



Daniel Kahneman is among the most influential psychologists in history and certainly the most important psychologist alive today. He has a gift for uncovering remarkable features of the human mind, many of which have become textbook classics and part of the conventional wisdom. His work has reshaped social psychology, cognitive science, the study of reason and of happiness, and behavioral economics, a field that he and his collaborator Amos Tversky helped to launch. The appearance of Thinking, Fast and Slow is a major event.




Kahneman was kind enough to take time out of a very busy book tour to answer a few of my questions.

***



Much of your work focuses on the limitations of human intuition. Do you have any advice about when people should be especially hesitant to trust their intuitions?



When the stakes are high. We have no reason to expect the quality of intuition to improve with the importance of the problem. Perhaps the contrary: high-stakes problems are likely to involve powerful emotions and strong impulses to action. If there is no time to reflect, then intuitively guided action may be better than freezing or paralysis, especially for the experienced decision maker. If there is time to reflect, slowing down is likely to be a good idea. The effort invested in "getting it right" should be commensurate with the importance of the decision.



Are there times when reasoning is suspect and we are wise to rely on our snap judgments?



As Gary Klein has emphasized (Sources of Power is one of my favorite books), true experts—those who have had sufficient practice to detect the regularities of their environment—may do better when they follow their intuition than when they engage in complex analysis. Tim Wilson and his collaborators have demonstrated that people who choose between two decorative objects do better by following their impulse than by protracted analysis of pros and cons. The critical test in that experiment is how much they will like the chosen object after living with it for a while. Affective forecasting based on current feelings appears to be more accurate than systematic analysis that eliminates those feelings.



What is the difference between the "experiencing self" and the "remembering self"?



The experiencing self lives in the moment; it is the one that answers the question "does it hurt?"  or "what were you thinking about just now?"  The remembering self is the one that answers questions about the overall evaluation of episodes or periods of one's life, such as a stay in the hospital or the years since one left college. It involves both retrieval and temporal integration of diverse experiences. In the context of studies of subjective well-being, the happiness of the experiencing self is assessed by integrating momentary happiness over time (in the 19th century, the economist Francis Edgeworth spoke of "the area under the curve"). Experienced happiness refers to your feelings, to how happy you are as you live your life. In contrast, the satisfaction of the remembering self refers to your feelings when you think about your life.
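
Schematically, the "area under the curve" amounts to treating the experienced happiness of an episode running from t_0 to t_1 as a time integral of momentary happiness h(t) (a stylized formulation, with the notation added here):

H_{\text{experienced}} = \int_{t_0}^{t_1} h(t)\, dt

The remembering self, by contrast, delivers a single retrospective judgment that need not equal this integral.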



Isn't the remembering self just the experiencing self in one of its modes?



Of course. Thinking about your life is an experience that you have. But it is useful to distinguish these relatively rare moments from the routine emotional quality of your life. The distinction is especially important in evaluating an individual's well-being, because the determinants of experienced happiness and life satisfaction are substantially different.



How should the split between these two points of view affect our understanding of the good life?



Some conceptions of the good life take the Aristotelian view to the extreme of denying altogether the relevance of subjective well-being. For those who do not want to go that far, the distinction between experienced happiness and life satisfaction raises serious problems. In particular, there appears to be little hope for any unitary concept of subjective well-being. I used to hold a unitary view, in which I proposed that only experienced happiness matters, and that life satisfaction is a fallible estimate of true happiness. I eventually concluded that this view is not tenable, for one simple reason: people seem to be much more concerned with the satisfaction of their goals than with the achievement of experienced happiness. A definition of subjective well-being that ignores people's goals is not tenable. On the other hand, an exclusive focus on satisfaction is not tenable either. If two people are equally satisfied (or unsatisfied) with their lives but one of them is almost always smiling happily and the other is mostly miserable, will we ignore that in assessing their well-being?



Are there ways to get the two selves to converge? If so, would this be normative?



There is a road to convergence, but few will want to take it:  we could suggest to people that they should adopt experienced happiness as their main goal, and be satisfied with their lives to the extent that this goal is achieved. This idea implies the abandonment of other goals and values, which is surely unappealing.



The other possibility is a redefinition of momentary happiness in terms of a more global evaluation. That is the route we took in an article I published in 1997 with Peter Wakker and Rakesh Sarin. We suggested that the first step is to evaluate lives (or segments of life) as profiles of experience, and to rank the lives. In the next step one would rescale momentary utility so as to transform the profiles to achieve the same ranking. So, for example, if the frequency and intensity of aesthetic experiences are important to our global evaluation of lives, then the value (or utility) of moments of aesthetic joy would be raised accordingly. The problem, of course, is that we no longer accept the subject's own judgment of how happy they are at any moment. So this approach is difficult as well.
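
A stylized rendering of that two-step proposal (the notation is added here, not taken from the 1997 paper): given a ranking of lives by global evaluation, one seeks a rescaling \phi of momentary utility u(t) such that total rescaled utility reproduces the ranking:

\text{life } A \succeq \text{life } B \iff \int \phi(u_A(t))\, dt \geq \int \phi(u_B(t))\, dt

Moments that matter more to the global evaluation than their momentary intensity suggests (aesthetic joy, in the example above) would then receive extra weight under \phi.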



To what extent do you think true self-deception (as opposed to simple bias) exists?



I don't know how you expect to distinguish true self-deception from simple bias. Suppose you like someone very much. Then by a familiar halo effect you will also be prone to believe many good things about that person—you will be biased in their favor. Most of us like ourselves very much, and that suffices to explain self-assessments that are biased in a particular direction. You will believe these biased assessments regardless of whether they are about you or about someone else. We resist evidence that threatens our positive image of people we love. And perhaps we love ourselves more intensely than we love most (or all) others. When does this become self-deception?






 



November 26, 2011

The Unbelievers

By Emily Brennan



Go to article




