Making Sense Quotes
Making Sense
by
Sam Harris
2,206 ratings, 4.13 average rating, 209 reviews
“What we call reality is just when we all agree about our hallucinations.”
― Making Sense
“The sea squirt—a very simple marine creature—swims about during its juvenile phase looking for a place to settle, and once it settles and starts filter feeding, it digests its own brain, because it no longer has any need for perceptual or motor competence. This is often used as an unkind analogy for getting tenure in academia.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“If your denial of death is sufficiently explicit and persuasive that you believe death isn't real, then what you deny isn't death but the significance of life.”
― Making Sense
“I take seriously the idea that we’re in a simulation. I have no idea whether or not it’s true, but if it is, if we are in a simulation, it’s not that nothing is real, not that there are no tables and chairs and trees. Rather, it’s that they exist in a different form from what we first thought. There’s a level of computation underneath what we take to be physical reality.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“There’s something it’s like for me to see the green leaves outside my window right now, so that’s a conscious state to me. But there may be some unconscious language-processing going on in my head that doesn’t feel like anything to me, or some motor processes in the cerebellum. Those might be states of me, but they’re not conscious states of me, because there’s nothing it’s like for me to undergo those states.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“Chalmers: It’s awfully hard to define consciousness. But I’d start by saying that it’s the subjective experience of the mind and the world. It’s basically what it feels like, from the first-person point of view, to be thinking and perceiving and judging.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“What makes me a scientist is that I’d much rather have questions I can’t answer than answers I can’t question.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“My friend the musician and playwright Baba Brinkman—whom I worked with on The Rap Guide to Consciousness—put it beautifully: ‘What we call reality is just when we all agree about our hallucinations.’”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“So to make a little quibble of my own, I think the essence of what we want in science is not justified beliefs but good explanations.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“Tegmark: That’s right, and there’s a more elemental example. In a certain sense, your genes have invented you. They built your brain so that you could make copies of your genes. That’s why you like to eat—so you won’t starve to death. And that’s why we fall in love—to make copies of our genes, right? But even though we know this, we still choose to use birth control, which is the opposite of what our genes want.
Some people dismiss the idea that there will ever be anything smarter than humans for mystical reasons—because they think there’s something more than quarks and electrons and information processing going on in us. But if you take the scientific approach, that you really are your quarks, then there’s clearly no law of physics that precludes anything more intelligent than a human. We were constrained by how many quarks you could fit into a skull, and things like that—constraints that computers don’t have. It becomes instead more a question of time. And, as you said, there’s a relentless pressure to make smarter things, because it’s profitable and interesting and useful. The question isn’t if this will happen, but when. And finally, to come back to those ants. Suppose you’re in charge of a huge green-energy project, and just as you’re about to let the water flood the hydroelectric dam you’ve built, someone points out that there’s an anthill right in the middle of the flood zone. Now, you know the ants don’t want to be drowned, right? So you have to make a decision. What are you going to do?
Harris: Well, in that case, too bad for the ants.
Tegmark: Exactly. So we ought to plan ahead. We don’t want to end up like the ants.”
― Making Sense
“I talk a lot about this in the book. Why is it that our universe gets gradually more complex? Once you get into biology, the fundamental reason is that if you’re living in a complex environment, then the smarter you are the more successful you’ll be, because you can exploit regularities in the environment to your advantage. Eventually all the other organisms are motivated, in turn, to get smarter. As organisms get smarter and smarter, they keep creating an ever more complex environment for one another, and they all get smarter.”
― Making Sense
“Harris: Let’s talk about how the AI future might look. It seems to me there are three paths it could take. First, we could remain fundamentally in charge: that is, we could solve the value-alignment problem, or we could successfully contain this god in a box. Second, we could merge with the new technology in some way—this is the cyborg option. Or third, we could be totally usurped by our robot overlords. It strikes me that the second outcome, the cyborg option, is inherently unstable. This is something I’ve talked to Garry Kasparov about. He’s a big fan of the cyborg phenomenon in chess. The day came when the best computer in the world was better than the best human—that is, Garry. But now the best chess player in the world is neither a computer nor a human, but a human/computer team called a cyborg, and Garry seemed to think that that would continue for quite some time.
Tegmark: It won’t.
Harris: It seems rather obvious that it won’t. And once it doesn’t, that option will be canceled just as emphatically as human dominance in chess has been canceled. And it seems to me that will be true for every such merger. As the machines get better, keeping the ape in the loop will just be adding noise to the system.”
― Making Sense
“DEUTSCH: Yes, but you have to distinguish between hardware and software when you’re thinking about how this cognitive closure manifests itself. Like I said, it seems plausible that the hardware limitation is not relevant even for chimpanzees. I imagine that with nanosurgery, one could implant ideas into a chimpanzee’s brain that would make it able to create further knowledge just as humans can. I’m questioning the assumption that if everybody with an IQ of over a hundred died, then in the next generation nobody would have an IQ of over a hundred. I think they well might. It depends on culture.
HARRIS: Of course. This wasn’t meant to be a plausible biological or cultural assumption. I’m just asking you to imagine a world in which we had seven billion human beings, none of whom could begin to understand what Alan Turing was up to.
DEUTSCH: That nightmare scenario is different. It’s something that actually happened—for almost the whole of human existence. Humans had the ability to be creative and to do everything we’re doing. They just didn’t, because their culture was wrong. It wasn’t their fault. Cultural evolution has a nasty tendency to suppress the growth of what we would consider science or anything important that would improve their lives. So yes, that’s possible, and it’s possible that it could happen again. Nothing can prevent it except our working to prevent it.”
― Making Sense
“That is the wrong way to think about perception. Let’s simplify it. The problem is something like the following: The brain is locked inside a bony skull, and let’s assume for the sake of this argument that the perception problem is the problem of figuring out what’s out there in the world giving rise to sensory signals impinging on our sensory surfaces—eyes and ears, and so on. Now, these sensory signals are noisy and ambiguous. They won’t have a one-to-one mapping with things out there in the world, whatever those may be. They don’t come labeled for the brain with convenient tags like “this is vision” or “this is hearing.” So perception has to involve a process of inference, of “best guessing,” in which the brain combines the sensory data with prior expectations or (usually implicit) beliefs about the way the world is, to come up with its best guess about the causes of that sensory data. Within this framework, what we perceive is constituted by those multilevel predictions that try to account for the sensory signals. We perceive what the brain infers caused those signals, not the sensory signals themselves, nor things in the world “in themselves” either. There is no such thing as “direct perception” of the world or of the self.”
― Making Sense
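The "best guessing" described above is, computationally, Bayesian inference: a prior expectation is combined with a noisy, ambiguous sensory signal to infer the most probable hidden cause, and the percept is that inferred cause. A minimal sketch of the idea, assuming a toy one-dimensional Gaussian model (the numbers and variable names are illustrative, not from the book):

```python
# Toy illustration of perception as Bayesian "best guessing": the brain combines
# a prior expectation with a noisy sensory signal to infer the most probable
# hidden cause of that signal; what is "perceived" is the inferred cause.

def posterior_gaussian(prior_mean, prior_var, signal, signal_var):
    """Combine a Gaussian prior and a Gaussian likelihood by precision weighting."""
    prior_precision = 1.0 / prior_var
    signal_precision = 1.0 / signal_var
    post_var = 1.0 / (prior_precision + signal_precision)
    post_mean = post_var * (prior_precision * prior_mean + signal_precision * signal)
    return post_mean, post_var

# Prior: the leaf outside the window is probably green (hue near 120 degrees).
# Signal: a noisy hue reading of 90 degrees, with high variance.
best_guess, uncertainty = posterior_gaussian(prior_mean=120.0, prior_var=25.0,
                                             signal=90.0, signal_var=100.0)
print(f"perceived hue ~ {best_guess:.1f} degrees (variance {uncertainty:.1f})")
# The percept sits between prior and data, weighted by their precisions:
# we perceive the inferred cause, not the raw sensory signal itself.
```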
“The other important aspect of the “interoceptive inference” view is that the purpose of perceiving the body from within has little to do with figuring out what’s there. My brain couldn’t care less that my internal organs are objects with particular locations within my body. The only thing that’s important about my internal physiology is that it works, that it keeps me alive. The brain cares primarily about control and regulation of the body’s internal state. So perceptual predictions for the body’s interior are of a very different kind: they’re instrumental, they’re control-oriented, they’re not epistemic, they’re not to do with “finding out.” For me, this is suggestive of why our experiences of being a body have this nonobject-based phenomenological character, compared to our experiences of the outside world. More speculatively, there is the idea that all forms of perception, conscious and unconscious, derive from this fundamental imperative for physiological regulation. If we understand that the original (evolutionary) purpose of predictive perception was to control and regulate the internal state of the body, and that all the other kinds of perceptual prediction are built on that evolutionary imperative, then ultimately the way we perceive the outside world is predicated on these mechanisms that have their primary objective in the regulation of an internal bodily state.
This idea is really important for me, because it gets away from pretheoretical associations of consciousness and perception with cognition, with language, and maybe also with social interaction—all “higher order” properties of cognition. Instead, it grounds consciousness and perception much more strongly in the basic mechanisms of life. It might not just be that life provides a nice analogy with consciousness in terms of hard problems and mysteries, but that there are actually deep obligate connections between mechanisms of life and the way we perceive, consciously and unconsciously, ourselves and the world.”
― Making Sense
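The control-oriented reading above can be pictured as a simple homeostatic loop, in which the prediction acts as a set point and the mismatch is resolved by acting on the body rather than by revising a model of what is out there. A minimal sketch under that reading (the set point, gain, and variable names are illustrative assumptions, not from the book):

```python
# Toy homeostatic loop: interoceptive "perception" in the service of regulation.
# The prediction acts as a set point; the prediction error is reduced by acting
# on the body, not by building a descriptive model of the body's interior.

SET_POINT = 37.0   # predicted/desired core temperature in degrees C (illustrative)
GAIN = 0.5         # how strongly the prediction error drives corrective action

temperature = 35.0  # current internal state, perturbed below the set point
for step in range(10):
    error = SET_POINT - temperature   # interoceptive prediction error
    temperature += GAIN * error       # corrective action (e.g., shivering) shrinks the error
    print(f"step {step}: temperature = {temperature:.2f}")
# The loop only "cares" that the variable stays within bounds compatible with survival,
# not about representing internal organs as objects with locations.
```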
“Just as we perceive the outside world on the basis of sensory signals met with a top-down flow of perceptual expectations and predictions, the same applies to perceptions of the internal state of the body. The brain has to know what the internal state of the body is like. It doesn’t have direct access to it, even though both the brain and body happen to be wrapped within a single layer of skin. As with perception of the outside world, all the brain gets from the inside of the body are noisy, ambiguous electrical signals. Therefore it has to bring to bear predictions and expectations in order to make sense of the barrage of sensory signals coming from inside the body, in just the same way as for vision and all the other “classic” senses. And this is what’s collectively called interoception—perception of the body from within. The same computational principles apply. In this view, we can think of emotional conscious experiences, feeling states, in the framework of “interoceptive inference.” So emotions become predictions—“best guesses”—about the hidden causes of interoceptive signals, in the same way that experiences of the outside world are constituted by predictions of the causes of sensory signals.
This gives a nice computational and mechanistic gloss to old theories of emotion that originated with William James and Carl Lange—that emotion has to do with perception of physiological change in the body and with the subsequent cognitive “appraisal” of these changes. The predictive-processing view adds to these theories by saying that emotional experience is the joint content of predictions about the causes of interoceptive signals at all levels of abstraction.”
― Making Sense
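The claim that an emotion is a best guess about the hidden cause of an interoceptive signal can likewise be illustrated with a single discrete Bayes step, in which the same bodily signal is read differently under different contextual priors. The candidate causes, numbers, and names below are illustrative assumptions, not from the book:

```python
# Toy illustration: an emotion as an inferred cause of an interoceptive signal
# (a racing heart), where context supplies the prior over candidate causes.

def infer_cause(prior, likelihood):
    """Discrete Bayes: P(cause | signal) is proportional to P(signal | cause) * P(cause)."""
    unnormalized = {cause: prior[cause] * likelihood[cause] for cause in prior}
    total = sum(unnormalized.values())
    return {cause: p / total for cause, p in unnormalized.items()}

# How likely a racing heart is under each candidate cause.
likelihood = {"fear": 0.9, "excitement": 0.8, "calm": 0.05}

# Contextual priors: walking down a dark alley vs. waiting to go on stage.
dark_alley = {"fear": 0.6, "excitement": 0.1, "calm": 0.3}
on_stage   = {"fear": 0.1, "excitement": 0.6, "calm": 0.3}

print(infer_cause(dark_alley, likelihood))  # "fear" dominates
print(infer_cause(on_stage, likelihood))    # "excitement" dominates
# The same interoceptive signal yields a different inferred cause, and so
# (on this view) a different felt emotion, depending on the prior.
```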
“HARRIS: It’s worth emphasizing the connection between perception and action. It’s one thing to talk about it in the context of catching a cricket ball, but when you talk about the evolutionary logic of having developed perceptual capacities in the first place, the link to action becomes even more explicit. We haven’t evolved to perceive the world as it is for some abstract epistemological reason. We’ve evolved to perceive what’s biologically useful. And what’s biologically useful is always connected—at least when we’re talking about the outside world—to actions. If you can’t move, if you can’t act in any way, there would have been very little reason to evolve a capacity for sight, for instance.
SETH: Absolutely. The sea squirt—a very simple marine creature—swims about during its juvenile phase looking for a place to settle, and once it settles and starts filter feeding, it digests its own brain, because it no longer has any need for perceptual or motor competence. This is often used as an unkind analogy for getting tenure in academia. But you’re absolutely right: perception is not about figuring out what’s really there. We perceive the world as it’s useful for us to do so.
This is particularly important when we think about perception of the internal state of the body, which we mentioned earlier. Brains are not for perceiving the world as it is. They didn’t evolve for doing philosophy or complex language, they evolved to guide action. But even more fundamentally, brains evolved to keep themselves and their bodies alive. The most basic cycle of perception and action doesn’t involve the outside world or the exterior surfaces of the body at all. It’s all about regulating the internal physiology of the body and keeping it within bounds compatible with survival. This gives us a clue about why experiences of mood and emotion, and the basic experiences of selfhood, have a fundamentally nonobject-like character.”
― Making Sense
“HARRIS: But if substrate independence is the case, and you could have the appropriately organized system made of other material, or even simulated—it can just be on the hard drive of some supercomputer—then you could imagine, even if you needed some life course of experience in order to tune up all the relevant variables, there could be some version of doing just that, across millions of simulated experiments and simulated worlds, and you would wind up with conscious minds in those contexts. Are you skeptical of that possibility?
SETH: Yes, I’m skeptical of that, because I think there’s a lot of clear air between saying the physical state of a system is what matters, and that simulation is sufficient. First, it’s not clear to me what “substrate independence” really means. It seems to turn on an overzealous application of the hardware/software distinction—that the mind and consciousness is just a matter of getting the functional relations right and it doesn’t matter what hardware or wetware you run it on. But it’s unclear whether I can really partition how a biological system like the brain works according to these categories. Where does the wetware stop and the mindware start, given that the dynamics of the brain are continually reshaping the structure and the structure is continually reshaping the dynamics? It becomes a bit difficult to define what the substrate really is. Of course, if you’re willing to say, “Well, we’re not just capturing input-output relations, we’re going to make an exact physical duplicate,” then that’s fine. That’s just a statement about materialism. But I don’t find it intuitive to go from making an exact physical replicate, all the way up to simulations, and therefore simulations of lots of possible life histories, and so on. It’s really not clear to me that simulation will ever be sufficient to instantiate phenomenal properties.”
― Making Sense
“Yes. This gradual walk toward not taking human suffering seriously anymore is something we experience in context-dependent ways already, and when you think of the global implications it’s scary to consider how malleable our experience might be. I’m thinking of a few local cases, like how surgeons and ER doctors need to inure themselves to the constant evidence of other people’s suffering, because otherwise they can’t get the job done. And every parent knows what it’s like to understand that the suffering of one’s three-year-old who bursts into tears over a lost toy is not something that needs to concern you as much as an adult bursting into tears over something else, and yet that suffering is no less vivid for the child. You can imagine that kind of immunity to the evident pain of other, seemingly conscious systems, growing over time. Life could seem more and more like a video game where everyone else becomes a prop.”
― Making Sense
“This is a constraint of evolution. We weren’t built to acquire new cognitive abilities de novo. The only materials used for modern human cognition are these ancient structures that have to be commandeered to new purposes. Everything we do is built on the back of these apeish structures. Here we’re talking about the insular cortex, which receives the inputs from the viscera. You find rotting food disgusting—that’s the tale told by the insula. And the only way to build a mind that can find abstract ideas unacceptable is to repurpose, or extend the purpose, of those brain areas.”
― Making Sense
“Well, let’s make it simpler. Let’s say we found a culture on an island somewhere that was removing the eyeballs of every third child. Would you then agree that we had found a culture that was not perfectly maximizing human well-being?”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“Either consciousness is epiphenomenal or it’s outside a physical system but somehow playing a role in physics. That’s a more traditional, dualist possibility. Or there’s a third possibility: Consciousness is somehow built in at the fundamental level of physics.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“This is one of the things that concerns me about AI. It seems increasingly likely that we will build machines that will seem conscious, and the effect could be so convincing that we might lose sight of the hard problem. It could cease to seem philosophically interesting, or even ethically appropriate, to wonder whether there is something it is like to be one of these robots. And yet we still won’t know whether they are actually conscious unless we have understood how consciousness arises in the first place—which is to say, unless we have solved the hard problem.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“Believe in truth. To abandon facts is to abandon freedom. If nothing is true, then no one can criticize power, because there is no basis upon which to do so. If nothing is true, then all is spectacle. The biggest wallet pays for the most blinding lights.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“We're not really free unless we can put matters in our own words. And if we can't put them in our own words, we can't talk to other people, because if we speak the words of the internet or the TV news, other people will recognize that, and they are not really in our company, but somewhere else.”
― Making Sense
“The fascist says, "It's not what you think, or what you think you know, that is important. The only truth is whether or not you feel subjectively, spiritually, part of a larger national community. And if you do, wonderful! And if you don't, then you're an enemy."”
― Making Sense
“Every person is a puppet who didn’t pick his own strings, and those strings reach back to the big bang.”
― Making Sense
“there is no divine purpose, only a plurality of human purposes.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
“if you look at how our universe got this way, and how this conversation we’re having came about, it’s because about 10^78 quarks and electrons started out in a particular way early on, after inflation. Which led to the formation of our solar system and our planet, and our parents met, and so on, and we met, and then this conversation happened, right? If you’d started the quarks and electrons out a little differently, however, things would have unfolded differently. You can actually count up how many different ways you can arrange the quarks and electrons in our universe. It turns out it’s only about a googolplex different ways. A googolplex is one with a googol zeros, and a googol is one with a hundred zeros. So it’s a huge number, but it’s finite.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
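For reference, the quantities mentioned in the passage above, written out as a short math note (a restatement of the quote, not text from the book):

```latex
\[
\text{googol} = 10^{100}, \qquad
\text{googolplex} = 10^{\text{googol}} = 10^{10^{100}}
\]
% Roughly $10^{78}$ quarks and electrons, which can be arranged in only about
% a googolplex distinct ways: an enormous but finite number.
```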
“Kahneman: I actually had the experience that you described. I gave a talk on that topic, and somebody got up during the Q&A and told a story to the audience. The story was, “The other week, I was listening to a glorious symphony [that was certainly the period when people would still listen to records], and just as it was going to end, there was that horrible screech, and it ruined the whole experience.” I said, “It didn’t ruin the experience. You’d had the experience—you had twenty minutes of glorious music. It ruined the memory of the experience.” People cannot draw that distinction. For him, it ruined the experience because the memory is what he got to keep.”
― Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
