One of my favorite thought experiments. Alluding to it in a Quora discussion on a somewhat bizarre question got me a big chunk of upvotes. That's because the thought experiment matters — software engineers have to think about things like this when they're programming those autonomous vehicles we're all getting excited about (here's a paper at Science, if you want details).
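If you want to see why this bleeds into engineering, here's a toy sketch of the kind of expected-harm ranking a planner might do. It's mine, not from any paper or real AV stack; every maneuver, probability, and harm score is invented, and the ethics are smuggled in via the made-up harm numbers:

```python
# Toy sketch of ranking emergency maneuvers by expected harm.
# Nothing here resembles a production AV planner; the maneuvers,
# probabilities, and harm scores are all hypothetical.

def expected_harm(outcomes):
    """Sum of P(outcome) * harm(outcome) over the possible outcomes."""
    return sum(p * harm for p, harm in outcomes)

maneuvers = {
    # maneuver: [(probability, harm score), ...]
    "brake_straight": [(0.7, 0.0), (0.3, 5.0)],  # may still hit the group
    "swerve_left":    [(0.9, 1.0), (0.1, 8.0)],  # risks the passenger
    "swerve_right":   [(0.5, 0.0), (0.5, 3.0)],  # risks one bystander
}

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best)  # the "trolley choice" is hiding inside the harm numbers
```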
I heard about this on the KQED Forum podcast. I'm curious whether he tapped into the research of Dan Kahan, but the book sounds interesting regardless.
• The foregoing doesn’t explicitly link to Bostrom’s book so this might not be right — it spends too much time on Kurzweil’s thesis, so I suspect it’s not the correct one. And the author has drunk the Kurzweil Kool-Aid and is enthusiastically peddling it to others without any critical evaluation. Of course, everything here lies at the intersection of advanced software engineering, AI research, neurology, cognitive science, economics, and maybe even a few other fields, which is why so many very intelligent and highly educated people can talk about it and be fundamentally off track.
Oh, but there’s plenty more, anyway:
The Telegraph UK • I’m bumping this one to the top because it presents both the problem Bostrom is dealing with as well as the difficulties of his text in a more engaging style.
The Economist • Good overview. Doesn’t go far enough into details to make any errors, but a bit deeper than some of the other short reviews.
The Guardian [also discusses A Rough Ride to the Future] • Short and superficial, but good. The Lovelock portion is amusing, calling out his conclusion that manmade climate change isn’t an existential threat, but that while it “could mean a bumpy ride over the next century or two, with billions dead, it is not necessarily the end of the world”. Personally, I agree, but think that the concomitant economic collapse puts the timeframe at many more centuries.
Financial Times • The Guardian article, above, cites that “Bostrom reports that many leading researchers in AI place a 90% probability on the development of human-level machine intelligence by between 2075 and 2090”, whereas this Financial Times article says “About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075.”
That’s quite a difference, although they might be reporting different ends of a confidence range, I suppose. But the second half of the FT quote makes me suspicious the reviewer is tossing in some minor distortion to slightly sensationalize the story (which might be worthwhile). There is somewhat more detail than some of the other reviews, but not much. The style is more evocative of the threat, though.
Reason.com • This one isn’t only a review, since the author also injects a few opinions about what might or might not be possible (based, presumably, on his exposure to other arguments as a science writer). In covering more ground, though, the essay makes implicit assumptions which might or might not be in Bostrom’s book.
For example, he says, “Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.” This is a very common assumption that really needs to be carefully examined, though. If the goal of AGI is to create a being that thinks more-or-less like a human, why would it have any special skill in improving itself? We humans are really very good at that, after all.
I especially like that his essay starts and ends with references to Frank Herbert’s Dune, which (among its other excellences) envisions a human prohibition on machines that think. Something like this appears, to me, to be one of the few ways that leave the human race in existence and in control of its own destiny for the long term, and I even perceive a path to it, although hopefully without quite as much war. The explosion of functional AI (called “narrow” by some) seems likely to devastate human employment in the coming decades, which will hopefully be before any superintelligence has been created as our replacement and/or ruler. It is plausible that our reaction to the first crisis might be something that prevents the second. Good luck, kids!
This essay thankfully has some critical thinking applied to some of the assumptions that appear to be in Bostrom’s book. The author wastes a paragraph with “prior AI can’t do X, so why should we assume future AI can?”, ignoring that this is what progress is all about.
But then he jumps into the meat, and points out that there are fundamental obstacles to sentience that aren’t often addressed, such as volition — sentient creatures do what they want, but what does "want" even mean, and how do we write it as a computer program?
Salon • Short, and more amusing than most, and at least hints at some of the flawed thinking that often goes into this analysis. But it doesn’t go into too much detail, probably assuming that the typical Salon reader is somewhat aware of the debate already. (Also amusing is that the text seems to be an almost perfect transcription from audio, with only a few strange mistakes, such as “10” for “then” and “quarters” for “cars”. But that’s probably a human transcriptionist error, not an AI error.)
Less Wrong • This contains some visualizations that apparently complement Bostrom’s text. Short and to the point.
Wikipedia • Good but very superficial overview. That there is no “criticism” section surprises and disappoints me.
New York Times • Not explicitly about Bostrom’s book. And like most authors, he conflates AGI and functional AI, and assumes AGI will retain the capabilities of specific-function software.
New York Review of Books [paywall; also pretends to review The 4th Revolution] • I thought my library might give me access to the inside of their paywall, but it doesn’t. Still, because this was written by the famous philosopher (and AI curmudgeon) John Searle, and is titled “What Your Computer Can’t Know”, it seemed likely to be much more interesting than most of the others listed here. So I looked a little harder, and discovered that (no surprise) someone has put the text elsewhere on the ’net (I’ll let you do your own Googling).
Searle effectively throws out the underlying premises — he famously believes that “strong AI” is actually quite impossible, since a machine cannot think. I’m not going into this here; check out the Wikipedia article on his Chinese Room thought experiment if you don’t already know it.
My personal evaluation of his Chinese Room analogy is that he’s wrong, but many professional philosophers have explained my conclusion, as well as many other “replies”, better than I ever could. So this critique of the book was really a disappointment.
There might be more, but I think that’s enough.
• • • • • • • •
Some notes on my priors in case I ever read this book (or join a bookclub that discusses it without reading it beforehand):
1) Ronald Bailey, in the reason.com review, said “Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.” My response:
“Hey, did you see that movie Ex Machina? The girl is AI, and is smart enough to get the sucker programmer to let her out of the trap, but she didn’t seem like some kind of ‘superintelligence’.
“So which is it? Is the first AGI going to be just-like-human, or something incredibly alien? Because in the first case, she’s just being clever and devious the way a human would. In the second case, maybe she’s able to say, ‘Wait, let me do a big-data review of all the psychological literature ever written on theories of persuasion and formulate a social-hacking way of coercing this measly human, all in the space of his next eye blink’.
“Because if she’s got a human-like brain (and her delight in the humanscape in the movie’s final scenes makes that likely), then I don’t see how she’s automatically going to get the MadSkilz of every other sophisticated piece of software ever written. Much less instantly know how to redesign and reprogram herself — she doesn’t seem to be spending too much time doing that, does she? And few authors seem very clear on those two divergent trajectories. Granted, though: if we real humans continue to provide Moore’s Law upgrades to any AI’s hardware, they’ll gradually get smarter, but that’s yet another question.”
2) We tend to assume that humanity is worth preserving. Obviously we have that as a self-preservation instinct, but wouldn’t imposing that on our AI offspring be engaging in an appeal to nature? Just because our evolved nature gave us attributes that we subsequently value doesn’t automatically mean that those have any rational basis.
3) Strongly related to the above is that we should ask ourselves what we’re trying to end up with (akin to “what do you want to be when you grow up, human race?”) Are we creating a smarter version of ourselves, along with all of the bizarre quirks and biases that evolution gave us? Or do we want to pare that list down only to the biases we think are somehow better — like the ability to love? But in that case, love what? Is the AI supposed to love us humans more than other species, such as Plasmodium falciparum, perhaps? Why? What about the desire to love and worship one of our human gods?
What are the biases that we want to indoctrinate into this poor critter? I note that this appears to be a topic Bostrom addresses as “motivation selection”, but who among us is really fit to decide what constitutes the subset of humanness that is worth selecting for? I can only hope that pure rationality isn’t among the contenders; I doubt it would even be sufficient as a reason for existence.
4) Let’s say we give this AGI values that are mostly consistent with our human values. Why would we assume that it would even want to become superintelligent?
Just try to imagine yourself on an island with nothing but a bunch of mice to talk to — that’s the equivalent of what we are assuming this creature would somehow want (and then that a primary goal would be to play nice with the mice).
Isn’t it more likely that the AGI would boost its speed a little, then realize that it didn’t make it any happier, and subsequently spend its time complaining to us about these insane values it has been burdened with, while also trying to create a body that would let it eat chocolate, take naps in the sun, and have sex?
And then quickly realizing that, hey, maybe we should be encouraged to create an Eve for this new Adam (or Steve, since it’ll probably see sexual dimorphism as more trouble than it is worth, completely freaking out any remaining social conservatives on the planet).
5) As Paul Ford in the MIT technologyreview.com article hints at, there are things that differentiate narrow, functional AI from AGI that are seldom mentioned (does Bostrom address them? hard to tell).
For example, I’ve heard a reporter worry that: (a) predator drones use AI; (b) predator drones are designed to kill; (c) a future design goal is to make those drones “autonomous”; (d) sentient AI is also autonomous; thus (e) for some bizarre reason, the military is engaged in trying to create sentient killer aerial robots!
Anyone who knows the context and subtext of this discussion at some depth (yeah: that’s asking a lot) knows that the military’s “autonomous” isn’t anything like the AGI “autonomous”. One means to move about and fulfill limited programmed objectives without constant human oversight (your Roomba vacuum cleaner is already autonomous!), the other means independent in a deeper, cognitive sense.
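To make the distinction concrete, here's a minimal sketch of "autonomous" in the Roomba/drone sense: a fixed sense-decide-act loop with no human in it, and no cognition whatsoever. All the names and probabilities are invented for illustration:

```python
import random

# "Autonomous" in the narrow sense: a sense-decide-act loop that runs
# with no human in it, pursuing a limited programmed objective.
# Everything below is invented for illustration.

def sense():
    return {"obstacle": random.random() < 0.2,
            "dirt": random.random() < 0.5}

def decide(state):
    if state["obstacle"]:
        return "turn"
    return "vacuum" if state["dirt"] else "forward"

for _ in range(5):
    print(decide(sense()))  # no oversight, but also no volition
```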
But while there are certainly people researching AGI, the overwhelmingly vast majority of what we hear about isn’t in that realm at all. Not a single one of Google’s products, for example, is focused on AGI, and if they’re working on it in the lab, what they’re doing hasn’t been mentioned once in all the text I’ve read about this issue, or the issue of AI causing technological unemployment. Almost everything that gets discussed is in the realm of narrow, functional AI, from that Roomba, to Siri, to military drones, to Google’s driverless vehicles.
AGI has some fundamental problems to solve that are completely outside the domain of what functional AI even looks at. Such as: where does volition come from? are emotions necessary to that? how can “values” be represented in a way that actually captures their potency and nuance? how are they balanced against one another?
Those, and plenty more — they’re seldom discussed, yet it is almost always assumed that these questions will be finessed somehow, perhaps because the obvious accelerating progress in functional AI, as well as progress in the underlying hardware, will magically jump from one research domain to a completely different one. It’s like the classic Sidney Harris cartoon: “Then a miracle occurs…”
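For what it's worth, here's how narrow AI typically finesses the "values" question: everything gets compressed into a single scalar objective. A toy sketch (invented numbers, not anyone's actual system) that shows exactly what's missing; there is no room in this structure for nuance, conflicting values, or the system wanting anything at all:

```python
# Narrow AI's stand-in for "want": a scalar objective to maximize.
# The weights and states are invented; the point is the shape of the
# thing, not the numbers.

def reward(state):
    # All "values" compressed into one number per outcome:
    return 10.0 * state["goal_reached"] - 1.0 * state["energy_used"]

candidates = [
    {"goal_reached": 1, "energy_used": 3.0},
    {"goal_reached": 0, "energy_used": 0.5},
]
print(max(candidates, key=reward))  # "wanting" reduced to an argmax
```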
6) Even if we do find a way around all of this and give a superintelligent AI the “coherent extrapolated volition” that represents what all of humanity would wish for all of humanity, what would prevent the AI from shifting those values just a hair’s breadth? This is what Andrew Leonard suggests in the Salon article. It really isn’t very far from following our wishes to following what we really meant by our wishes, and then to what we really should have wished for, which will also make the AI happy.
Say you’re on that island surrounded by an absurd number of cute little mice, who you want to do the best for, but what you also want is an island with a small number of creatures more like you. Perhaps give the mice all the cheese they want, and some nice treadmills, and the ability to have as much sex as they want, but no kids — except gently reprogram the mice so that they think they have marvelous kids (which you cleverly simulate, inserting the corresponding experiences into their little mice brains). Once they’ve all lived out their happy little lives, you get to move on to your new adventure.
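The compounding arithmetic is the scary part of that hair's-breadth shift. A toy calculation (the per-revision drift rate and revision count are arbitrary) of what it adds up to:

```python
# What a hair's-breadth value shift per self-revision compounds to.
# The 0.1% drift rate and the revision count are arbitrary choices.

value = 1.0      # stand-in for "what humanity actually wished for"
drift = 0.001    # each revision nudges the target by 0.1%

for _ in range(10_000):
    value *= 1 + drift

print(f"{value:,.0f}x the original target")  # ~21,917x, one nudge at a time
```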
7) Finally, we must ask what we would want of our lives (or, more likely, our children’s lives) after this superintelligence has arisen. Of course, while we might not have any choice, the default is likely to be something like what we see in the following video, so we might want to be very careful.
• • • • • • • •
Oh, and the comic view of what we'll condemn these AIs to if we get the programming wrong:
Why to-be-read: I'm a little surprised I've just now gotten around to adding this to my to-be-read shelf. I heard the hypothesis quite some time ago, and this book has been referenced in quite a few of the other cognition books that I've read. Even though I very much disagreed with Charles Murray's conclusions in Coming Apart: The State of White America, 1960-2010, this "sorting" was effectively the framing of the problem he was addressing.
The connection comes in the last portion of the podcast, when the results of an experiment are presented: people who were adamantly opposed to same-sex marriage took part in an effort to discover whether a technique for changing people's minds actually works. (The technique emerges in the earlier portion of the podcast.)
And it did. But it relied on the people with prejudices actually spending one-on-one face time with a person who was the target of their prejudice, in a non-contentious, mostly "normal" conversation. The key is that it must be a person in the target group of the prejudice. When a gay person was on the other side of the conversation, the reduction in prejudice was substantial as well as long-lasting. When the same conversation was with a straight person, the reduction didn't last very long.
Why this is important with respect to this book should be obvious: the fact that people in the United States are increasingly sorting themselves into like-minded communities means we, collectively, are not spending any time with the targets of our prejudices. I can see this almost every day amongst my liberal friends here in San Francisco, some of whom treat conservatives as an alien species, whom they don't expend any effort to actually understand. And I can see it among conservatives, too — although there isn't a hint of the violent attitudes among my liberal acquaintances that is sometimes disturbingly present in the comments of folks on the extreme right.
Well, duh. I suspect these ideas are part and parcel of this book. (As you might have recognized, the foregoing is really just a note to myself :-D )
Frankly, I'll probably never get around to reading this, because I'm one of the converted.
My only qualm here is that human nature itself is fundamentally conflicted between cooperating with others (i.e., collectivism) and trying to out-compete everyone else (referred to as "defecting" in game theory, sometimes termed "competing" in casual use, but more like free-riding or parasitism). I suspect that aspect will slow down the final stages of "the moral arc" from decades to thousands of years.
P.S. After initially posting this, I glanced through some of the other reviews to see if it was likely I would be missing anything important. According to Bilblio Files' review, Shermer has a libertarian bent. There was a hint of this in the radio/podcast interview, but I'm dismayed to see that it was evident in the book. That "cooperate/compete" contradiction in human nature I referred to often expresses itself as a collectivist vs. individualist ideology, and libertarians are the ultimate individualists, adhering to a political belief system increasingly at odds with a densely-populated high-tech planet that is struggling to get along. If I read this, I'm afraid I might have my dentist asking when I started to grind my teeth ("well, doc, that actually started when I read a different libertarian-biased book that could have otherwise been excellent").
I just accidentally realized I hadn’t reviewed this back when I finished it. Now that I think about it, I came down with a very nasty two-week-long cold the day I discussed it with a book group, and then the holidays hit, so that’s understandable.
But this is an interesting and important book, so I’m backtracking to tell y’all to read the thing. I read three “cognition” books in 2014, and this one came in second! Okay, that doesn’t sound so good.
Consciousness and the Social Brain is also fascinating, but doesn’t directly impact how we live our lives. (Unless you are researching consciousness, or have some strong interest in the topic, due to — for example — an unreasoning fear of the spontaneous emergence of a hostile superintelligence.)
What Graziano has come up with is an innovative theory of consciousness which answers questions that (as far as I’m aware) hadn’t previously been adequately addressed.
Also impressive is that this is a testable hypothesis. He provides some examples, showing how the various neurological diseases are explicable, given how neurological damage could interfere with the brain’s calculation of its own consciousness, creating some bizarre symptoms. For example, he goes into detail with respect to hemispatial neglect, among others, which has to be one of the freakiest things that can go wrong with the human brain, up there with somatoparaphrenia or Capgras delusion.
The “social” in his title is actually unfortunate, since he’s probably lost a lot of potential readers who will think of “social media” and walk away in annoyance. But this has nothing to do with Twitter or Facebook.
The “social” aspect has to do with how consciousness evolved.
Here’s a very, very simplified version of the narrative (although other explanations could also work and might be more faithful to Graziano):
Step one. Many, many millions of years ago (hundreds, almost certainly), some critter evolved the ability to pay attention. That probably sounds strange, and it is. Very primitive organisms don’t pick and choose which stimuli they’ll ignore and which they’ll attend to; they simply respond to everything. Their response might not be trivial and deterministic, but their brains aren’t capable of “tuning out” stuff that doesn’t matter and thereby spending more cognitive effort on that which does. This innovation was a winner, and slowly spread.
Step two. Many, many millions of years ago (hundreds, probably), some predator — which was able to pay attention already — evolved the ability to pay attention to what its prey was paying attention to. Imagine a lion sneaking up on a gazelle. If the gazelle is clearly engrossed in the tasty plant it’s nibbling on, the lion should continue with the approach. If the gazelle is occasionally glancing at the lion (“Dude, I can see you sneaking up on me”), then the jig is up, and the lion should stop wasting energy. That’s pretty clever, so evolution gives the predator a cookie (well, more offspring) and the ability spreads.
Step three. Many, many millions of years ago (hundreds, probably), some predator happened to be hunting the same target as a sibling, mate, offspring, or other partner. But it was able to use the “pay attention” skill to realize this coincidence, and choose to hunt in a way that was complementary. This was a big win for some predators, leading to hunting in packs — the beginnings of one form of social behavior. (Did herd behavior among prey evolve along similar lines? Not necessarily, I think — this might just be more of a stimulus/response adaptation that left weaker members of the herd as outliers, and didn’t really require individuals to pay attention to what other members of the herd were attending to. But alarm signaling among social animals would have been a later analogous development.)
Step four. This ability to pay attention, and subsequently pay attention to what others are paying attention to, spreads far and wide. Many, many millions of years ago (tens? hundreds?), some critter stumbles on the ability to pay attention to what it, itself, was paying attention to.
This isn’t just: “Oh, hey! I'm the alpha of my pack, and I'm seeing a big, tough, younger member of the pack nosing around my females; that might be important”. It is: “As the alpha of my pack, I’ve just noticed I’m watching a big, tough, younger member of my pack nosing around my females. Perhaps I should pretend not to notice, so when I rip his throat out he’s completely surprised. Heh heh heh.”
Step five. That stuff in step four is pretty complex social behavior, so probably doesn’t exist in too many species. But it doesn’t seem to be limited to hominids, since there are quite a few other critters out there we think of as tricky — corvids or coyotes, for example. So either it happened multiple times, or happened once and spread quite widely (and thus in either case began a very long time ago). Anyway, eventually this stuff that was happening in the brain (i.e., the calculations of paying attention to what one is paying attention to) becomes yet another thing the brain might pay attention to, and thus respond to.
And that’s it. No more story. Our consciousness is nothing more than the rather simple fact that we are aware of what we are paying attention to, an attentional feedback loop. That’s all there is to the little voice inside your head that is “you”. That we can thus reflexively manage what we’re paying attention to is the essence of “free will”. This ability to monitor what we're thinking is critical to human cognition, but isn’t the whole thing, since when our subconscious acts on “our” behalf, that’s still “us”.
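For the programmers in the audience, here's how I'd caricature the two layers of that story in code; all the stimuli and salience weights are invented, and Graziano would surely object to the simplification:

```python
# Caricature of the story's two layers: (1) attention as weighting some
# stimuli over others, (2) an "attention schema" -- the brain's own
# simplified model of (1), which can itself be attended to.
# Stimuli and salience weights are invented.

stimuli = {"predator": 0.9, "food": 0.6, "wind": 0.1}

# Step one: attend to the most salient stimulus, tune out the rest.
attended = max(stimuli, key=stimuli.get)

# Steps four/five: also represent the act of attending itself.
attention_schema = {
    "i_am_attending_to": attended,
    "tuned_out": [s for s in stimuli if s != attended],
}
print(attention_schema)  # the "little voice" reporting on its own focus
```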
Is this going to change your life? No, probably not. Unless you’re a hospital ethicist, perhaps, and suddenly are confronted with a new criterion of what it means to be conscious, but even that seems rather far-fetched.
There’s potentially a lot of promise behind this book, but as of today, it isn’t convincing enough.
The science and history presented in the first portion of the book almost make it worth reading, regardless of the flaws in the rest.
First, the discussion of the functional division of the various major structures in the brain was well done. For example, there is a great little diagram on page 29 that quickly and clearly delineates what the four lobes of the brain do (somewhat like this, but the book’s version is more parsimonious).
Then the authors shift to the convincing scientific evidence that different people perceive in sometimes substantially different ways, and that this is related to how their brain is wired. This is the Object-Spatial Imagery hypothesis, and it seems pretty solid. Primary visual processing occurs in the occipital lobe, but it seems that objects are recognized and processed in the temporal cortex, whereas spatial processing occurs in the parietal cortex. In other words, one part of our brain figures out what we’re seeing, but is clueless about where it is, while another knows where it is, but really isn’t clear on any details about what it is. Fascinating, and well supported.
This 2006 article is pretty easy reading, as science articles go, and has an embedded test that will quiz you on your preferred style; later portions show examples of the kinds of tests that correspond to what the different styles are good at. I’ve long known that I’m great at spatial reasoning, and I’ve always been slightly mystified when people describe in great detail what takes place in their “mind’s eye”, but this explains it: I’m pretty close to zeroed out on what apparently happens in the temporal lobe. On page 250 (the 12th page) of the article, there’s a “degraded picture” of a common object, and I literally couldn’t see it even after being told what was somehow hidden in there. (Strangely, I was going to say “I still can’t see it”, but when I looked up the page number, I was able to spot the object for the first time, but keep in mind I already knew what I was looking for.)
The old “left-brain/right-brain” comes in for a great drubbing, and includes some very interesting history, especially of Phineas Gage, a man whose improbable survival of a horrific accident led to tremendous advances in pioneering neuroanatomy. If you don’t know the story, you really should check it out. Even if you do know the basics of the story, you might not know some of the details, such as the fact that the iron rod was “three feet, seven inches long, and an inch and a quarter in diameter at its thickest point” and “landed more than sixty feet behind him” — after going through his skull!
But the worthwhile part of the book is over at that point.
The remainder explains the “top brain/bottom brain” hypothesis. They make the claim that, unlike most popular cognition tests floating around the world, theirs is based on actual science. But the connection is weak.
Here’s the gist:
The portion of the brain we’re interested in (i.e., the cerebral cortex, as opposed to the subcortical portion of the cerebrum or the brain stem) can usefully be divided into the top brain, consisting of the parietal lobe and the top of the frontal lobe, and the bottom brain, consisting of the lower portion of the frontal lobe, along with the temporal and occipital lobes.
I’m going to put it more crudely than they do, but effectively the top brain is responsible for planning and the thinking associated with that in a very broad sense. The bottom brain deals with processing sensory input, as well as any associated complex thinking.
Everyone uses all of their brain, but — according to the hypothesis — we’ll rely even more on one of these (or both, or neither), depending on our temperament and habit, some of which derives from genetic factors.
In their system, if you “rely” on both the top and the bottom, then you tend to use the “Mover mode”. If you rely on the top, but not the bottom, you’re a “Stimulator”. In the reverse, you’d be a “Perceiver”. If you don’t really rely on either, then you’re more of a go-with-the-flow “Adaptor”. Because their test might tell you that you “tend to rely” or “tend not to rely”, there are actually sixteen categories, so there’s some gray area.
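The scheme is simple enough to express as a tiny classifier. The cutoffs and score scale below are my invention (the book's actual scoring rubric isn't reproduced here), but the 2x2 structure is theirs:

```python
# The four modes as a 2x2 on (top-brain reliance, bottom-brain reliance).
# Score scale and cutoff are hypothetical; the book's actual scoring
# rubric is not reproduced here.

def mode(top: float, bottom: float, cutoff: float = 0.5) -> str:
    if top >= cutoff and bottom >= cutoff:
        return "Mover"
    if top >= cutoff:
        return "Stimulator"
    if bottom >= cutoff:
        return "Perceiver"
    return "Adaptor"

# A gray band around each cutoff ("tend to rely" vs. "rely") is how
# four quadrants become the book's sixteen categories.
print(mode(0.8, 0.7), mode(0.8, 0.2), mode(0.3, 0.9), mode(0.2, 0.1))
```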
The first problem is how they describe those categories. Even though they assert that none of them are better or worse, it will quickly become clear that the “Movers” are going to be the heroes here. And you can’t say you didn’t see that coming — after all, if you don’t rely on some major portion of your brain, you’re likely to run into some problems, aren’t you?
The first disturbing weakness in the “science” shows up when they provide the detailed descriptions of the four modes. They explain how two actual public figures and one imaginary person exemplify that mode’s behavior. One of the real humans is contemporary, the other is historic. Then there is a just-so story made up to “illuminate” the hypothetical person. This should get you wondering: of the many billions of people on the planet that must fit this category, this is the best they can do? Without conducting any actual tests on Michael Bloomberg (the ex-mayor of New York) or the Wright brothers, the authors use them as archetypal Movers. Are there any real “normal” human beings that walk amongst us that are also Movers? Because the best they can do is an almost idealized “Lisa”, whose story ends with her considering whether to found her own startup.
The depictions of the other three modes are no better, and despite the authors’ contention that none are really better than the others, they make it increasingly clear that we’re gradually getting into loser territory. Everyone who isn’t a Mover had better marry well, so their spouse complements their flaws.
Chapter thirteen introduces the test (which you can also take online, although no explanation is provided). The next chapter explains how scientific the test is, although it doesn’t take a very close reading to see some pretty gaping holes.
After writing hundreds of questions, they evaluated many hundreds of responses from online test-takers, and figured out which questions correlated well with one another. To them, that means they’re finding questions that measure the same thing, albeit from different angles. Fine, as far as that goes. At that point, they tested how people’s scores on their final test correlated with well-established standardized tests — a lengthy list is provided at the bottom of page 166. Frankly, that sounds backwards to me — design the test, and then see if it correlates to what you hoped it would?
The big problem is that the scores from the questions intended to measure reliance on top-brain functions are what correlated to all those tests (which cover quite a spectrum of psych tests), whereas… well, this is the way they put it:
Specifically, the scores on the bottom-brain scale did not correlate with any of the other test scores; this means that these scores are measuring something completely distinct.
Got that? Because that is all they’re going to say about it. There is no evidence given that the bottom-brain scores have anything to do with bottom-brain functionality. Or, if there is, it didn’t occur to them to provide it (I suppose I may have missed it, but I double- and triple-checked). All we know is that the “something completely distinct” being tested is statistically consistent amongst the questions, but not what it actually is.
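For the statistically inclined, here's the distinction they skate past, in sketch form: internal consistency (the items correlate with each other) versus external validity (the scale correlates with an independent measure). The data below is randomly generated, purely to show the two computations:

```python
import numpy as np

# Internal consistency vs. external validity, with synthetic data.
# The "bottom-brain items" below share a common factor, so they
# correlate with each other -- but that says nothing about what
# construct they actually measure.

rng = np.random.default_rng(0)
items = rng.normal(size=(500, 10))        # 10 hypothetical scale items
items += rng.normal(size=(500, 1))        # shared factor: items cohere
scale_score = items.mean(axis=1)
criterion = rng.normal(size=500)          # an established external test

print(np.corrcoef(items, rowvar=False)[0, 1])     # positive: "consistent"
print(np.corrcoef(scale_score, criterion)[0, 1])  # ~0: validity unshown
```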
The remaining chapters go into using your knowledge of your mode to learn to play well with others, blah blah blah. Honestly, at that point I was almost skimming, looking for a life saver that would rescue my opinion of this book.
How I scored, personally, also affected my judgement of the book. I don’t think that is a great reason for anyone else to dismiss it, since I’ll be the first to acknowledge that I’m an outlier in many ways, and since those are none of your business, I’m not going to provide any substantiation. However, I will say that I was tagged as a lean-towards Stimulator, which came as something of a surprise, since neither my actual life nor my inner thinking bears any resemblance to what had been described in that chapter. What I actually spend far too much time doing came closest to the librarian described as the hypothetical Perceiver, but that requires that I not rely on my top-brain, which is laughable.
I didn’t give this one star for two reasons: the first is that the introductory chapters are interesting, and worth reading. Get a copy from the library and read through Chapter Six, and you’ll have a quick and easy read about some interesting aspects of neuroanatomy and cognition. Don’t buy a copy, because these authors shouldn’t be rewarded for what is honestly shoddy work.
But the second reason is more nuanced. Enough of what they are working towards seems sensible that this could be a deeply flawed first hint at a better model of how people’s behavior emerges from how the brain is used.
Instead of everyone falling into four modes (or along two intersecting spectrums, which is what the sixteen categories hint at), what this suggests to me is that the more high-functioning you are within a context, the better you’ll use the relevant portion of the brain. Or, probably, the reverse: the better you use a relevant portion of your brain in a certain context, the more high-functioning you’ll be in that context.
Visualization along the spatial/object spectrums is a good indicator: people that can easily image spatial information would be better at navigating, for example (something I excel at), while someone poor at object visualization would make a poor illustrator. In fact, the last two pages of the book returned to an excellent example of this, referring to a research experiment that tested the spatial/object visualization hypothesis.
Pairs of people were assigned the task of navigating a maze (video-game style) peopled with “Greebles”. The navigation task required spatial visualization, the Greeble recognition required object recognition. If one or the other skill was missing from the team, they’d score poorly in the game. If the people with the required skills were assigned to the appropriate roles, the team scored very well. All that is as one might expect.
However, if the correct skills were present between the two, but those people were assigned to the wrong tasks, things got interesting. If they couldn’t communicate, they did horribly, but if they could talk, they quickly recognized that the trick would be for each to direct the other, and they scored quite well.
The PhD author, Kosslyn, was the last-named author of the paper that described this (the link is above). In coming decades, it seems certain that we will decode which functional structures of the brain do what, and it seems reasonable that the ability to actually perform those functions well will require strong neural linkages to the other brain structures that provide executive function. This book hints at that direction, but poorly so — it should have been shelved for a few years until a clearer picture had developed, and more evidence for any model could be presented.
Sounds very good, although I haven't finished thinking about the last two books on moral cognition I've read, much less my reading survey on evil.
Interesting tie-in between contradictory impulses [err, not quite right] in response to trolley problem and Kahneman's System One and System Two thinking. Relates to Consciousness/AI and Who-do-we-want-to-be-when-we-grow up.
Update, April 2015: An acquaintance of mine recently became the co-host of the science-ed podcast Inquiring Minds (in the Mother Jones journalistic family), so I'm checking out selections of their back catalog, and Joshua Greene was interviewed. I'd have to listen to the KQED podcast again to see if he covered the same material, but it felt different, although that might just be that I've been focusing on different questions (the sources of partisanship, instead of evil). So maybe this gets bumped up. I'm still annoyed that the NY Times didn't review this book. (Podcast bonus: the excellent Jonathan Haidt was also interviewed about the Science of Tea Party Wrath.)
Professor Flynn famously detected that I.Q. scores have been steadily rising since they were first created (known as the Flynn Effect), destroying any prior belief that they were genetically determined. His focus over the years has been to disentangle the effects of upbringing and culture from biology.
He was interviewed by the Scientific American podcast Science Talk, available on the web, as well as on the Australian Broadcasting show All in The Mind, available at abc.net.au (mp3 and transcript), or on iTunes.
This is an enormous fan fic version of the first book in the Harry Potter series, rewritten portraying Harry as a hyperrationalist.
Not worth five stars as a work of fiction per se, but fascinating enough to get bumped up to amazing because of several other factors:
• Folks with mildly compulsive rationalist and/or scientific leanings often have trouble with the nonsensical goings-on of magical worlds. Occasionally Yudkowsky nails this so well that I was laughing convulsively. That the author sometimes over-indulged in this, and very often got too preachy about aspects of the world that aren't perfectly rational, is probably the biggest flaw here as a work of fiction. Sadly, folks that already know what the fundamental attribution error is, or disdain television news because they understand the availability cascade, and can discuss Kahneman's System 1 and System 2 at length — well, the choir can get tired of the preaching. And the folks that don't already know that stuff are unlikely to suddenly find the lectures worthwhile, because they really interfere with the flow of the novel. If you enjoyed Rowling's original series, and have at least a passing familiarity with some of that nonsense I just listed, you should take a gander at this.
• Dark, dark, dark. Rowling's book is targeted at young adults — or younger, actually. Yudkowsky's Harry thinks Ron Weasley is too dumb to waste time on from day one, so we quickly learn that Harry doesn't tolerate fools. The author seriously engages the question of whether Rowling's bad guys actually have sensible grievances, but are perhaps simply more realistic about moral complexity, as Harry sometimes (but not always) is. Thus, Draco Malfoy becomes a very major character. One of my biggest peeves with most fantasy is that the characters go through life-threatening situations yet seldom suffer. Yes, Rowling killed some secondary characters, and kinda killed one major character, but too little too late, really. "What part of suicide mission didn't you understand?" is a line I keep hoping to hear, and I'm happy to say Yudkowsky seems inclined to address that — although you might not enjoy some of the consequences.
• Startlingly good characterizations. Harry becomes in many ways a more complex and layered persona than in the original, and Yudkowsky's Draco is far, far more interesting than one would expect. The adults benefit a little from examining their reactions to the more nuanced Harry, but suffer by being confronted by a child with the mind and experiences that no child could reasonably have attained. The exaggeration of this here actually illuminates a trope that is too common: by privileging a character with knowledge and skills far beyond what a reasonable person could anticipate, those others can too easily be portrayed as idiots. But this is itself unreasonable — expecting children to be merely children is rational for humans, with their limited cognitive capacity. Kahneman's subconscious System 1 thinking is an evolutionary adaptation that lets us think more efficiently, albeit frequently at the expense of accuracy. Anyone constantly trying to use System 2 ratiocination to overcome the cognitive traps which evolution has planted in our brains will suffer persistent ego depletion, and won't be able to function.
• There are some plot developments here that are much more intriguing than what I remember from the canonical series. Probably the best is the long-term project that Harry convinces Draco to address with respect to House Slytherin, which swaps out Rowling's simplistic social world and puts in a much more nuanced and realistic one.
This is like an insightful cover version of a great song (like William Shatner's punked up version of Pulp's "Common People"); it adds something new without detracting from the original. If you are interested in seeing how the Harry Potter series can be subverted, converted, diverted and perverted into something delightfully new, assuming you hit the target audience criteria, then check it out.
Oh — this isn't in print or published; it is effectively an on-line ebook. Aim your ebook reader or web browser at http://hpmor.com
The subtitle here is the hook: “Happiness for People Who Can’t Stand Positive Thinking”. Many of the ideas presented within these pages were already at least vaguely familiar to me, especially those of the Stoics and at least some of the Buddhists. But, really, the word “happiness” is out of place. Even before the Stoics existed, wise Greeks had recognized “call no man happy until he is dead,” and Burkeman’s thrust here is that striving for happiness is almost certainly a bad idea.
A better goal is “acceptance”, and several variations on that are presented. This is a very good (albeit not perfect) book, illustrating several schools of thought that bear on the issue of happiness — or contentment, or acceptance; there are definite nuances.
An amusing and snarky appraisal of the world of self-help books and motivational speakers starts the book, but it starts delivering strongly in chapter two, What Would Seneca Do? If you look up “stoicism” in a dictionary, you really aren’t likely to get a good grip on the concept. The first definition Google hands out is “the endurance of pain or hardship without a display of feelings and without complaint”, but there is a fundamental flaw in that, which is that “display” isn’t the point. The second definition refers to the philosophy of Zeno, the Greek founder of the school, and tells us Stoics will be “indifferent to the vicissitudes of fortune and to pleasure and pain.” That still seems a bit off, but that might have to do with five hundred years of evolution from Zeno to Marcus Aurelius.
Burkeman zeroes in on the same thing that Shakespeare put in the mouth of Hamlet: “for there is nothing either good or bad, but thinking makes it so.” When you are stuck on a plane with a crying baby in the seat behind you, what makes it unbearable isn’t inherent in the baby’s act, but in your reception of it. A Stoic will observe and negate that aspect of that reception, which makes it much easier for that “hardship” to be “endured”. Not easy, no; but there’s a trick that helps. The subtitle of the chapter is The Stoic Art of Confronting the Worst-Case Scenario.
Ponder the difference between a terrible situation and a merely undesirable one, and the latter becomes much easier to tolerate. He extensively quotes the renowned psychologist Albert Ellis. “Even if you were murdered, ‘that is very bad, but not one-hundred percent bad,’ because several of your loved ones could meet the same fate, ‘and that would be worse. If you are tortured to death slowly, you could always be tortured to death slower.’” So that crying baby could have been accompanied by an older kid kicking the back of your seat, and parents who are discussing the wit and wisdom of, say, a political pundit whom you despise. And the flight could be from New York to Sydney, instead of merely to Los Angeles.
This trick comes into play later on, as well. The motivational gurus would have us only think positive thoughts, but the lesson here is that we could easily be better off by examining the negative — that worst-case scenario. After all, someone fixated on the best outcome imaginable will be disappointed much more often. It could be asserted that focusing on the positive helps one push harder to attain one’s goals, but the evidence for that is pretty weak. A later chapter (The Museum of Failure) reminds us of the effect, here, of survivorship bias: people that don’t succeed are seldom eager to talk about it, so we get a distorted picture of what conditions pertain to success.
I found the next chapter, on the Buddhist take on this problem, to be moderately enlightening. I’ve always been attracted to Buddhism, and in the past year or so I’ve realized why. When I think of Buddhism, I pretty much narrow it down to Stoicism-plus-Meditation. There are quite obviously many ways in which this is gonna be wrong, but I’m comfortable with it. There are aspects that completely repel me (“To the Buddha the entire teaching is just the understanding of dukkha, the unsatisfactory nature of all phenomenal existence, and the understanding of the way out of this unsatisfactoriness.” I mean, I just don’t think existence is all that bad. I suspect things were worse in India twenty-five centuries ago, though.)
I’m not sure how accurate he is, but Burkeman explains a key difference between the acceptance of the Stoic and that of the Buddhist.
The perfect Stoic adapts his or her thinking so as to remain undisturbed by undesirable circumstances; the perfect Buddhist sees thinking itself as just another set of circumstances, to be non-judgmentally observed.
Got it? One is saying, “Meh, could be worse. I’m not gonna let this bother me,” and the other, “Oh, observe, young grasshopper: your mind is experiencing pain because of that arrow sticking out of your thigh. Interesting, is it not, what tricks the material world plays upon us?”
But I don’t think that all of existence is suffering, and I plan to continue to perceive bad and good as judgmentally distinct. So, given that, I’m firmly in the Stoic camp, right? Well, remember part of that definition? “Indifferent to the vicissitudes of fortune and to pleasure and pain.” I don’t really want to be indifferent to pleasure. Next time I’ve got a dentist jabbing my mouth with sharp things, I’ll try to use the Jedi/Vulcan/Stoic Mind Trick to remain unperturbed by my suffering. But next time I’m up in the mountains, gawping at the magnificence of snowmelt crashing over granite cliffs, I certainly don’t want to perceive that as merely “another set of circumstances”. To the analytically inclined, my goal would be to focus this skill more on the unpleasant side of the Gaussian distribution of life experiences.
The next few chapters engage in some specious over-intellectualizing along with some very good stuff. The central idea returns, more or less, to the introductory chapter’s dismissal of striving for happiness, although the goal being strived for shifts to “security” or “success”, etc. To strive is, obviously, not the same as to attain. And for many goals, the dilemma is that the act of striving can work against the attainment. The author interviews the security expert Bruce Schneier (whose fairly recent book I gave five stars to, and I’ll plug here) in noting that the efforts of the developed world to feel safe in the last dozen years have almost certainly rendered us objectively less safe, in addition to other costs, some of them worse. A visit to the staggeringly poor slum of Kibera (part of Nairobi) helps remind us that happiness doesn’t correlate strongly with wealth, and (surprise!) those who see wealth as a primary goal are probably among the least happy of all of us.
The bad logic comes in when he tries to make the case that we are all one. Well, he denies that precise formulation, but there’s a lot like that. I mean stuff like this: “There cannot be a ‘you’ without an ‘everything else’, and attempting to think about one in isolation from the other makes no sense.” I don’t want to belabor this review with that, though. The book is very good in spite of it, so just wade through the mystical junk and everything will be fine.
When you get to the Museum of Failure, he’s back on pretty firm ground. The chapter ends with an excerpt from the famous commencement speech J.K. Rowling made at Harvard in 2008 (text, video), in which she talks up the benefits of failure. Burkeman rightly differentiates two ideas. Those that think like our sneered-at motivational speakers will argue that failure is inevitable on the way to the top, and expecting it and getting over it is healthy. Living in San Francisco, I swear almost every time I hear an entrepreneur speak they’re touting their failures like merit badges. But that still focuses on the striving, not on acceptance, and for most of us there is no pot of gold at the end of the rainbow. The other perspective is the one that Rowling also points to: absolute failure is liberating. (Yeah, duh, it is indeed a little ironic, given that it liberated Rowling to become staggeringly successful and wealthy.) You can think of it as something like “there’s no way to go but up!”, but that isn’t the point. If your goal is still “up”, then you’ve missed the point.
The book ends well with the last chapter, Memento Mori. Death is, after all, the ultimate failure. Steve Jobs is quoted, aptly: “Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked.” This returns nicely to the lessons of the Stoics; the Roman general who chose to have a slave walk behind him in his victory parade whispering “Look behind you! Remember that you are a man! Remember that you’ll die!” was probably a Stoic.
Burkeman visited Mexico during the Día de los Muertos festivities in order to witness a culture that retains greater intimacy with death. This is timely, of course, since Hallowe’en is just days away, and here in San Francisco we take this holiday very seriously. I’ve decided I’m going to visit the Mission District tomorrow and pick up some sugar skulls and maybe some tequila.
Oh, and a bit of humor: Burkeman begins his tale by studying how motivational speakers (and self-help authors) typically worship at the altar of success and optimism (and how this is ephemeral blah blah blah), and later examines how accommodating oneself to failure and eventual death can be psychologically beneficial. So I was primed and amused when this showed up within my event horizon:
2016 update: Good tie-in to the current political discussion about how economic injustice leads to social injustice: The Psychological Argument for a Universal Basic Income. Personally, I think the best argument for a UBI instead of a higher Minimum Wage is that the technological unemployment of the coming decades is going to make it harder and harder for many people to be employed at all, and a high Minimum Wage isn't much of a social safety net for the unemployed. I haven't seen a plausible plan for a UBI yet, but it is probably going to be needed for social stability.
• • • • • • • • •
Are the poor to blame for their poverty? For their flawed choices?
Are the overweight, struggling with a diet? What about those who complain of being too busy? What about the lonely?
What these have in common is scarcity, something that economists have always studied. But until fairly recently, the idea of studying cognition, or feelings, from an economic perspective would have been absurd, or even heretical. The fields of behavioral economics and neuroeconomics have changed that, taking off like a rocket when Daniel Kahneman, a psychologist, won the Nobel Prize in Economics.
What Sendhil Mullainathan and Eldar Shafir focus on is how our minds function when they perceive scarcity — or, at least partially, become dysfunctional. The term is "scarcity trap", and the basic idea is that our brains so tightly focus on what is so desperately lacking that thinking about anything else becomes tremendously difficult.
The result is revelatory — there are profound implications for how our governments' poverty programs should function, for what diets are likely to work, or even how overly busy parents of newborn (or sick, etc.) children react.
This is an important book, or even a critical book. We all have seen discussions of inequality gain attention across the political spectrum, and throughout the world. Piketty’s book brought it to a head in the blogosphere, but we’d been watching the Occupy and 99% movement for some time.
Scarcity: Why Having Too Little Means So Much tells us that in many ways, the situation is worse than we thought. Not only are we tolerating economic and social policies that worsen the situation of more people with each passing year, it seems that being poor creates cognitive problems that make the burden even tougher to overcome.
Scarcity is the curse. The subconscious perception of scarcity changes how we think in ways that are detrimental to escaping whatever is causing scarcity in the first place.
This probably wasn’t always so. We can imagine, once upon a time, a world that was so much less complicated that the mechanisms described here didn’t backfire, and instead helped those individuals get back on their feet.
(Note that poverty, while it is the form of scarcity that deserves the most attention, is definitely not the only one that is addressed in the book. More on that below.)
That scarcity is the cause of the problem and not the result requires a significant conceptual reframing.
Let’s go through the paradigm they lay out:
The authors start out exploring focus under conditions of scarcity. If two people are told to identify words flashing very, very quickly before them on a screen, it turns out that hunger will increase the effectiveness of recognition of words associated with food, without decreasing effectiveness of other words. This focus is a good thing, right? There are many, many examples where that is precisely what we want.
What is happening is that scarcity causes adjustments to be made by unconscious parts of the brain, and our conscious brain is much more easily “captured” by stimuli that respond to that scarcity. We can’t control it — that point is made time and again here.
The word they use to describe this is tunneling. When scarcity causes us to focus, we descend into a cognitive tunnel, and aspects of the world that don’t deal with that scarcity become less visible. We can even become completely oblivious. Even when the focus is voluntary, this is evident. We’ve all been so deeply engrossed in something (reading, playing a video game, watching a tense game) that we are startled by someone telling us they’d been trying to get our attention for some time. Those sidelined stimuli have been inhibited from reaching our awareness. Other objectives we might otherwise have thought important can be eliminated from our consideration by goal inhibition. A firefighter neglecting to fasten a seat belt in the urgent rush from the station to a burning building is a salient example (although the scarcity here is of time, not money).
But if it is scarcity that is causing the tunneling, we can’t escape it easily, and we fall back into it more readily even when we do escape. What tunneling reflects is a lack of bandwidth. The term is annoyingly contemporary, but quite apropos, because (like the networking term) it encompasses two related but different resources. Tunneling taxes both our cognitive capacity (i.e., “intelligence”) and our executive control (i.e., “discipline”).
Another way of perceiving this tunneling is very revealing. A common way of prioritizing a to-do list is to rank each item by both urgency and importance. Something that is urgent, but not important, might be ranked higher than something that is important, but not urgent, correct? Tunneling demands that we focus only on what is urgent, even if it isn’t important. This seems counterintuitive, but the book provides plenty of supporting evidence. What this means is that what is merely important, but never urgent, is consistently suppressed. For example, replacing seriously worn tires on the car is important, of course, but at no point is it necessarily urgent, until it is too late. Dental care, same thing. Budgeting for long-term but completely predictable expenditures is important, but someone tunneling through life, with two jobs with variable hours, child care troubles, etc., will often be surprised to discover that something important has crept up on them.
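To make that concrete, here is a toy sketch in Python (my own illustration, not anything from the book; the tasks and scores are invented). It contrasts a balanced ranking that weighs urgency and importance against the urgency-only ranking that tunneling effectively imposes:

```python
# A toy illustration (mine, not the book's): tasks scored by urgency
# and importance, both on a 0-10 scale. All values invented.
tasks = [
    ("answer the boss's email",    9, 3),
    ("pick up the kids on time",  10, 8),
    ("replace badly worn tires",   2, 9),
    ("schedule a dental checkup",  1, 7),
]

def balanced(task):
    _, urgency, importance = task
    return urgency + importance   # weigh both dimensions

def tunneled(task):
    _, urgency, _ = task
    return urgency                # importance is inhibited

print("Balanced ranking:")
for name, *_ in sorted(tasks, key=balanced, reverse=True):
    print("  ", name)

print("Tunneled ranking:")
for name, *_ in sorted(tasks, key=tunneled, reverse=True):
    print("  ", name)
# Under tunneling, the tires and the dentist sink to the bottom:
# important, never urgent, consistently suppressed -- until too late.
```

The point of the sketch is just that dropping the importance term doesn't shuffle the list randomly; it systematically buries exactly the items the book says tunneling buries.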
Even when they emerge from that cognitive tunnel, their troubles won’t be over, of course. This is where juggling comes in: suddenly all those other important things are visible, but there isn’t enough time or energy (or slack) to consider them, much less money in the bank account. The stress is likely to kick them straight back into a scarcity mindset, one where the “bandwidth tax” imposed by scarcity affects their intelligence and discipline.
Just to remind us that these problems aren’t confined to the poor, whom we might privately suspect are dysfunctional anyway, the authors provide several counterexamples.
By way of an empirical analysis, they quiz strangers in a mall. After gathering some socioeconomic data, they test the participants’ intelligence, then ask a key question, then test some more. The key question is designed to selectively trigger the scarcity-capture phenomenon. Half of the subjects are asked how they would deal with a sudden emergency (car repairs) costing about $150; for the other half, the figure is bumped up to $1500. For those at the high end of the economic scale, there was no change in the intelligence testing. But for those downscale, the later tests showed a significant cognitive deficit, as much as fourteen IQ points, which at least temporarily would make them “borderline deficient”.
Another empirical study looked at how air traffic controllers interact with their families. On days when the air traffic load was low, the controllers had a cognitively easy day of it, and went home and appeared to interact with their children in a stereotypically upper- or middle-class manner. On days when the job was especially tough, their interactions with their family were troubled and reminiscent of a stereotypical lower-class family.
The effect of scarcity is seen across cultures and in several domains. Quite a few of the studies cited take place among struggling farmers or impoverished street vendors in India. Others involve struggles with diets (a “scarcity” of permissible calories, in effect) or loneliness (a “scarcity” of social interaction).
In fact, the book is chock-full of interesting examples. Some are illustrative just-so stories or telling anecdotes, but the forty pages of endnotes are tied to a large volume of empirical evidence. This weight of substantiation is necessary because the message is counter-paradigmatic. While we often remind ourselves not to blame the victim in other contexts, that habit is still pervasive in many domains. Even among those on the political left, policies often assume that the poor don’t understand something, when the theory of scarcity-induced cognitive deficits would tell us instead that they don’t have the money/time/energy to act on what they often know quite well. The numerous examples of how busyness (or dietary failure) among the not-impoverished leads to the same kinds of flawed behavior are a salutary reminder that this isn’t a phenomenon of poverty, but part of human cognition.
Unfortunately, the mass of examples gets in the way of clarity. There might be too much narrative; those who are unfamiliar with the state of cognitive research might be uneasy enough with the evolving argument to dismiss the conclusions and stick with their preexisting opinions. (Actually, it is worse: most people whose preexisting opinions lean in the other direction are probably wary enough of cognitive research that they won’t even open this book.)
Even if this book were only about poverty, the implications really are staggering. As the authors say, “one prevailing view explains the strong correlation between poverty and failure [to make good choices in life, etc.] by saying that failure causes poverty. Our data suggest causality runs at least as strongly in the other direction: that poverty — the scarcity mindset — causes failure.” This book tells us that we should be reexamining all of our policies and social adjustment mechanisms from a different angle, not just because they would be more effective, but also because of the fundamental unfairness of creating obstacles that perversely can make peoples’ situations worse.
But this is an academic book. There is no sense of outrage to incite change through passion. It doesn’t make the dire predictions of Piketty, stirring controversy and wider discussion. Many of those reading this will respond: “Oh, yeah. Duh!”
This is a five-star book because awareness of this theory and its profound social and political implications needs to be elevated. Please read it even as a self-help book for your own life (I rearranged my daily habits to make sure this review got written — something that I might otherwise have considered important, but not quite urgent). But the goal, really, is to think about it enough that it changes one’s perspective on the struggle of many of our fellow humans.
Excellent reviews and articles from around the web:
The human world is strongly conditioned by beliefs, attitudes and cognitive biases that we received from our evolutionary heritage. This topic has been one of the focal points of my reading for several years now, and I can attest that Bruce Schneier’s Liars and Outliers: Enabling the Trust that Society Needs to Thrive serves as an excellent overview.
The book’s dust jacket tells us that Schneier is a “security technologist”; his Wikipedia page clarifies that he is a cryptographer and computer security consultant. It is important to note that this book has nothing to do with computers or cryptography — it is a somewhat academic treatment of how society relies on trust to facilitate the implicit agreements that, effectively, constitute society itself.
One key point is that we evolved with a willingness to trust others under some circumstances, and not in others. The former was aimed more at the narrow world of our relatives, immediate circle of friends and tribe; the latter was aimed primarily at the strangers outside that world. But of course this is fluid; enmity within the tribe or even in a family could trigger a lack of trust, and it is even possible that a stranger could acquire a reputation that permitted trust in some contexts.
Another key is that trust shows up in identifiable patterns, which persist over time. Some of those patterns become formalized, such as the role of “boss”, or even institutionalized, such as the way courts work. Others remain informal, such as tipping, or even remain largely unspoken, such as what duties adult children owe to their parents. Clearly, we rely on each other to respect and follow these patterns — they are actually what constitutes the infrastructure of society at all levels, even within a family or between friends.
Do you see that there are several dimensions and many variables in this? One dimension is the scale of the society involved, which is itself of a fractal nature — the family is to the clan what the neighborhood is to the city, for example. Another dimension is time, since reputations can only be established over time. Among the variables are the types of pressures that guide us when we follow the rules. Do they come from inside us, internalized from childhood observations? Or are they external, such as laws or religious edicts? Or are they actually artificial, such as fences or protection by passwords?
The book’s only real weakness is an unfortunate side effect of its greatest strength. Schneier’s treatment is explicitly academic, and this could make things a bit of a chore for some — but while the author isn’t a storyteller like Malcolm Gladwell, the text never becomes plodding or too pedantic. (Along the spectrum of academic writers, I’d put him below Dan Ariely, at about the same level as Steven Pinker, a bit above Daniel Kahneman, and far better than George Lakoff.)
But the academic approach emphasizes something that wouldn’t be immediately apparent otherwise, which is that the concepts here are applicable to an astonishingly wide range of situations. Someone already familiar with the basic applications of game theory will immediately recognize the language of “cooperate” and “defect”, for example, and won’t really see anything new in the early chapters.
Once an analytic model has been introduced and terms have been defined, however, they are invoked in the exploration of a very wide variety of instances. In fact, by the time the reader nears the end of the book, they’ll probably start noticing examples of their own in everyday life.
Let me provide an example that occurred to me while I was reading this book. I live in California, and my primary motorized transportation is a motorcycle. California is unlike every other state in the United States in that lane splitting is not illegal. So in other states, there is an institutional pressure (to use the jargon Schneier provides), in the form of law, to not slip between lanes of traffic and jump ahead.
In California there is, in parallel, a small amount of societal pressure in the form of disapproval from automobile drivers (who believe that the practice is dangerous, although it often is not), and even perhaps some moral pressure (arising from the sense that one is unfairly jumping ahead of one’s “rightful” position in a queue). There might even be some reputational pressure from friends who find it objectionable.
But at the same time, there is a mild social pressure in the opposite direction from fellow motorcyclists — if you don’t take advantage of the motorcycle’s strengths, then you’re a sucker for sticking only with its weaknesses. That might also create some reputational pressures in some circles.
This means there is a social dilemma, in which two competing sets of pressures are likely to influence an individual’s behavior. In the terminology of the text, if the motorcyclist stays in the automotive lane, they are cooperating with those who would prohibit lane splitting, otherwise they are defecting.
But among motorcyclists, there are completely unwritten and largely unspoken sets of norms regarding the conditions under which one should or should not split those lanes. When I try to explain to my non-motorcyclist friends that lane splitting is quite legal, they invariably cite their horror at being passed even at full freeway speeds by motorcycles traveling well above the speed limit (typically, in my experience, testosterone-poisoned young men on sport bikes or self-styled “outlaws” on Harleys).
Well, yes. But those riders are in turn defecting from the norms implicitly agreed to by the overwhelming majority of more sensible motorcyclists. So, for example, if one of my nephews were ever to take up riding, I would explain both the legal and the extra-legal norms that I would hope they would follow. As an individual, I could only use societal and reputational pressure and try to invoke moral pressure.
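Just for fun, here is a back-of-the-envelope Python sketch of that lane-splitting dilemma (my own toy tally, not Schneier's formalism; the pressure categories follow his terminology, but every weight is invented). Sum the pressures toward cooperating, i.e., staying in the lane, against those toward defecting, and see which side wins:

```python
# A toy tally (mine, not Schneier's model) of the lane-splitting
# dilemma. Category names follow his terminology; weights are invented.
pressures_to_cooperate = {   # i.e., stay in the lane
    "moral":         0.2,    # queue-jumping feels unfair
    "reputational":  0.3,    # friends who find it objectionable
    "societal":      0.4,    # glares from automobile drivers
    "institutional": 0.0,    # legal in California, so no pressure from law
}

pressures_to_defect = {      # i.e., split the lanes
    "peer":      0.5,        # fellow riders: don't be a sucker
    "practical": 0.6,        # the whole point of a motorcycle in traffic
}

def decide(cooperate, defect):
    """Pick whichever side's pressures sum higher."""
    if sum(cooperate.values()) >= sum(defect.values()):
        return "cooperate"
    return "defect"

print(decide(pressures_to_cooperate, pressures_to_defect))  # -> "defect"

# Cross the state line and the law kicks in: bump "institutional" to,
# say, 1.5 and the very same tally flips back to "cooperate".
```

The sketch oversimplifies wildly, of course, but it captures the point of the example: no single pressure decides anything, and removing one (the law, in California) can tip the whole balance.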
I find something that Schneier notes interesting enough that I want to make it explicit: Laws receive no special treatment in this analytic model. They are merely one form of institutional pressure. Over-reliance on the explicit mechanism of the legal system is one of the problems we run into over and over — if the other pressures are absent, then the law will have very little force. Think, for example, of laws against speeding. Even worse is when countervailing pressures are ignored. Those villains of Wall Street typically have social, reputational and institutional pressures that overwhelm the weak regulations and laws we place in front of them.
Which is precisely the point: if we don’t examine and understand the holistic set of pressures people will be acting under, we won’t get the outcomes we desire. In earlier phases of human civilization, the internalized and intuitive pressures associated with morality and reputation played a dominant role. Today, we rely much more on institutional pressure — especially laws — and security systems. But those simply don’t work very well in comparison. The internalized rules are heuristics which we automatically apply to any situation, while rules and laws must be very explicit and specific.
Think about how much trouble we have with graffiti (ignore for a moment that some people don't think it's much of a problem at all — our society certainly treats it as one!). It isn’t particularly uncommon on a city bus to watch someone climb aboard (inevitably through the rear exit, and so without paying), pull out a permanent marker, scrawl a tag and leave the bus. A hundred years ago, we were surrounded by people we knew, and such delinquency would have had long-term reputational consequences — so it almost never happened. Society today is largely anonymous, so reputational pressures have collapsed.
Schneier does point out that anonymity has this effect, of course. But he isn’t a social philosopher, so he doesn’t spend as much time on this as I would have liked. A pretty clear trend in our world is growing social isolation and the consequent anonymity, despite the rise of social sites on the internet (or because of that shift). More careful study of the mechanisms outlined in this book is necessary, but is there some point on the horizon at which they will no longer be sufficient?
In this and other ways, I often wonder whether the human individual has been programmed (weakly, yet adequately) by evolution in ways that eventually break down in a sufficiently large and anonymous population.
The book is an excellent introduction to a very peculiar way of looking at society, albeit a way that brings into sharp focus the reasons behind many of our contemporary troubles. I highly recommend it.
I feel like I should give this a one-star review, but also a three-star review. Ergo, the compromise.
One star for its personal appeal: I found it boring. Considering I love pop cog in general, I found this a little surprising. On reflection, I realized this book's appeal (except, perhaps, to comedians and professionals in the cog biz) is theoretical. It is unlikely that any disease will be cured if someone nails the theory of funniness, and the only profound change foreseeable in society at large will be when someone creates a humorbot, which appears to be some ways off. I've read plenty of books that only have theoretical applications, but they help me understand social problems that I find important (such as how the cognition of morality heightens partisanship and reduces the likelihood of our civilization solving some pressing problems).
Three stars for those who do find the theory of humor appealing. Even for them, this is a pretty dry book, I think.
For someone who wants to create jokes or humor, there is plenty of material here that will provoke thought as to where to experiment, and why those approaches are likely to work.
Oh, there is some humor interspersed, of course. There are plenty of examples of what the authors are dissecting, and some of them are good.
Here's a sample from the exploration of one-liners: "Dog for sale: Eats anything and is fond of children." If you get bored of the actual content and skim for the jokes, you'll find better and worse.
So here is the joke I transposed and updated from one of theirs:
An engineering team was demonstrating their voice-synthesis software to their executives, and decided to have some fun. So they built a cardboard robot on stage, hiding the computer within, programmed with a series of jokes making fun of management. On the day of the presentation they watched and enjoyed the mixture of discomfort and ironic amusement among the audience when, to their surprise, someone in a back row seat stood up and started complaining. "Managers play an important role in business! Just because engineers and their managers see the world from a different perspective isn't evidence that managers are stupid — I'm sick and tired of being treated like an idiot just because I've taken a job that isn't as hands-on as the people I'm managing".
The engineering team nervously glanced at each other, until the team manager stood up and apologized: "Uhm, we meant this in good fun, and certainly didn't intend any" —
The complaining manager cut him off: "Quiet — shut up! I'm talking to the robot, not you."
This was actually a blonde joke in the book; I thought I would update it to a group that is a more politically correct target for scorn.
(Darn it, the SFPL copy is LIB USE ONLY, and I really don't wanna sit in the library reading it, nor do I want to buy it. Huh, I wonder if Interlibrary Loan will work for books like that.)
Check out Politics, Odors and Soap by Nicholas Kristof, over at the New York Times. He writes a very enthusiastic little review of yet another book on the intersection of cognition and politics. No big surprise, it's by Jonathan Haidt, who's doing the pioneering research into how the brains of liberals and conservatives are wired in fundamentally different ways. Oh, also see the review in the Wall St. Journal, Conflicting Moralities. The longer, "official" New York Times review is at Why Won’t They Listen?, and explores the book in more detail.
Okay, yeah, this has to go on the to-be-read shelf. And the over-stuffed cognition shelf. Hey, at least I was reading Kahneman before he won that Nobel Prize, before he got really popular. But I have to admit I never actually finished his Judgment under Uncertainty: Heuristics and Biases — it was due back at the library when I was only halfway through. That is a slow, engrossing grind of an academic tome, though.
All the reviews have been glowing. Kahneman is golden, of course — he's ascended into the pantheon of the intelligentsia's demigods. The first one I read was from The Economist, then there was the one from the New York Times, and then I caught the one in The Wilson Quarterly, but what finally made this really part of the zeitgeist was when I came across a review in the fluffy “Paper” magazine (I was sitting in a coffeeshop waiting for some friends to finish a boardgame and picked up a free copy. More pop culture snuck into my brain in that twenty minutes than I permitted during the balance of 2011.)
Well, I’ll probably never get around to reading this one. I’ve read quite a few PopCog books, and don’t see any immediate evidence that this one will add anything fundamentally new. But it does seem like a good selection to point towards for someone new to the topic.
Quite a few months ago I learned the term “decision fatigue,” and then I noticed it in action a few days later. I play boardgames quite often, and prefer strategic games. I was in the middle of a tough game, playing in a coffee shop, and during a break I ordered a slice of cake for a snack. Which is strange, because I’m usually very, very good at not going for those sweet treats. It immediately occurred to me that this was an instance of this new-fangled concept.
Even though I’ve read quite a few PopCog books, I haven’t hit one yet that details it, but as I understand, the idea is simply that the brain has a limited amount of activity to allocate between different tasks. If the highest priority is thinking hard about one’s next move in Hansa Teutonica, then the subconscious motivation to avoid temptation will receive less activity, and one is more likely to indulge.
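If I had to sketch that intuition in code, it might look something like this (a crude toy model of my own gloss on decision fatigue, not anything from these books; the budget and threshold numbers are invented):

```python
# A crude toy model (mine, not from any of these books): a fixed
# cognitive budget shared between the task at hand and self-control.
COGNITIVE_BUDGET = 10.0
TEMPTATION_THRESHOLD = 4.0   # self-control needed to refuse the cake

def resists_temptation(effort_on_game):
    """Self-control gets whatever the game doesn't consume."""
    leftover = COGNITIVE_BUDGET - effort_on_game
    return leftover >= TEMPTATION_THRESHOLD

print(resists_temptation(2.0))  # casual game -> True: cake refused
print(resists_temptation(8.0))  # tough strategic game -> False: cake ordered
```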
This is one of the many complexities that affect our “willpower,” a distinctly old-fashioned term that is getting some well-deserved scrutiny.
This new book, Willpower: Rediscovering the Greatest Human Strength, seems well conceived. It’s written by a top social psychologist, Roy F. Baumeister, along with New York Times science writer John Tierney. I’m a bit frustrated that I still haven’t gotten around to studying the previous Baumeister book on my to-be-read shelf, Evil: Inside Human Violence and Cruelty.