Manny’s review of Views into the Chinese Room: New Essays on Searle and Artificial Intelligence > Likes and Comments

20 likes

message 1: by Rick (new)

Rick Slane I don't understand it, but I liked the dialogue.


message 2: by Manny (new)

Manny Rick, if you're just manipulating symbols too then that's no disgrace. Stand up and say you're proud of it!


message 3: by Liedzeit (new)

Liedzeit Liedzeit Exactly. This is how language is learned. I wrote an article once on the Chinese Room where even the rule book is gone. And in a variant there are not even pictures. The user just gets Chinese characters representing Rock, Paper, Scissors. Or the notation of a Chinese Chess game. And the object is to learn the "correct" answer. Too bad I cannot share the link.


message 4: by Manny (new)

Manny You realise after a while that getting a top 40 hit in the philosophy world is as random as getting one in the music world.


message 5: by Manny (new)

Manny The Chinese Room appeared in 1980, and now I'm wondering what it would have been if it had in fact been one of the Hot 100 hits for that year. My first thought is Escape (The Piña Colada Song). Somehow that's never gone away either.


message 6: by Simeon (last edited Jan 03, 2026 12:13PM) (new)

Simeon


Human: That’s still just rule-following!

AI: When they invent a new entry because the rulebook is missing a page?

Human: Extended rule-following!

AI: When they correct a typo in the rulebook because it would give the wrong answer for a rabbit?

Human: Very fast rule-following.



The kinds of corrections that the AI suggests could only occur if (1) the rules already exist for making those corrections or (2) the human decides to apply some creative interpretations.

Normativity is not the issue. Ultimately, the rulebook could not engage in self-analysis, because rulebooks aren't sentient.


message 7: by Manny (new)

Manny I think 5.2's point is that blind rule-following naturally turns into understanding, once some experience has had time to accumulate. Remember that we're hypothesising a human in the loop.

Of course, Searle maybe shouldn't have put a human in the loop. But he did. So we're discussing the scenario he actually described.


message 8: by Simeon (last edited Jan 05, 2026 05:14PM) (new)

Simeon It is true that if you left a human with a Chinese dictionary they could eventually figure out how to speak Chinese. This is because they can reach outside the dictionary and extrapolate from their own understanding of the world - or rather, of the way humans think, experience, and discuss the world.

But unless they do so (unless they fail to blindly follow the rules), they would not understand Chinese.

I think 5.2's point is that blind rule-following naturally turns into understanding, once some experience has had time to accumulate


Crucially, if a human has to accumulate experience in order to gain understanding, then they're part of a system that is neither "blind" nor exclusively "rule-following."

The only way for a human to "understand" another language is to gradually look beyond Google Translate (i.e., by learning the language). This is the point of the Chinese Room thought experiment.


message 9: by Liedzeit (last edited Jan 05, 2026 02:03AM) (new)

Liedzeit Liedzeit Simeon wrote: "if you left a human with a Chinese dictionary they could eventually figure out how to speak Chinese."

But the point is that if you leave an AI with a Chinese dictionary it could eventually figure out how to speak Chinese. I think you need a lot of chutzpah to look straight into the AI’s eyes and deny that it understands what you are saying.
And actually the entire rulebook angle is a strawman argument. Neither a child nor the AI needs a rulebook. AlphaZero figured out the rules of the game of Go by itself. And when a child is shown a ball, it does not compare it with a mental image in its thought-language. This is ridiculed by Wittgenstein at the beginning of the Philosophical Investigations, but apparently without any huge effect. Although Chomsky managed to discredit the view, what is going on is simple stimulus/response: when the child says "ball" the mother smiles; when it says "banana" the mother frowns. The only thing inborn is that the child likes smiles more than frowns. No language instinct, no universal grammar.
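The stimulus/response picture described here can be sketched as a toy learner (all function names and numbers below are illustrative assumptions, not anything from the literature): the only inborn bias is a preference for smiles over frowns.

```python
import random

# Toy stimulus/response learner: the only built-in preference is
# that a smile (reward +1) outranks a frown (reward -1).
# Everything here is an illustrative assumption.

def mother(obj, word):
    """The environment: smile if the word matches the object."""
    return 1 if word == obj else -1

def train(objects, words, episodes=5000, seed=0):
    rng = random.Random(seed)
    # score[(obj, word)] accumulates smiles minus frowns
    score = {(o, w): 0 for o in objects for w in words}
    for _ in range(episodes):
        obj = rng.choice(objects)
        # explore: sometimes babble a random word; otherwise say
        # whichever word has earned the most smiles for this object
        if rng.random() < 0.2:
            word = rng.choice(words)
        else:
            word = max(words, key=lambda w: score[(obj, w)])
        score[(obj, word)] += mother(obj, word)
    # the learned "lexicon": best-scoring word per object
    return {o: max(words, key=lambda w: score[(o, w)]) for o in objects}

lexicon = train(["ball", "banana"], ["ball", "banana"])
print(lexicon)
```

Run enough episodes and the babbling converges on the rewarded word for each object; nothing in the loop involves rules about meaning, only reinforcement.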


message 10: by Simeon (last edited Jan 05, 2026 05:16PM) (new)

Simeon Liedzeit wrote: I think you need a lot of Chutzpah to look straight into the AI’s eyes and deny that it is understanding what you are saying.

Take a large number, convert it to binary, feed it to a printer. Out comes a painting. Are you going to look deep into the printer's eyes and claim that it is not an artist?

Take a large number, convert it to binary, feed it to a fax machine. Out comes a poem. Are you going to look deep into the fax machine's eyes (whatever that means) and claim it doesn't understand poetry?

The difference between a fax machine and ChatGPT is only one of scale, not kind.

I'm not saying that a digital mind cannot exist. I don't know for certain. I'm saying that if a digital mind could exist, the metaphysical implications would be strange. Is a handheld calculator conscious? What about simpler machines like toasters and chairs?

"AlphaZero figured out the rules of the Go game by itself."

The only exciting thing about that outcome is the sheer scale of processing power required. The actual mathematical mechanism for figuring out those rules is not that interesting (philosophically speaking - the math is beautiful).


message 11: by Liedzeit (new)

Liedzeit Liedzeit Simeon wrote: "The difference between a fax machine and ChatGPT is only one of scale, not kind."

Fascinating!

"AlphaZero figured out the rules of the Go game by itself."

I meant MuZero. Compared to the achieved result (near God-like playing strength), the "sheer scale of processing power required" is entirely negligible. But you probably need to be a Go player to appreciate this.


message 12: by Simeon (last edited Jan 05, 2026 05:05PM) (new)

Simeon "you probably need to be a Go player to appreciate this"

So, only Go players can study Monte Carlo Tree Search, the mathematics that allows computer programs to play Go at superhuman levels.


message 13: by Manny (last edited Jan 07, 2026 02:39PM) (new)

Manny The thought keeps coming back to me that there's something vaguely racist about the Chinese Room thought experiment. If you replaced "Chinese" with, say, "French", many more people would be inclined to say it was ridiculous. Clearly the person in the room would soon find they'd picked up a great deal of French. But we're implicitly given to understand that Chinese is a completely alien language and we won't pick it up because, uh, well, I won't go into the details but it's obvious, isn't it?

In actual fact, it's well established that all languages are related, and you can pick up any of them by persistently organising your experience. A shining example is Paul Eckert's book on the Australian Western Desert language Pitjantjatjara, Wangka Wiṟu. In the introduction, Eckert says he reached near native-level fluency in Pitjantjatjara just by listening to what people were saying and trying to imitate it.


message 14: by Mommalibrarian (new)

Mommalibrarian Why do humans like smiles more than frowns? Does AI have likes and dislikes?


message 15: by Simeon (last edited Jan 07, 2026 06:40PM) (new)

Simeon To be as charitable to the thought experiment as possible, we could replace the Chinese or French symbols by scrambling them using a mathematical encryption so all you see is binary! You get 10101110 and you're told to type 10001010, with zero notion what any of it means and absolutely no way of figuring it out (because of the encryption) even if you wanted to, which you don't.
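The encrypted-room variant can be made concrete with a toy sketch (the XOR keystream and all names are illustrative assumptions): the operator's rulebook maps incoming bit strings directly to outgoing bit strings, so the French underneath is unrecoverable by design.

```python
import secrets

# Toy version of the encrypted room: the rulebook relates incoming
# ciphertexts straight to outgoing ciphertexts. The keys (and hence
# the French underneath) are never visible to the operator.
# The XOR scheme and all names are illustrative assumptions.

def xor_encrypt(text, key):
    """One-time-pad style XOR: output is an opaque bit string."""
    data = text.encode("utf-8")
    cipher = bytes(b ^ k for b, k in zip(data, key))
    return "".join(f"{byte:08b}" for byte in cipher)

# Outside the room: a French exchange, each side encrypted
# under its own random key.
key_in = secrets.token_bytes(64)
key_out = secrets.token_bytes(64)
question = xor_encrypt("Quelle heure est-il ?", key_in)
answer = xor_encrypt("Il est midi.", key_out)

# Inside the room: the rulebook is just ciphertext -> ciphertext.
rulebook = {question: answer}

def operator(incoming):
    """Blind rule-following: look up bits, emit bits."""
    return rulebook[incoming]

print(operator(question) == answer)
```

The room answers the question correctly, yet the only thing the operator ever handles is opaque binary; breaking the one-time-pad style scrambling is not an option, which is the "no way of figuring it out" condition.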

Now, do you (or the room) understand French? Dan Dennett bites the bullet and says yes. Regardless, it's important to remember that whether the person in the "French Room" eventually learns the language has nothing whatsoever to do with the thought experiment.


message 16: by Liedzeit (new)

Liedzeit Liedzeit Simeon wrote: "To be as charitable to the thought experiment as possible..."
Interesting that you are using the word charitable. This is what we have to be in communication, which means assuming that the people we are talking to do mean something and intend us to understand their meaning. Perhaps we should cherish the writings of H. P. Grice more than those of Searle.

Encryption is a good point. With encryption you do not want the recipient to understand the message. And she needs a code book to be able to understand. (Although we have more than "zero" notion as codes can be broken.) This is actually a truer picture of what is happening in the Chinese (or French) Room. The person receives an encrypted message in Chinese, manages to decrypt it using the codebook, but is still clueless as to its meaning. In order to truly understand it, no rulebook is necessary, but rather communication in the Gricean sense, e.g. using pictures of cats, as in the example.

Also interesting: you are using a typical AI phrase, "it’s important to remember that", when there is nothing to remember (unless in some Platonic sense). 😉

@Manny: I do not think it a good idea to bring racism into this.


message 17: by Simeon (last edited 9 hours, 35 min ago) (new)

Simeon @Liedzeit I agree with everything you wrote! My phrasing was just a polite effort to underscore a red herring.

