Manny’s review of Views into the Chinese Room: New Essays on Searle and Artificial Intelligence > Likes and Comments

15 likes · Comments Showing 1-12 of 12

message 1: by Rick (new)

Rick Slane I don't understand it but I liked the dialogue.


message 2: by Manny (new)

Manny Rick, if you're just manipulating symbols too then that's no disgrace. Stand up and say you're proud of it!


message 3: by Liedzeit (new)

Liedzeit Exactly. This is how language is learned. I wrote an article once on the Chinese Room where even the rule book is gone. And in a variant there are not even pictures. The user just gets Chinese characters representing Rock, Paper, Scissors. Or the notation of a Chinese Chess game. And the object is to learn the "correct" answer. Too bad I cannot share the link.


message 4: by Manny (new)

Manny You realise after a while that getting a top 40 hit in the philosophy world is as random as getting one in the music world.


message 5: by Manny (new)

Manny The Chinese Room appeared in 1980, and now I'm wondering what it would have been if it had in fact been one of the Hot 100 hits for that year. My first thought is Escape (The Piña Colada Song). Somehow that's never gone away again either.


message 6: by Simeon (last edited Jan 03, 2026 12:13PM) (new)

Simeon


Human: That’s still just rule-following!

AI: When they invent a new entry because the rulebook is missing a page?

Human: Extended rule-following!

AI: When they correct a typo in the rulebook because it would give the wrong answer for a rabbit?

Human: Very fast rule-following.



The kinds of corrections that the AI suggests could only occur if (1) the rules already exist for making those corrections or (2) the human decides to apply some creative interpretations.

Normativity is not the issue. Ultimately, the rulebook could not engage in self-analysis, because rulebooks aren't sentient.


message 7: by Manny (new)

Manny I think 5.2's point is that blind rule-following naturally turns into understanding, once some experience has had time to accumulate. Remember that we're hypothesising a human in the loop.

Of course, Searle maybe shouldn't have put a human in the loop. But he did. So we're discussing the scenario he actually described.


message 8: by Simeon (last edited Jan 05, 2026 05:14PM) (new)

Simeon It is true that if you left a human with a Chinese dictionary they could eventually figure out how to speak Chinese. This is because they can reach outside the dictionary and extrapolate from their own understanding of the world - or rather, of the way humans think, experience, and discuss the world.

But unless they do so (unless they fail to blindly follow the rules), they would not understand Chinese.

Manny wrote: I think 5.2's point is that blind rule-following naturally turns into understanding, once some experience has had time to accumulate.


Crucially, if a human has to accumulate experience in order to gain understanding, then they're part of a system that is neither "blind" nor exclusively "rule-following."

The only way for a human to "understand" another language is to gradually look beyond Google Translate (i.e., by learning the language). This is the point of the Chinese Room thought experiment.


message 9: by Liedzeit (last edited Jan 05, 2026 02:03AM) (new)

Liedzeit Simeon wrote: "if you left a human with a Chinese dictionary they could eventually figure out how to speak Chinese."

But the point is that if you leave an AI with a Chinese dictionary they could eventually figure out how to speak Chinese. I think you need a lot of Chutzpah to look straight into the AI’s eyes and deny that it is understanding what you are saying.
And actually the entire rulebook angle is a strawman argument. Neither a child nor the AI needs a rulebook. AlphaZero figured out the rules of the Go game by itself. And when a child is shown a ball, it does not compare it with a mental image in its thought-language. This is ridiculed by Wittgenstein at the beginning of the Philosophical Investigations, but apparently without any huge effect. Although Chomsky managed to discredit the view, what is going on is simple stimulus/response: when the child says "ball" the mother smiles; when it says "banana" the mother frowns. The only thing inborn is that the child likes smiles more than frowns. No language instinct, no universal grammar.


message 10: by Simeon (last edited Jan 05, 2026 05:16PM) (new)

Simeon Liedzeit wrote: I think you need a lot of Chutzpah to look straight into the AI’s eyes and deny that it is understanding what you are saying.

Take a large number, convert it to binary, feed it to a printer. Out comes a painting. Are you going to look deep into the printer's eyes and claim that it is not an artist?

Take a large number, convert it to binary, feed it to a fax machine. Out comes a poem. Are you going to look deep into the fax machine's eyes (whatever that means) and claim it doesn't understand poetry?

The difference between a fax machine and ChatGPT is only one of scale, not kind.

I'm not saying that a digital mind cannot exist. I don't know for certain. I'm saying that if a digital mind could exist, the metaphysical implications would be strange. Is a handheld calculator conscious? What about simpler machines like toasters and chairs?

"AlphaZero figured out the rules of the Go game by itself."

The only exciting thing about that outcome is the sheer scale of processing power required. The actual mathematical mechanism for figuring out those rules is not that interesting (philosophically speaking - the math is beautiful).


message 11: by Liedzeit (new)

Liedzeit Simeon wrote: "The difference between a fax machine and ChatGPT is only one of scale, not kind."

Fascinating!

"AlphaZero figured out the rules of the Go game by itself."

I meant MuZero. Compared to the result achieved (near God-like playing strength), the "sheer scale of processing power required" is completely negligible. But you probably need to be a Go player to appreciate this.


message 12: by Simeon (last edited Jan 05, 2026 05:05PM) (new)

Simeon "you probably need to be a Go player to appreciate this"

So, only Go players can study Monte Carlo Tree Search, the mathematics that allows computer programs to play Go at superhuman levels.
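[For readers who haven't met Monte Carlo Tree Search: here is a minimal sketch of plain MCTS (UCB1 selection, expansion, random rollout, backpropagation) on a toy Nim-style game, where players alternately take 1 or 2 stones and whoever takes the last stone wins. The toy game and all names here are invented for illustration; this is not Go, and not the neural-network-guided variant that AlphaZero/MuZero actually use.]

```python
import math
import random

def moves(n):
    """Legal moves: take 1 or 2 stones (never more than remain)."""
    return [m for m in (1, 2) if m <= n]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones          # stones remaining
        self.player = player          # whose turn it is (0 or 1)
        self.parent = parent
        self.move = move              # the move that led to this node
        self.children = []
        self.untried = moves(stones)  # moves not yet expanded
        self.wins = 0                 # wins for the player who moved INTO this node
        self.visits = 0

def rollout(stones, player):
    """Play random moves to the end; return the winner (taker of the last stone)."""
    while True:
        stones -= random.choice(moves(stones))
        if stones == 0:
            return player
        player = 1 - player

def mcts(stones, player, iters=2000):
    root = Node(stones, player)
    for _ in range(iters):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = max(node.children,
                       key=lambda c: c.wins / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
        # 2. Expansion: add one untried child, if any.
        if node.untried:
            m = node.untried.pop()
            node.children.append(Node(node.stones - m, 1 - node.player, node, m))
            node = node.children[-1]
        # 3. Simulation: random playout (or read off the terminal result).
        if node.stones == 0:
            winner = 1 - node.player  # the previous mover took the last stone
        else:
            winner = rollout(node.stones, node.player)
        # 4. Backpropagation: credit each node from its mover's perspective.
        while node:
            node.visits += 1
            if winner != node.player:  # the player who moved into this node won
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda c: c.visits).move
```

From 4 stones the only winning move is to take 1 (leaving a multiple of 3 for the opponent), and with a few thousand iterations the search reliably converges on it. MuZero's real contribution sits on top of this skeleton: learned networks replace the random rollout and guide selection, and the game's dynamics are themselves learned rather than given.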

