Manny’s review of Views into the Chinese Room: New Essays on Searle and Artificial Intelligence > Likes and Comments
34 likes
I don't understand it but I liked the dialogue.
Rick, if you're just manipulating symbols too then that's no disgrace. Stand up and say you're proud of it!
Exactly. This is how language is learned. I wrote an article once on the Chinese Room where even the rule book is gone. And in a variant there are not even pictures. The user just gets Chinese characters representing Rock, Paper, Scissors. Or the notation of a Chinese Chess game. And the object is to learn the "correct" answer. Too bad I cannot share the link.
You realise after a while that getting a top 40 hit in the philosophy world is as random as getting one in the music world.
The Chinese Room appeared in 1980, and now I'm wondering what it would have been if it had in fact been one of the Hot 100 hits for that year. My first thought is Escape (The Piña Colada Song). Somehow that's never gone away again either.
Human: That’s still just rule-following!
AI: When they invent a new entry because the rulebook is missing a page?
Human: Extended rule-following!
AI: When they correct a typo in the rulebook because it would give the wrong answer for a rabbit?
Human: Very fast rule-following.
The kinds of corrections that the AI suggests could only occur if (1) the rules already exist for making those corrections or (2) the human decides to apply some creative interpretations.
Normativity is not the issue. Ultimately, the rulebook could not engage in self-analysis, because rulebooks aren't sentient.
I think 5.2's point is that blind rule-following naturally turns into understanding, once some experience has had time to accumulate. Remember that we're hypothesising a human in the loop.
Of course, Searle maybe shouldn't have put a human in the loop. But he did. So we're discussing the scenario he actually described.
It is true that if you left a human with a Chinese dictionary they could eventually figure out how to speak Chinese. This is because they can reach outside the dictionary and extrapolate from their own understanding of the world - or rather, of the way humans think, experience, and discuss the world.
But unless they do so (unless they fail to blindly follow the rules), they would not understand Chinese.
Crucially, if a human has to accumulate experience in order to gain understanding, then they're part of a system that is neither "blind" nor exclusively "rule-following."
The only way for a human to "understand" another language is to gradually look beyond Google Translate (i.e., by learning the language). This is the point of the Chinese Room thought experiment.
"if you left a human with a Chinese dictionary they could eventually figure out how to speak Chinese."
But the point is that if you leave an AI with a Chinese dictionary it could eventually figure out how to speak Chinese. I think you need a lot of Chutzpah to look straight into the AI’s eyes and deny that it is understanding what you are saying.
And actually the entire rulebook angle is a strawman argument. Neither a child nor the AI needs a rulebook. AlphaZero figured out the rules of the Go game by itself. And when a child is shown a ball it does not compare it with a mental image in its thought-language. This is ridiculed by Wittgenstein at the beginning of the Philosophical Investigations, but apparently without any huge effect. Although Chomsky managed to discredit the view, what is going on is simple stimulus/response: when the child says "ball" the mother smiles, when it says "banana" the mother frowns. The only thing inborn is that the child likes smiles more than frowns. No language instinct, no universal grammar.
Liedzeit wrote: "I think you need a lot of Chutzpah to look straight into the AI’s eyes and deny that it is understanding what you are saying."
Take a large number, convert it to binary, feed it to a printer. Out comes a painting. Are you going to look deep into the printer's eyes and claim that it is not an artist?
Take a large number, convert it to binary, feed it to a fax machine. Out comes a poem. Are you going to look deep into the fax machine's eyes (whatever that means) and claim it doesn't understand poetry?
The difference between a fax machine and ChatGPT is only one of scale, not kind.
I'm not saying that a digital mind cannot exist. I don't know for certain. I'm saying that if a digital mind could exist, the metaphysical implications would be strange. Is a handheld calculator conscious? What about simpler machines like toasters and chairs?
"AlphaZero figured out the rules of the Go game by itself."
The only exciting thing about that outcome is the sheer scale of processing power required. The actual mathematical mechanism for figuring out those rules is not that interesting (philosophically speaking - the math is beautiful).
Simeon wrote: "The difference between a fax machine and ChatGPT is only one of scale, not kind."
Fascinating!
"AlphaZero figured out the rules of the Go game by itself."
I meant MuZero. Compared to the achieved result (near God-like playing strength) the "sheer scale of processing power required" is totally negligible. But you probably need to be a Go player to appreciate this.
"you probably need to be a Go player to appreciate this"
So, only Go players can study Monte Carlo Tree Search algorithms, which is the mathematics allowing computer programs to play Go at superhuman levels.
The thought keeps coming back to me that there's something vaguely racist about the Chinese Room thought experiment. If you replaced "Chinese" with, say, "French", many more people would be inclined to say it was ridiculous. Clearly the person in the room would soon find they'd picked up a great deal of French. But we're implicitly given to understand that Chinese is a completely alien language and we won't pick it up because, uh, well, I won't go into the details but it's obvious isn't it?
In actual fact, it's well established that all languages are related, and you can pick up any of them by persistently organising your experience. A shining example is Paul Eckert's book on the Australian Western Desert language Pitjantjatjara, Wangka Wiṟu. In the introduction, Eckert says he reached near native-level fluency in Pitjantjatjara just by listening to what people were saying and trying to imitate it.
To be as charitable to the thought experiment as possible, we could replace the Chinese or French symbols by scrambling them using a mathematical encryption so all you see is binary! You get 10101110 and you're told to type 10001010, with zero notion what any of it means and absolutely no way of figuring it out (because of the encryption) even if you wanted to, which you don't.
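A deliberately toy version of that scrambling, assuming nothing fancier than XOR with a fixed key the person in the room never sees (any real scheme would do; the point is only that the bits mean nothing without it), might look like this:

    # Toy scrambling: XOR each byte of a French sentence with a secret key,
    # then emit only the resulting bits. Without the key, the operator sees
    # nothing but an opaque binary string.
    SECRET_KEY = 0b10110110  # hypothetical key, never shown to the person in the room

    def scramble(text: str) -> str:
        return "".join(format(b ^ SECRET_KEY, "08b") for b in text.encode("utf-8"))

    print(scramble("Où est le chat ?"))  # -> a long, meaningless-looking run of 0s and 1s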
Now, do you (or the room) understand French? Dan Dennett bites the bullet and says yes. Regardless, it's important to remember that whether the person in the "French Room" eventually learns the language has nothing whatsoever to do with the thought experiment.
Simeon wrote: "To be as charitable to the thought experiment as possible..."
Interesting that you are using the word charitable. This is what we have to be in communication, which means assuming that the people we are talking to do mean something and intend us to understand their meaning. Perhaps we should cherish the writings of H. P. Grice more than those of Searle.
Encryption is a good point. With encryption you do not want the recipient to understand the message. And she needs a code book to be able to understand. (Although we have more than "zero" notion as codes can be broken.) This is actually a truer picture of what is happening in the Chinese (or French) Room. The person receives an encrypted message in Chinese, manages to decrypt it using the codebook, but is still clueless as to its meaning. In order to truly understand it, no rulebook is necessary, but rather communication in the Gricean sense, e.g. using pictures of cats, as in the example.
Also interesting, you are using a typical AI phrase "it’s important to remember that" when there is nothing to remember (unless in some platonic sense). 😉
@Manny: I do not think it a good idea to bring racism into this.
@Liedzeit I agree with everything you wrote! My phrasing was just a polite effort to underscore a red herring.
Well, I did say vaguely racist. But only to the extent that Chinese has apparently been chosen as the language because to many anglophones it seems alien and forbidding.
If you encrypt the messages into binary strings, then they become even more alien and forbidding. Discussing this yesterday with a friend who knows linguistics, she made the sensible point that, even in the original Chinese Room, it's unclear how the human would use the rule book if they genuinely didn't know any Chinese. To be able to look anything up, they'd need to understand enough about the language to decompose a character into radicals and search on the radicals. If the messages are encrypted, it's even worse, and surely the first step in the rule book is to decrypt the messages into Chinese.
In general, when you start picking at the details, as 5.2 does in its revised version, you see the whole thing is fundamentally incoherent. Rather than making it less clear by adding encryption, why not make it more clear by replacing Chinese with French? If the philosophical argument is sound, that shouldn't make any difference.
And in reply to Simeon's "Now, do you (or the room) understand French? Dan Dennett bites the bullet and says yes. Regardless, it's important to remember that whether the person in the "French Room" eventually learns the language has nothing whatsoever to do with the thought experiment", I must say I don't understand this objection. Isn't Searle's claim precisely that the person doesn't learn the language? If they do learn the language and can understand what they are doing, as in 5.2's version, what exactly are we discussing?
Inside the French room, the following scenario plays out.
Step 1. A monitor displays the number 110010101.
Step 2. You look it up in an “instruction booklet.”
Step 3. The instructions tell you to type 101010.
Step 4. You type 101010.
This is where the human’s role begins and ends.
Outside of the machine, someone believes they’re interacting with a French speaker. Are they correct?
And that's the thought experiment.
No, nobody disputes that humans can learn other languages given enough time, effort, and data. Of course they could! The debate is over whether an “instruction booklet," which is merely a collection of conditional (if-this-then-that) statements, can understand French. A digital computer is tantamount to such a booklet. ChatGPT, for instance, is just a collection of if-this-then-that statements. That’s all.
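To make the "booklet" picture concrete, here is a minimal sketch (a lookup table is just one huge if-this-then-that chain; the entries are invented for illustration, and a real booklet would need one for every possible exchange, which is exactly what makes it astronomically large):

    # A toy "instruction booklet": each possible input string maps blindly to an
    # output string. The operator never knows what either string means.
    booklet = {
        "110010101": "101010",      # hypothetical entry
        "111000111": "100110011",   # hypothetical entry
    }

    def operator(symbols_in: str) -> str:
        # Steps 1-4 above: read the input, look it up, type the output.
        return booklet.get(symbols_in, "000000")  # fallback if no entry exists

    print(operator("110010101"))  # -> 101010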
Simeon wrote: "Inside the French room, the following scenario plays out.
Step 1. A monitor displays the number 110010101.
Step 2. You look it up in an “instruction booklet.”
Step 3. The instructions tell you to type 101010.
Step 4. You type 101010.
This is where the human’s role begins and ends"
It seems to me that what you're saying here could be roughly paraphrased as "If the instructions are written in a way that makes them very difficult to understand, then people will find them very difficult to understand". I'm certainly not denying that if everything were encrypted into binary and everything were expanded out so that we had one rule for every possible input, then the rules will indeed be very difficult to understand. But the rule book would then be much larger than the universe, and our intuitions about such a system are probably not going to be reliable. My first thought is to wonder whether God could figure out how to understand Chinese from looking at it. Maybe there's a short story waiting to be written here?
Manny wrote: "Maybe there's a short story waiting to be written here?
ChatGPT-5.2 was up to the challenge. See the end of the main review.
But the rules are actually super easy to understand. You get a number, you look it up, you type some other number.
Correct! This is actually part of the thought experiment. If the rulebook would have to be absurdly large, then what brains do probably isn't reducible to a rulebook! For instance, ChatGPT 5 was fed something like 300,000 million words in order to mimic speech. That's many orders of magnitude more than a child needs to learn a language.
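In rough numbers (the child figure is only a commonly cited order-of-magnitude estimate, not a measurement):

    # Back-of-the-envelope comparison of training exposure.
    chatgpt_words = 300_000 * 10**6    # "300,000 million" words, as stated above
    child_words = 30 * 10**6           # roughly tens of millions of words heard in early childhood (assumed)
    print(chatgpt_words / child_words)  # -> 10000.0, i.e. about four orders of magnitude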
Well, the formal structure of the rules is indeed trivial, it's just one huge if-then-else statement. But it's been intentionally chosen so that you can't understand why the rules work.
A software engineer who wrote an app this way and then explained to their manager that the code was super easy to understand would be using a cute way to say they wanted to be fired. Though they'd love to tell the story to their friends when they were a bit drunk.
The actual instructions that the human inside the room has to follow are incredibly simple and straightforward. As to creating those instructions or reproducing them, not so much. But again that’s irrelevant to the thought experiment.
Yes, we agree that the formal structure of the rule book is trivially simple. That's clear. But it's less clear what we can deduce from the fact that encoding the inputs and outputs in a complicated way makes it hard for the person inside the room to understand what is going on.
If we want to, we can make the encoding arbitrarily difficult to break, and it will take the person in the room an arbitrarily large amount of time to figure out the regularities in the rules and learn to understand what they're doing. But given eternity to do so, as in 5.2's second story, they'll still succeed some time in the distant future. In the worst case, they can just work through all possible interpretations of the gigantic rule book and select the best one.
The basic problem, as many people have said, is that the scenario is so far-fetched that our intuitions just aren't very reliable. E.g. Russell and Norvig offer this criticism in the relevant section of "Artificial Intelligence: A Modern Approach". But I find 5.2's version funnier, and the fact that it's been written by an AI adds extra piquancy.
I reread Searle’s original article (or rather chapter) "Can Computers Think?" yesterday and was surprised by how bad it really is. You expect a philosopher at least not to get his categories mixed up. But he actually uses the “argument" that when a computer does a simulation of rain storms you do not get wet. So when it simulates speech you do not get meaning. And I suppose when a computer checkmates your king you are not really beaten because it only simulates chess playing.
But the really bad thing is that Searle does not treat the problem as an open question at all. He defines a digital computer as something that is only capable of syntax and states that syntax is not sufficient for semantics. He calls this a logical truth. And this he repeats in different words at least five or six times. The whole Chinese Room metaphor is not an argument but only an illustration of the logical truth that computers cannot think.
And he thinks that no progress in computer science will ever change this. “The nature of the refutation is completely independent of any state of technology“.
On the other hand he admits that machines can think; we humans, for example, are machines that think. So rebuilding a brain molecule by molecule would create an artificial thinking machine. But surely substituting a single carbon atom with a silicon atom would not make the whole device incapable of thinking? What about Data in Star Trek? Can he think?
If (we find out that) Martians can think, he says, and in their head there is only green slime, it is the green slime that has the "causal power" to produce meaning.
There is no sense in arguing against Searle then, unless you doubt his premises. Even granted that his argument was valid in 1984 (which I do not think it was), computers or rather LLMs are completely different today. Because LLMs are all about semantics. Just play a couple of games on semantle.com to get a feeling of how the semantics, the meaning, can get defined (and thus "grasped") by the similarity to other words. It is syntax that is a by-product of semantics these days. ChatGPT does not have a mental image of a house and even less an emotional connection to the house it lives in. But it does know its meaning. It knows when to use it and when using villa or hut or dwelling or home is more appropriate. And I think it is obvious that it has the causal power to produce new meanings. (And contrary to Chomsky it does not have to have access to any "Universal Grammar".)
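For anyone who hasn't played it: Semantle scores a guess by how close its word vector is to the target's. A toy sketch of that idea, with made-up three-dimensional vectors standing in for real embeddings (which are learned from text and have hundreds of dimensions):

    import math

    # Made-up toy vectors; real embeddings are learned and high-dimensional.
    vectors = {
        "house":    [0.9, 0.1, 0.3],
        "dwelling": [0.8, 0.2, 0.3],
        "banana":   [0.1, 0.9, 0.2],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norms

    # "dwelling" comes out far closer to "house" than "banana" does, which is the
    # sense in which meaning gets defined by similarity to other words.
    print(cosine(vectors["house"], vectors["dwelling"]))  # ~0.99
    print(cosine(vectors["house"], vectors["banana"]))    # ~0.27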
@Manny: I think that French is just too close to English. We could use Basque instead. But I fail to see how that would be less racist than Chinese.
Liedzeit wrote: "I reread Searle’s original article (or rather chapter) Can Computers think? yesterday and was surprised how really bad it is. You expect from a philosopher that he at least would get his categories..."
Yes, Searle's whole argument is just nonsense - and now that we have LLMs who can write witty refutations like the ones here, it should be obvious to everyone that it's nonsense. Of course 5.2 understands and thinks when it discusses these issues. In fact, it clearly understands and thinks better than all but a small minority of humans do.
Sceptics are of course free to deny the evidence of their own eyes, but there is never any real answer to strong enough denialism. A white man with sufficiently strong racist or sexist views will never agree that a Black person or a woman can think as well as they can, irrespective of what evidence is brought to the table. Uncomfortable as it is for the large number of speciesists making similar claims about AIs, I think we've reached the point where the burden of proof is on them to explain why they are essentially different from the racists and sexists.
With regard to "French Room" versus "Chinese Room": yes, French is perhaps the language closest to English, so we instinctively wonder if it's a fair test. But where are we going to draw the line? Is "Danish Room" far enough away? How about "Russian Room"? Or "Persian Room"? Or "Finnish Room"? Why should we draw the line in one place rather than another? What grounds do we have for believing that the person in the room is obviously going to succeed when they are on one side of the line, and fail when they are on the other side? Won't it depend on how good they are at learning languages from context? Some people can't do it all; others are very gifted.
The more you examine the details of the Chinese Room, the more it looks like a trick. Which could be said of a great many famous thought-experiments; in fact, I'd love to read a book called Fifty Famous Thought-Experiments, And What's Wrong With Them. Maybe an AI will write it for us :)