
Challenging the Chinese Room Argument

  • Writer: Pranav Jain
  • Aug 6, 2020
  • 4 min read

Updated: Aug 6, 2020

Having an interest in both medicine and computer science, I have often pondered the possibility of computers developing human-level intelligence. For most of my life, this idea remained relatively intangible, as I had no starting point from which to draw conclusions. That was until I learned about the Chinese Room argument the summer before my freshman year at Texas A&M. This video by BBC Studios gives a good description of John Searle's argument:

Searle, in his paper Minds, Brains, and Programs, describes the intention behind his argument below:


In short, the Chinese Room is Searle's argument that machines cannot understand anything. The thought experiment shows that a human, who is capable of understanding, cannot understand while operating within the framework of a computer. Consequently, if a human in the framework of a computer cannot understand, how could a computer itself?


With his argument, John Searle provided me with an important foundation from which to begin drawing conclusions about the possibility of computers developing human-level intelligence. Slightly disappointed by the notion that computers would never be able to understand, I decided to try to develop a Chinese Room reply that even Searle himself might accept.


I began by considering Searle's application of the Chinese Room against the Robot Reply, discussed in Minds, Brains, and Programs. Searle states:

To make this modified Chinese Room more tangible, I decided to model it in this way:

This diagram represents the following. A Chinese speaker decides to use a symbol to convey some information state. This symbol can be essentially any medium a Chinese speaker could use to communicate: written Chinese characters, spoken Chinese, a photograph, an audio recording, etc. The symbol is then converted into some unfamiliar syntax and sent to the non-Chinese speaker within the modified Chinese Room. Lastly, the non-Chinese speaker uses their instruction manual, which contains no translations, to output some response.


For example, the Chinese speaker may use the symbol "Picture 1" to convey the information state "Semantic Data 4". The symbol "Picture 1" is then converted to "Syntax 4", before being sent to the non-Chinese speaker. The non-Chinese speaker then uses their instruction manual to output "Response E".
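The pipeline described above can be sketched as a few dictionaries. This is only an illustrative model of the modified Chinese Room as I have described it, with placeholder mappings invented for the example:

```python
# The Chinese speaker's mind: each symbol conveys an information state.
symbol_to_semantics = {
    "Picture 1": "Semantic Data 4",
    "Picture 2": "Semantic Data 4",  # two symbols can share one meaning
    "Picture 3": "Semantic Data 7",
}

# Conversion step: each symbol becomes an unfamiliar syntactic token.
symbol_to_syntax = {
    "Picture 1": "Syntax 4",
    "Picture 2": "Syntax 5",
    "Picture 3": "Syntax 6",
}

# The non-Chinese speaker's rule book: pure syntax in, response out,
# with no translations anywhere in the book.
rule_book = {
    "Syntax 4": "Response E",
    "Syntax 5": "Response F",
    "Syntax 6": "Response G",
}

def chinese_room(symbol):
    """Trace one exchange: symbol -> syntax -> rule-book response."""
    syntax = symbol_to_syntax[symbol]
    return rule_book[syntax]

print(chinese_room("Picture 1"))  # Response E
```

Note that the non-Chinese speaker only ever sees `rule_book`; the `symbol_to_semantics` mapping exists solely in the Chinese speaker's mind.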


I agree with Searle that in this scenario, the non-Chinese speaker will not develop any understanding of the semantic relationships underlying the symbols. However, this is because I make an arbitrary but intuitive assumption.


I assume that if the non-Chinese speaker's rule book guides them to output the same response for 2 different syntax inputs, they will recognize that the 2 inputs must be related in some way. For example, if they receive either "Syntax 3" or "Syntax 7", they will output "Response D". Consequently, in their mind "Syntax 3" and "Syntax 7" are related.
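This assumption amounts to inverting the rule book: grouping syntax inputs by the response they produce. A minimal sketch, using placeholder entries that mirror the "Syntax 3"/"Syntax 7" example above:

```python
from collections import defaultdict

# Placeholder rule book in which two syntax inputs share a response.
rule_book = {
    "Syntax 3": "Response D",
    "Syntax 7": "Response D",
    "Syntax 5": "Response B",
}

def inferred_groups(rules):
    """Group syntax inputs by response; keep groups with more than one
    member, i.e. the inputs the speaker would recognize as related."""
    groups = defaultdict(set)
    for syntax, response in rules.items():
        groups[response].add(syntax)
    return {r: s for r, s in groups.items() if len(s) > 1}

groups = inferred_groups(rule_book)
# only "Response D" is reached from more than one input, so only
# "Syntax 3" and "Syntax 7" are recognized as related
```

The speaker learns nothing about *what* the inputs mean; they only learn the equivalence classes the rule book happens to induce.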


Because the rule book's connections are not correlated with the connections in the Chinese speaker's mind, I agree with Searle that, at least in the modified Chinese Room, there is no way for the non-Chinese speaker to know the semantic relationships of the symbols.


While looking at the above diagram, I realized that one important change to the non-Chinese speaker's rule book could allow them to know the semantic relationships of the symbols. I imagined changing the connections in the non-Chinese speaker's rule book so that they mirrored the connections within the Chinese speaker's mind. This new diagram is pictured below:

With this new rule book, the non-Chinese speaker would indirectly know which symbols are related to one another. For example, "Picture 1" and "Picture 2" are semantically related in the Chinese speaker's mind, as they both convey the information state "Semantic Data 4". These symbols are converted to "Syntax 4" and "Syntax 5" before being sent to the non-Chinese speaker. The non-Chinese speaker knows that these 2 syntax inputs are related because their rule book guides them to output the same action ("Response 4") for each.
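This modified rule book can be sketched by constructing the responses *from* the semantic mapping, so that syntax inputs whose source symbols share a meaning also share a response. The mappings and the `mirrored_rule_book` helper are illustrative placeholders, not part of the original argument:

```python
# The Chinese speaker's mind: "Picture 1" and "Picture 2" share a meaning.
symbol_to_semantics = {
    "Picture 1": "Semantic Data 4",
    "Picture 2": "Semantic Data 4",
    "Picture 3": "Semantic Data 7",
}
symbol_to_syntax = {
    "Picture 1": "Syntax 4",
    "Picture 2": "Syntax 5",
    "Picture 3": "Syntax 6",
}

def mirrored_rule_book(semantics, syntax):
    """Assign one response per distinct meaning, so syntax inputs from
    synonymous symbols end up mapped to the same response."""
    response_for_meaning = {}
    book = {}
    for symbol, meaning in semantics.items():
        response_for_meaning.setdefault(
            meaning, f"Response {len(response_for_meaning) + 1}")
        book[syntax[symbol]] = response_for_meaning[meaning]
    return book

book = mirrored_rule_book(symbol_to_semantics, symbol_to_syntax)
print(book["Syntax 4"] == book["Syntax 5"])  # True: shared meaning, shared response
```

The non-Chinese speaker still never sees `symbol_to_semantics`, but the structure of their rule book now encodes it.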


There are two potential problems with the idea discussed above. First, although the non-Chinese speaker perceives "Syntax 4" and "Syntax 5", they do not perceive "Picture 1" and "Picture 2". Second, although the non-Chinese speaker knows that "Syntax 4" and "Syntax 5" are related, they do not know that the reason they are related is that "Picture 1" and "Picture 2" are semantically related.


The first potential problem can be resolved by using the inverted spectrum argument. The first 3 minutes of this video by Vsauce provide a great description of this position:

In short, the inverted spectrum position argues that 2 individuals who perceive the same color relationships could nonetheless be having vastly different experiences. The reason I bring up this position is to demonstrate that just because the non-Chinese speaker has a different experience of "Picture 1" and "Picture 2" (as "Syntax 4" and "Syntax 5") does not mean that they do not know that the symbols are related. The below diagram shows the similarities between the Chinese Room and the inverted spectrum argument:

The second potential problem can be partly resolved by applying the inverted spectrum argument again. You can apply the argument, as shown above, to demonstrate that if the non-Chinese speaker knows that "Syntax 4" and "Syntax 5" are related, they know that "Picture 1" and "Picture 2" are related. The problem can be fully resolved by recognizing that labeling this relationship "semantic" is unimportant, since that is just a matter of notation.


In conclusion, I hope I have provided adequate justification for why I believe the Chinese Room argument is incorrect. Of course, I am not studying philosophy at my university, so there are likely many flaws in the logic I have given here. If you have any advice, you can provide it in the comments below. Personally, I am excited at the prospect of computers exhibiting human-level intelligence, whether or not they have true understanding.


©2020 by Pranav's Free Time