Multiple Realizability and the Theory of Functionalism

It has been argued that the mind operates much like the inner workings of a machine: a system that functions according to rules specifying outputs for various inputs. On this generalization, what makes something a particular mental state is not its physical constitution but its function. John Searle, a respected philosopher, objected to this idea of machine functionalism with his Chinese Room argument, suggesting that the formal symbol manipulation that defines computation is neither necessary nor sufficient for understanding and therefore cannot explain mental states. Critics, however, suggest that machine functionalism can be defended against Searle's objections on the grounds that there are different degrees and forms of understanding, and that our intuitions about such cases may be unreliable. This paper will begin by analyzing the theory of machine functionalism; it will then explore Searle's objections to the theory through his Chinese Room argument, consider the defenses offered by other philosophers in the Systems Reply, the Brain Simulator Reply, and the Other Minds Reply, and conclude with Searle's responses to those objections, responses that uphold his original argument.

The theory of functionalism maintains that mental states are functional states characterized by their causal roles. A calculator, for example, is defined by its function of calculating answers to mathematical equations; its classification does not depend on its material constitution. The same applies to mental states insofar as a mental state is defined by its causal relations to sensory inputs, behavioral outputs, and other mental states. An individual counts as being in a particular mental state as long as he or she instantiates the same functional organization. Pain offers a clear example: a sensory input of tissue damage produces the mental state of pain, which in turn produces the behavioral output of avoidance behavior, and anything occupying that causal role counts as being in pain, given how the term is defined. The theory of machine functionalism rests on the same idea, that consciousness can be characterized by inputs, outputs, and causal relations among mental states, but adds that these causal roles can be realized mechanically. Machine functionalism suggests that the brain is a computing machine whose symbol processing gives rise to mental states. This idea was developed through the concept of a Turing machine, which acts according to a table of rules. On this view, the machine tables describing human minds are probabilistic rather than deterministic (for example, it is probable that I will say 'ouch' when I experience pain, but I might not).
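To make the notion of a probabilistic machine table concrete, here is a minimal sketch in Python. The states, inputs, outputs, and probabilities are invented for illustration; machine functionalism specifies no particular table, only that mental states are defined by such transition profiles.

```python
import random

# An illustrative probabilistic "machine table" in the spirit of machine
# functionalism. All states, inputs, outputs, and probabilities here are
# hypothetical choices made for this sketch.
MACHINE_TABLE = {
    # (current_state, input) -> list of (probability, output, next_state)
    ("neutral", "tissue_damage"): [
        (0.8, "say 'ouch'", "pain"),   # probable, but not guaranteed
        (0.2, "stay silent", "pain"),  # same successor state, different output
    ],
    ("pain", "analgesic"): [
        (1.0, "relax", "neutral"),
    ],
}

def step(state, stimulus):
    """Pick an output and successor state according to the table's probabilities."""
    transitions = MACHINE_TABLE[(state, stimulus)]
    roll, cumulative = random.random(), 0.0
    for probability, output, next_state in transitions:
        cumulative += probability
        if roll < cumulative:
            return output, next_state
    return transitions[-1][1], transitions[-1][2]

output, state = step("neutral", "tissue_damage")
print(output, "->", state)  # usually "say 'ouch' -> pain", sometimes silence
```

On the functionalist reading, being in 'pain' just is occupying the state with this transition profile, whether the table is realized in neurons or in silicon.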

John Searle's Chinese Room example is an attempt to demonstrate that computers which go through the motions of 'thought' and 'understanding' cannot be considered to have consciousness. In this scenario, Searle imagines himself alone in a room, following a computer program for responding to Chinese characters. Searle understands nothing of Chinese, and yet, by following the program for manipulating the symbols just as a computer does, he produces strings of Chinese characters appropriate enough to fool those outside into thinking there is a Chinese speaker in the room. The Chinese Room is meant to show that computers operating on purely formal symbol manipulations do not understand (Stanford Encyclopedia of Philosophy). From this thought experiment, Searle generalizes that formal symbol manipulation is neither sufficient nor necessary for understanding: he can perform formal symbol manipulation with no understanding of Chinese, and he understands English without performing any formal manipulation of English symbols. On this basis, Searle concludes that machine functionalism does not explain mental states.
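The purely formal manipulation Searle has in mind can be pictured as a lookup from input shapes to output shapes. The following toy sketch is illustrative only (Searle's thought experiment specifies no particular program, and these Chinese strings are invented for the example); the point is that nothing in the program consults the meanings of the symbols.

```python
# A toy "rule book" pairing input symbol strings with output symbol strings.
# To the program these are opaque shapes, exactly as the Chinese characters
# are to Searle in the room; only the comments carry their meanings.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",    # "How are you?" -> "I'm fine, thanks"
    "你叫什么名字": "我叫小明",  # "What's your name?" -> "My name is Xiaoming"
}

def manipulate(symbols: str) -> str:
    """Return the output string the rule book pairs with the input string.

    The lookup succeeds or fails purely on the shape (identity) of the
    input string; meaning plays no role anywhere in the computation.
    """
    return RULE_BOOK.get(symbols, "请再说一遍")  # default: "Please say that again"

print(manipulate("你好吗"))  # an appropriate reply, with zero understanding
```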

Critics defend machine functionalism through the Systems Reply. This reply concedes that the man in the room does not understand Chinese, since he is merely a 'part' within a larger 'system', much as a central processing unit is only one component of an entire computer. The larger system consists of the database, the memory, and the instructions required for responding accurately to the Chinese questions. The argument holds that while the man running the program does not understand Chinese, the system as a whole does. In a similar sense, no single component of the brain understands anything in isolation, yet the brain as a whole provides the understanding. On this view, the Chinese Room scenario is flawed in forcing the audience to take the perspective of the implementer, which obscures the larger picture (Stanford Encyclopedia of Philosophy). Searle responds by suggesting that the man could internalize the entire system by memorizing all the instructions and the database; when he then leaves the room, he could be conversing in Chinese and yet would still have no way to attach any meaning to the formal symbols. The man is now the entire system, and he still would not understand Chinese. The Systems Reply thus fails to differentiate human thought from mere processing.

Another counter-argument to Searle's Chinese Room is the Brain Simulator Reply. This reply raises the possibility of a machine that correctly simulates the exact brain functioning of a native Chinese speaker, reproducing the very neuron activity that would occur in the speaker's brain. The machine and the brain would then operate in exactly the same manner: the machine would run a whole set of programs in parallel, much as the actual brain presumably operates when processing language. In this case, we would have to say that the machine understood Chinese; to refuse would be to deny that native Chinese speakers understand the stories they are told (Stanford Encyclopedia of Philosophy). In response, Searle argues that even if the simulation reproduced the exact wiring of a native speaker's brain, the program would still only dictate which connections fire in response to which inputs; on his view, there would still be no understanding of Chinese.
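What 'simulating the actual neuron activity' could amount to can be pictured with a deliberately toy sketch of threshold units passing activation along weighted connections (all neuron names, weights, and thresholds below are invented, and real neural simulation is far more complex). Searle's contention is that faithfully running even a full-scale version of such a simulation would add no understanding.

```python
# Toy threshold-unit network: each "neuron" fires when the summed weight of
# its active inputs crosses a threshold. Entirely illustrative; the wiring
# bears no relation to any real brain.
WEIGHTS = {
    # (from_neuron, to_neuron) -> synaptic weight
    ("input_a", "hidden"): 0.6,
    ("input_b", "hidden"): 0.5,
    ("hidden", "output"): 1.0,
}
THRESHOLD = 1.0

def fires(neuron: str, active: set) -> bool:
    """A neuron fires when its active inputs' summed weight reaches threshold."""
    total = sum(w for (src, dst), w in WEIGHTS.items()
                if dst == neuron and src in active)
    return total >= THRESHOLD

active = {"input_a", "input_b"}  # both input neurons stimulated
if fires("hidden", active):
    active.add("hidden")
print("output fires:", fires("output", active))  # True: activity propagates
```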

Related to the preceding, the Other Minds Reply asks how we ever establish that other people understand anything: all we can observe is the relationship between the inputs they receive and the behavior they produce. The only way to judge understanding, on this reply, is by behavior; if a machine were to give responses that would be recognized as coherent human responses, then whatever grounds we have for attributing cognition to humans would apply to the machine as well (Stanford Encyclopedia of Philosophy). Searle responds by suggesting that the question at hand is not why we believe other people have mental states, but rather what it is that something has when we say it has mental states or understanding. What we are attributing in attributing understanding to other minds, Searle suggests, is much more than complex behavioral dispositions.

In conclusion, Searle believes that the Chinese Room argument supports a much larger point: although computers may be able to manipulate syntax to produce appropriate responses to natural language input, they do not understand the sentences they receive, for they cannot associate meanings with the words. Syntax is neither sufficient for nor constitutive of semantics, and this, Searle argues, refutes the theory of machine functionalism. As explored above, the Systems Reply is rebutted because it fails to differentiate human thought from mere processing. Against the Brain Simulator Reply, even an exact replica of a native Chinese speaker's brain, given the same inputs, offers no proof that understanding, rather than mere causal processing, produces its outputs. Lastly, the Other Minds Reply can be rebutted on the basis that what we are attributing in attributing understanding to other minds is much more than complex behavioral dispositions. Overall, the Chinese Room argument brings to light deep questions about the nature of meaning, its relation to syntax, and the biological basis of consciousness.
