John Searle and Strong AI
John Searle described the philosophical position he calls “strong AI” as the claim that an appropriately programmed computer, given the right inputs and outputs, thereby has a mind in the same sense that a human being does. The definition is one of the most widely cited statements of the claim that programmed systems may someday think and solve various problems (Starks, 2019). To examine it, Searle mounted a remarkable discussion of the foundations of cognitive science and artificial intelligence, set out in his well-known Chinese room argument of 1980, which he framed to counter scientific critics.
Effectiveness of Searle’s argument
Searle’s argument is more effective than, and different from, the traditional AI approach. First, unlike Descartes, who suggested that an immaterial soul interacts with the brain, Searle offers a better explanation. He adopts the materialist doctrine that souls do not exist in the physical world human beings inhabit. On the materialist reading, strong AI claims only that a computer running the correct program would be mental. “Program” is the key word in this context, because any suitable machine can implement a given program, which seems to provide an answer to the mind-body problem: what matters is that a programmed machine runs the correct software (Baron, 2017). On this view, the human mind is a piece of software implemented by the human brain. Therefore, in principle, one could code such a human-like program into a computer and thereby create a mental machine.
Secondly, Searle’s response to attempts to show his characterization of strong AI to be false is effective and distinctive. He argues that nobody can show the thesis false a priori, because it is difficult to understand the program of the human mind, and no one can understand the brain before conducting empirical tests. The main idea is to construct a zombie: a machine that is not mental under any kind of program (Baron, 2017). If such a machine existed, the falsity of strong AI would be supported, because no program would ever make it mental. Moreover, the philosopher suggests how to build such a machine and assess whether it contains thoughts: by implementing the machine ourselves. If people implement the system, we would be in a position to evaluate whether it is mental or not.
In addition, Searle’s demonstration against strong AI offers an effective argument. He illustrates a case in which a person is placed in a closed room that has two slots. Through slot 1 the individual is given Chinese characters that he or she does not recognize as words; the person cannot read Chinese. However, the person has a large rulebook that he or she applies to produce other Chinese characters from the ones provided (John, 2017). With the help of the rulebook, the individual finally passes the new characters out through the second slot. This mirrors a computer program: it takes an input, performs a computation, and finally emits an output. Searle extends the illustration by assuming the rulebook is good enough that people outside the room can converse in Chinese with the person inside.
For instance, they send the message “How are you?” Following the rulebook, the individual returns a meaningful answer. People outside could even ask the person inside whether he understands Chinese, and he would answer yes. In fact, the person following the rulebook does not understand Chinese at all; he merely follows rules. The crucial point of the case is that with any such rulebook (program), one never comes to comprehend the meanings of the characters one manipulates (John, 2017). Searle has thus described a program that can never be mental. Changing the program amounts only to changing the rulebook, which clearly does nothing to produce understanding.
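The rulebook-as-program analogy can be made concrete with a minimal sketch. The table below is purely illustrative (the phrases and the lookup structure are my own hypothetical examples, not Searle’s): the program matches input symbols to output symbols without ever attaching any meaning to them.

```python
# Illustrative sketch of the Chinese room: the "rulebook" is a
# lookup table mapping input character strings to output strings.
# The program manipulates the symbols purely by matching; nothing
# in it interprets what the characters mean.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "你懂中文吗？": "是的，我懂中文。",    # "Do you understand Chinese?" -> "Yes, I do."
}

def room_reply(symbols: str) -> str:
    """Return the rulebook's output for the given input symbols.

    The function never interprets the characters; it only matches
    them against table entries, just as the person in Searle's room
    follows rules without understanding Chinese.
    """
    return RULEBOOK.get(symbols, "？")  # default: an uninterpreted symbol

print(room_reply("你好吗？"))
```

Outsiders who receive fluent replies might credit the system with understanding, yet nothing in the table or the lookup constitutes understanding; swapping in a bigger table changes only the rulebook, which is exactly Searle’s point.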
The philosopher then considers counter-responses to his position. According to the first, the systems reply, the individual in the room admittedly does not understand the story, but he is only part of a whole system, and it is the system that understands. The system comprises a large ledger of written rules, paper and pencil for performing calculations, the Chinese symbols, and data banks. Understanding the language is then ascribed not to the individual alone but to the whole system of which the person is a part.
Searle answers that the person can internalize all the components of the system (John, 2017). He or she memorizes the data banks of Chinese symbols and the ledger rules and carries out all the calculations in his or her head, eventually incorporating the whole system so that no element is left out. The person could even leave the room and work outdoors. Still, neither the person nor the program understands Chinese. Even when the individual produces an answer to a question, he is embarrassed, because he does not understand the answer he is giving.
The systems reply amounts to the idea that while the individual fails to understand Chinese, somehow the conjunction of the person and bits of paper might comprehend the language. On this version of the idea, once the person has internalized the system, all kinds of non-cognitive elements would have to become cognitive (John, 2017). For instance, there is a level of description at which the stomach performs information processing, just as computer programs do; yet we consider it to have no understanding, even though without it the human system would be incomplete. Likewise, without other organs such as the heart and liver, human existence would be impossible. It is the same with inputs and outputs: the Chinese-language system facilitates communication, but not understanding.
In the second counter-response, the robot reply, one supposes that a computer is placed inside a robot. In such a case the program does not merely accept formal symbols as input and produce formal symbols as output. The computer also operates the robot so that it walks, perceives, hammers nails, and performs other tasks. The robot would have legs, hands, and a television-like camera attached to help it see and carry out various activities. All operations would be controlled by its computer brain, so that, on this reply, the robot would have mental states and genuine understanding.
One notable feature of this reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation; the system also interacts with the real world, responding to events and contributing to work output (John, 2017). Searle answers, however, that the addition of such motor and perceptual capacities adds nothing by way of understanding. Suppose that instead of placing a computer inside the robot, a person is locked inside a room within it. As in the Chinese case, he is provided with additional Chinese symbols and English instructions for matching the symbols and producing output.
Consequently, symbols unknown to the individual arrive from the robot’s television camera, and the symbols the person sends out serve to drive the motors that move the robot’s arms and legs. It is essential to insist that all the individual is doing is manipulating formal symbols. The person does not know how the robot’s movements are produced; he merely receives information from the robot’s camera and issues directions to its motor apparatus (John, 2017). The person thus acts as the robot’s homunculus without being able to explain what is happening. The only things he understands are the rules of manipulation.
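The same lookup picture extends to the robot reply. In this hypothetical sketch (the symbol names and rule table are invented for illustration), the controller maps uninterpreted sensor symbols to uninterpreted motor symbols by rule, never knowing what either set stands for.

```python
# Illustrative sketch of the robot reply: the person (or program)
# in the room maps symbols arriving from the robot's camera to
# symbols that drive its motors. The mapping is pure rule-following;
# the controller has no idea what any symbol refers to.

RULES = {
    "S1": "M_RAISE_ARM",      # some camera symbol -> some motor symbol
    "S2": "M_STEP_FORWARD",
    "S3": "M_HAMMER_NAIL",
}

def control_step(sensor_symbol: str) -> str:
    """Map one incoming sensor symbol to one outgoing motor symbol.

    Nothing here "perceives" or "intends": the function does not know
    that 'S1' came from a camera or that 'M_RAISE_ARM' moves an arm.
    """
    return RULES.get(sensor_symbol, "M_NOOP")

# A stream of camera symbols yields a stream of motor commands.
commands = [control_step(s) for s in ["S2", "S1", "S3"]]
print(commands)
```

Adding sensors and motors only lengthens the pipeline of uninterpreted symbols; on Searle’s view, it adds perception-like inputs and action-like outputs without adding understanding.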
In the combination reply, the case for strong artificial intelligence looks most convincing to its proponents: when all the previous cases are combined, the resulting system appears more decisive. Imagine a robot with a brain-shaped computer lodged in its cranial cavity, programmed to replicate all the synapses of a human brain (John, 2017), so that the robot’s entire behavior is indistinguishable from a human being’s. Thinking of the whole structure as one unified thing, and not just as a computer with inputs and outputs, one would be tempted to ascribe intentionality to the system.
In such a situation, it might seem irresistible and rational to accept the hypothesis that, so long as human beings know nothing more about the robot’s operations, it has intentionality. Apart from the robot’s behavior and appearance, the other components of the combination are really irrelevant: if a robot exhibiting the full range of human behavior could be built, intentionality would be attributed to it, because people would not need to know in advance whether its computer brain was in any way analogous to a human brain.
According to Newell, the essence of the mental is the operation of a physical symbol system. However, Searle notes that the intentionality attributed to the robot has nothing to do with formal programs; it rests solely on the assumption that if the robot looks and acts sufficiently like a human being, it must have mental states, and that its inner mechanisms produce those states (John, 2017). But if people could independently explain the system’s behavior without such assumptions, they would not attribute intentionality to it.
Suppose now that humans learned that the robot’s behavior was wholly accounted for by a person inside it who was receiving uninterpreted formal symbols from the robot’s sensors and conveying uninterpreted formal symbols to its motor mechanisms. Suppose, further, that this individual was manipulating the symbols according to a set of rules and understood none of the robot’s operations, merely following rules about which operation to perform on which meaningless symbol. In such a situation, the robot would be regarded as an ingenious mechanical dummy.
The hypothesis that the system has a mind would then be unnecessary and unwarranted, because there would no longer be any reason to ascribe intentionality to the robot or its components. The only place to locate intentionality would be the individual’s act of manipulating symbols, which continues to match inputs to outputs correctly. The only actual locus of intentionality is the man, yet he has none of the relevant intentional states (John, 2017): he does not see what comes into the robot’s eye, he does not intend to move the robot’s arm, and he does not understand any of the remarks made to or by the robot.
To press the point, the philosopher contrasts the robot case with certain other primates, such as monkeys and apes. Human beings find it completely natural to ascribe intentionality to these species and to domestic animals such as cats and dogs, and they do so for two reasons. First, the animals’ behavior cannot be made sense of without the ascription of intentionality. Second, the animals are made of stuff similar to our own: they have noses, eyes, legs, skin, and so on.
Given the coherence of the animals’ conduct together with the assumption of similar stuff, we suppose that the animals must have mental states underlying their behavior, and that those mental states must be produced by mechanisms made of stuff like ours. Similar assumptions could be made about a robot; but once we knew that its behavior was the result of a formal program and that the physical properties of its material were irrelevant, we would abandon the assumption of intentionality (John, 2017).
In conclusion, Searle’s argument concerning strong artificial intelligence attempts to show that a digital computer executing a particular program cannot thereby be said to have understanding, a mind, or consciousness (Maruyama, 2016). Moreover, however human-like or intelligent its behavior, a program can never amount to a human mental state. Some critics nevertheless maintain that modern computers running incredibly powerful software could become conscious.
The strong AI thesis he attacks holds that the computer is not merely a tool in the study of the mind: the appropriately programmed machine literally is a mind, so that computers given the right programs can be said literally to understand and to have other cognitive states. Against this, Searle insists that any artifact that produced mental states would have to duplicate the specific causal powers of the brain, and that it could not do so simply by running a formal program. Whatever the human brain does to create mental phenomena, it cannot be done solely by executing a computer program.
However, Searle does not fully establish that programs are insufficient for, and not constitutive of, minds. Programs lack the basic senses, such as touch, smell, sight, taste, and hearing, and the reasoning that equips human brains to cause minds (Hildt, 2019). He also does not discuss how computer programs relate to individual reasoning: whether they can solve a problem in their own capacity without being fed input symbols, or whether they can independently generate ideas that human beings could rely on (Dos, 2019). Furthermore, the philosopher has not shown how computers adapt to program changes such as software updates and the installation of new software.