Chinese room

The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by the philosopher John Searle in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room.

The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols. Specifically, the argument is intended to refute a position Searle calls Strong AI:

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

Although it was originally presented in reaction to the claims of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not place a limit on the amount of intelligence a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general.


Chinese room thought experiment

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.
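The premise can be made concrete with a short sketch. The following Python fragment is a toy illustration only: the "rule book" and its handful of phrases are invented here, and a program that could actually pass the Turing test would need an astronomically larger rule set. The point is that the mapping from input to output never consults what the symbols mean.

```python
# Toy sketch of the room's premise: input symbols are matched to output
# symbols by rule, with no reference to what any symbol means.
# The rules below are invented for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(input_symbols: str) -> str:
    """Look the input up by its shape alone and return the prescribed
    reply. Nothing in this function inspects meaning."""
    return RULE_BOOK.get(input_symbols, "请再说一遍。")  # "Please repeat that."

print(chinese_room("你好吗？"))  # a fluent-looking reply, zero understanding
```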

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle can receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step by step, producing behavior which is then interpreted as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes, "strong AI" is false.

History

Gottfried Leibniz made a similar argument in 1714 against mechanism (the position that the mind is a machine and nothing more). Leibniz used the thought experiment of expanding the brain until it was the size of a mill. Leibniz found it difficult to imagine that a "mind" capable of "perception" could be constructed using only mechanical processes. In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called the China brain, also the "Chinese Nation" or the "Chinese Gym".

The Chinese room argument was introduced in Searle's 1980 paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences. It eventually became the journal's most influential "target article", generating an enormous number of commentaries and responses in the ensuing decades, and Searle has continued to defend and refine the argument in many papers, popular articles and books. David Cole writes that "the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years".

Most of the discussion consists of attempts to refute it. "The overwhelming majority," notes BBS editor Stevan Harnad, "still think the Chinese Room Argument is dead wrong." The sheer volume of the literature that has grown up around it inspired Pat Hayes to comment that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false".

Searle's argument has become "something of a classic in cognitive science", according to Harnad. Varol Akman agrees, and has described the original paper as "an exemplar of philosophical clarity and purity".

Philosophy

Although the Chinese room argument was originally presented in reaction to the statements of AI researchers, philosophers have come to consider it an important part of the philosophy of mind. It is a challenge to functionalism and the computational theory of mind, and is related to such questions as the mind-body problem, the problem of other minds, the symbol-grounding problem, and the hard problem of consciousness.

Strong AI

Searle identifies a philosophical position he calls "strong AI":

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

The definition depends on the distinction between simulating a mind and actually having a mind. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."

The position is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder Herbert A. Simon declared that "there are now in the world machines that think, that learn and create", and claimed that they had "solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind." John Haugeland wrote that "AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves."

Searle also ascribes the following positions to advocates of strong AI:

  • AI systems can be used to explain the mind;
  • The study of the brain is irrelevant to the study of the mind; and
  • The Turing test is adequate for establishing the existence of mental states.

Strong AI as computationalism or functionalism

In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as "computer functionalism" (a term he attributes to Daniel Dennett). Functionalism is a position in modern philosophy of mind that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately represent functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism.

Stevan Harnad argues that Searle's depiction of strong AI can be reformulated as "recognizable tenets of computationalism, a position (unlike 'strong AI') that is actually held by many thinkers, and hence one worth refuting." Computationalism is the position in the philosophy of mind which argues that the mind can be accurately described as an information-processing system.

Each of the following, according to Harnad, is a "tenet" of computationalism:

  • Mental states are computational states (which is why computers can have mental states and help to explain the mind);
  • Computational states are implementation-independent; in other words, it is the software that determines the computational state, not the hardware (which is why the brain, being hardware, is irrelevant); and
  • Since implementation is unimportant, the only empirical data that matters is how the system functions; hence the Turing test is definitive.

Strong AI vs biological naturalism

Searle holds a philosophical position he calls "biological naturalism": that consciousness and understanding require specific biological machinery found in brains. He writes "brains cause minds" and that "actual human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains". Searle argues that this machinery (known to neuroscience as the "neural correlates of consciousness") must have some causal powers, as yet unexplained, that permit the human experience of consciousness. Searle's belief in the existence of these powers has been criticized.

Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, "we are precisely such machines". Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using machinery that is non-computational. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding. However, without the specific machinery required, Searle does not believe that consciousness can occur.

Biological naturalism implies that one cannot determine whether the experience of consciousness is occurring merely by examining how a system functions, because the specific machinery of the brain is essential. Thus, biological naturalism is directly opposed to both behaviorism and functionalism (including "computer functionalism" or "strong AI"). Biological naturalism is similar to identity theory (the position that mental states are "identical to" or "composed of" neurological events); however, Searle has specific technical objections to identity theory. Searle's biological naturalism and strong AI are both opposed to Cartesian dualism, the classical idea that the brain and mind are made of different "substances". Indeed, Searle accuses strong AI of dualism, writing that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter."

Consciousness

Searle's original presentation emphasized "understanding", that is, mental states with what philosophers call "intentionality", and did not directly address other closely related ideas such as "consciousness". However, in more recent presentations Searle has included consciousness as the real target of the argument.

Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.

David Chalmers writes that "it is fairly clear that consciousness is at the root of the matter" of the Chinese room.

Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of the Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever simulation inhabits the room.

Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese.

Applied Ethics

Patrick Hew used the Chinese room argument to deduce requirements for military command and control systems if they are to preserve a commander's moral agency. He drew an analogy between a commander in their command center and the person in the Chinese room, and analyzed it under a reading of Aristotle's notions of "compulsory" and "ignorance". Information could be "down converted" from meaning to symbols and manipulated symbolically, but moral agency could be undermined if there was inadequate "up conversion" into meaning. Hew cited examples from the USS Vincennes incident.

Computer science

The Chinese room argument is primarily an argument in the philosophy of mind, and both major computer scientists and artificial intelligence researchers consider it irrelevant to their fields. However, several concepts developed by computer scientists are essential to understanding the argument, including symbol processing, Turing machines, Turing completeness, and the Turing test.

Strong AI vs. AI research

Searle's arguments are not usually considered an issue for AI research. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the strong AI hypothesis"; as long as the program works, they do not care whether it is called a simulation of intelligence or real intelligence. The primary mission of artificial intelligence research is only to create useful systems that act intelligently, and it does not matter if the intelligence is "merely" a simulation.

Searle does not disagree that AI research can create machines capable of highly intelligent behavior. The Chinese room argument leaves open the possibility that a digital machine could be built that acts more intelligently than a person, but does not have a mind or intentionality in the same way that brains do. Indeed, Searle writes that "the Chinese room argument ... assumes complete success on the part of artificial intelligence in simulating human cognition."

"AI strong" Searle should not be equated with "strong AI" as defined by Ray Kurzweil and other futurists, who use the term to describe machine intelligence that rivals or exceeds human intelligence. Kurzweil is concerned primarily with the number of intelligence displayed by machines, while Searle's argument does not limit this. Searle argues that even a very intelligent machine will have no mind and consciousness.

Turing test

The Chinese room implements a version of the Turing test. Alan Turing introduced the test in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural-language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.
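The test's structure can be sketched as a simple protocol. Below is a schematic Python rendering in which the judge and both contestants are placeholder functions (all of the names here are hypothetical stand-ins, not any real system):

```python
import random

def imitation_game(judge, human, machine, questions):
    """One schematic session of the standard Turing test: the judge sees
    only text transcripts from two hidden contestants and must guess
    which label belongs to the machine."""
    contestants = [("human", human), ("machine", machine)]
    random.shuffle(contestants)            # hide which contestant is which
    labels = dict(zip("AB", contestants))  # {"A": (kind, fn), "B": (kind, fn)}
    transcripts = {label: [respond(q) for q in questions]
                   for label, (_, respond) in labels.items()}
    guess = judge(transcripts)             # judge returns "A" or "B"
    return labels[guess][0] == "machine"   # True if the judge caught it

# The machine is said to pass if, over many sessions, the judge's
# success rate stays near chance (50%).
```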

Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure the presence of "consciousness" or "understanding". He did not believe this was relevant to the issues he was addressing. He wrote:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.

To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.

Symbol processing

The Chinese room (and all modern computers) manipulates physical objects in order to carry out calculations and do simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic.

Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using a form of syntax rules, without any knowledge of the symbols' semantics (that is, their meaning).
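As a concrete instance of syntax without semantics, consider the rewrite rules of Hofstadter's well-known MIU formal system (chosen here purely as an illustration). Each rule fires on the shape of a string alone; the sketch below applies them with no notion of what, if anything, the strings mean.

```python
# Purely syntactic derivation in a small formal system (Hofstadter's MIU
# rules, used here only as an example). Rules match shapes, not meanings.

def apply_rules(s: str) -> set:
    """Return every string derivable from s in a single rewrite step."""
    results = set()
    if s.endswith("I"):
        results.add(s + "U")                       # xI  -> xIU
    if s.startswith("M"):
        results.add("M" + s[1:] * 2)               # Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            results.add(s[:i] + "U" + s[i + 3:])   # III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            results.add(s[:i] + s[i + 2:])         # UU  -> (nothing)
    return results

print(apply_rules("MIII"))  # {'MU', 'MIIIU', 'MIIIIII'}, found by shape alone
```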

Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the machinery necessary for "general intelligent action", or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action." The Chinese room argument does not dispute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.

Chinese room and Turing completeness

The Chinese room has a design analogous to that of a modern computer. It has a Von Neumann architecture, which consists of a program (the book of instructions), some memory (the papers and file cabinets), a CPU which follows the instructions (the man), and a means to write symbols in memory (the pencil and eraser). A machine with this design is known in theoretical computer science as "Turing complete", because it has the machinery necessary to carry out any computation that a Turing machine can do, and therefore it is capable of doing a step-by-step simulation of any other digital machine, given enough memory and time. Alan Turing writes, "all digital computers are in a sense equivalent." The widely accepted Church-Turing thesis holds that any function computable by an effective procedure is computable by a Turing machine.
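The point can be made concrete with a minimal interpreter. The sketch below simulates an arbitrary Turing machine step by step; the particular machine it runs (a unary incrementer) is an illustrative choice, but the interpreter loop is exactly the kind of mechanical rule-following the room's occupant performs.

```python
def run_turing_machine(rules, tape, state="start", blank="_"):
    """Step-by-step Turing machine simulation. 'rules' maps
    (state, symbol) -> (symbol_to_write, move "L"/"R", next_state)."""
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Illustrative machine: append one '1' to a unary-encoded number.
increment = {
    ("start", "1"): ("1", "R", "start"),  # walk right over the 1s
    ("start", "_"): ("1", "R", "halt"),   # write a 1 on the first blank
}
print(run_turing_machine(increment, "111"))  # prints "1111"
```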

The Turing completeness of the Chinese room implies that it can do whatever any other digital computer can do (albeit much, much more slowly). Thus, if the Chinese room does not or can not contain a Chinese-speaking mind, then no other digital computer can contain a mind. Some replies to Searle begin by arguing that the room, as described, cannot have a Chinese-speaking mind. Arguments of this form, according to Stevan Harnad, are "no refutation (but rather an affirmation)" of the Chinese room argument, because these arguments actually imply that no digital computers can have a mind.

There are some critics, such as Hanoch Ben-Yami, who argue that the Chinese room cannot simulate all the abilities of a digital computer, such as being able to determine the current time.

Complete argument

Searle has produced a more formal version of the argument of which the Chinese room forms a part. He presented the first version in 1984. The version given below is from 1990. The only part of the argument which should be controversial is A3, and it is this point which the Chinese room thought experiment is intended to prove.

He begins with three axioms:

(A1) "Programs are formal (syntax)."
A program uses syntax to manipulate symbols and not pay attention to semantic symbols. He knows where to put the symbols and how to move them, but does not know what they stand for or what they mean. For the program, the symbols are just physical objects like the others.
(A2) "The mind has a mental content (semantics)."
Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what they represent.
(A3) "Syntax is by itself unconstitutional or insufficient for semantics."
This is what Chinese space experiments mean to prove: Chinese space has a syntax (because there is a man in there moving symbols around). The Chinese space does not have semantics (because, according to Searle, there is no one or nothing in the room that understands the meaning of symbols). Therefore, having the syntax is not sufficient to generate semantics.

Searle argues that these lead directly to this conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds.
This should follow without controversy from the first three: Programs have no semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore no programs are minds.
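Schematically, the derivation of C1 can be written as follows (a loose formalization for readability, not Searle's own notation):

```latex
% Loose schematic of A1-A3 => C1; not Searle's own notation.
\begin{align*}
\text{(A1)}\quad & \mathrm{Program}(x) \Rightarrow \mathrm{Syntactic}(x)\\
\text{(A2)}\quad & \mathrm{Mind}(x) \Rightarrow \mathrm{Semantic}(x)\\
\text{(A3)}\quad & \mathrm{Syntactic}(x) \nRightarrow \mathrm{Semantic}(x)\\
\text{(C1)}\quad & \therefore\ \mathrm{Program}(x) \nRightarrow \mathrm{Semantic}(x),
                   \ \text{hence}\ \mathrm{Program}(x) \nRightarrow \mathrm{Mind}(x)
\end{align*}
```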

This much of the argument is intended to show that artificial intelligence can never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue. Is the human brain running a program? In other words, is the computational theory of mind correct? He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds:

(A4) Brains cause minds.

Searle claims that we can derive "immediately" and "trivially" that:

(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
Brains must have something that causes a mind to exist. Science has yet to determine exactly what it is, but it must exist, because minds exist. Searle calls it "causal powers". "Causal powers" is whatever the brain uses to create a mind. If anything else can cause a mind to exist, it must have "equivalent causal powers". "Equivalent causal powers" is whatever else could be used to make a mind.

And from this he derives the further conclusions:

(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
This follows from C1 and C2: since no program can produce a mind, and "equivalent causal powers" produce minds, it follows that programs do not have "equivalent causal powers".
(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
Since programs do not have "equivalent causal powers", "equivalent causal powers" produce minds, and brains produce minds, it follows that brains do not use programs to produce minds.

Replies

Replies to Searle's argument may be classified according to what they claim to show:

  • Those which identify who speaks Chinese
  • Those which demonstrate how meaningless symbols can become meaningful
  • Those which suggest that the Chinese room should be redesigned in some way
  • Those which contend that Searle's argument is misleading
  • Those which argue that the argument makes false assumptions about subjective conscious experience and therefore proves nothing

Some of the arguments (robot and brain simulation, for example) fall into multiple categories.

Systems and virtual mind replies: finding the mind

These replies attempt to answer the question: since the man in the room does not speak Chinese, where is the "mind" that does? These replies address the key ontological issues of mind vs. body and simulation vs. reality. All of the replies that identify the mind in the room are versions of "the system reply".

System reply
The basic version of the system reply argues that it is the "whole system" that understands Chinese. While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese. "Here, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part," Searle explains. The fact that the man does not understand Chinese is irrelevant, because it is only the system as a whole that matters.

Searle notes that (in this simple version of the reply) the "system" is nothing more than a collection of ordinary physical objects; it grants the power of understanding and consciousness to "the conjunction of that person and bits of paper" without making any effort to explain how this pile of objects has become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the reply, unless they are "in the grip of an ideology"; in order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information-processing "system", and does not require anything resembling the actual biology of the brain.

Searle then responds by simplifying this list of physical objects: he asks what happens if the man memorizes the rules and keeps track of everything in his head? Then the whole system consists of just one object: the man himself. Searle argues that if the man does not understand Chinese, then the system does not understand Chinese either, because now "the system" and "the man" both describe exactly the same object.

Critics of Searle's response argue that the program has allowed the man to have two minds in one head. If we assume a "mind" is a form of information processing, then the theory of computation can account for two computations occurring at once, namely (1) the computation for universal programmability (instantiated by the person and his note-taking materials independently of any particular program contents) and (2) the computation of the Turing machine that is described by the program (instantiated by everything including the specific program). The theory of computation thus formally explains the open possibility that the second computation in the Chinese room could entail a human-equivalent semantic understanding of the Chinese inputs. The focus belongs on the program's Turing machine rather than on the person. However, from Searle's perspective, this argument is circular. The question at issue is whether consciousness is a form of information processing, and this reply requires that we make that assumption.

More sophisticated versions of the systems reply try to identify more precisely what "the system" is, and they differ in exactly how they describe it. According to these replies, the "mind that speaks Chinese" could be such things as: the "software", a "program", a "running program", a simulation of the "neural correlates of consciousness", the "functional system", a "simulated mind", an "emergent property", or "a virtual mind" (Marvin Minsky's version of the systems reply, described below).

Virtual mind reply
The term "virtual" is used in computer science to describe an object that appears to exist "in" a computer (or computer network) only because software makes it appear to exist. The objects "inside" computers (including files, folders, and so on) are all "virtual", except for the computer's electronic components. Similarly, Minsky argues, a computer may contain a "mind" that is virtual in the same sense as virtual machines, virtual communities and virtual reality.
To clarify the distinction between the simple systems reply given above and the virtual mind reply, David Cole notes that two simulations could be running on one system at the same time: one speaking Chinese and one speaking Korean. While there is only one system, there can be multiple "virtual minds", thus the "system" cannot be the "mind".
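Cole's point is easy to picture in software terms. In the sketch below, one host process runs two independent responder "agents" with separate rules and separate state; the rule books are invented stand-ins, but the structure shows how a single system can host several virtual entities at once.

```python
# Sketch of Cole's observation: one physical system, several "virtual"
# conversants. The two rule books are invented stand-ins for a
# Chinese-speaking and a Korean-speaking program on the same host.

def make_agent(rule_book, fallback):
    """Each call builds an independent responder: same host, same
    hardware, but its own rules and its own conversation history."""
    history = []
    def respond(message):
        history.append(message)
        return rule_book.get(message, fallback)
    return respond

chinese = make_agent({"你好": "你好！"}, "请再说一遍。")
korean = make_agent({"안녕하세요": "안녕하세요!"}, "다시 말해 주세요.")

# Two independent "virtual minds" served by one and the same system.
print(chinese("你好"), korean("안녕하세요"))
```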

Searle responds that such a mind is, at best, a simulation, and writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched." Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that 'it isn't really a calculator', because the physical attributes of the device do not matter." The question is, is the human mind like the pocket calculator, essentially composed of information? Or is the mind like the rainstorm, something other than a computer, not realizable in full by a computer simulation? (The issue of simulation is also discussed in the article synthetic intelligence.)

These replies provide an explanation of exactly who it is that understands Chinese. If there is something besides the man in the room that can understand Chinese, Searle cannot argue that (1) the man does not understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.

However, these replies, by themselves, do not prove that strong AI is true either: they provide no evidence that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing test. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese."

Robot and semantics replies: finding the meaning

As far as the person in the room is concerned, the symbols are just meaningless "squiggles". But if the Chinese room really "understands" what it is saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize. These replies address Searle's concerns about intentionality, symbol grounding, and syntax vs. semantics.

Robot reply
Suppose that instead of a room, the program were placed into a robot that could wander around and interact with its environment. This would allow a "causal connection" between the symbols and the things they represent. Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."
Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs come directly from a camera mounted on a robot, and some of the outputs are used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes." (See Mary's room for a similar thought experiment.)
Derived meaning
Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in its file cabinets. The symbols Searle manipulates are already meaningful; they are just not meaningful to him.
Searle says that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, like a book, has no understanding of its own.
Commonsense knowledge/contextualist reply
Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.
Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.

To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program, and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic, and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."

However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle; what matters is what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.

Brain simulation and connectionist replies: redesigning the room

These arguments are all versions of the systems reply that identify a particular kind of system as being important; they identify some special technology that would create conscious understanding in a machine. (The "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being important.)

Brain simulator reply
Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker. This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.
Searle replies that such a simulation does not reproduce the important features of the brain: its causal and intentional states. He is adamant that "human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains." Moreover, he argues:

[I]magine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination.
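To give a sense of what "simulating every neuron" amounts to mechanically, here is a sketch of one update step for a single model neuron. The leaky integrate-and-fire model and its constants are illustrative assumptions of this sketch, not anything in Searle's text; the point is that each update, like each valve-turn in the water-pipe version, is a bare state change performed by rule.

```python
# One mechanical update step of a toy leaky integrate-and-fire neuron.
# Model choice and constants are illustrative assumptions.

def lif_step(v, input_current, leak=0.1, threshold=1.0):
    """Advance one neuron one time step; return (new_voltage, spiked)."""
    v = v * (1.0 - leak) + input_current   # decay, then integrate input
    if v >= threshold:
        return 0.0, True                   # fire and reset
    return v, False

v = 0.0
for t in range(8):
    v, spiked = lif_step(v, input_current=0.2)
    print(t, round(v, 3), spiked)          # the neuron fires around t = 6
```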

Two variations on the brain simulator reply are:
China brain
What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between axons and dendrites? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying.
Brain replacement scenario
In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he could claim that conscious awareness ends and mindless simulation begins. Searle predicts that, while going through the brain prosthesis, "you find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when doctors test your vision, you hear them say 'We are holding up a red object in front of you; please tell us what you see.' You want to cry out 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way that is completely out of your control, 'I see a red object in front of me.' [...] [Y]our conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same." (See Ship of Theseus for a similar thought experiment.)
Connectionist reply
Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding.
Combination reply
This response combines the robot reply with the brain simulation reply, arguing that a brain simulation connected to the world through a robot body could have a mind.
Many mansions/wait till next year reply
Better technology in the future will allow computers to understand. Searle agrees that this is possible, but considers the point irrelevant. His argument is that a machine using a program to manipulate formally defined elements cannot produce understanding. Searle's argument, if correct, rules out only this particular design. Searle agrees that there may be other designs that would cause a machine to have conscious understanding.

These arguments (and the robot or commonsense knowledge replies) identify some special technology that would help create conscious understanding in a machine. They may be interpreted in two ways: either they claim (1) this technology is required for consciousness, the Chinese room does not or cannot implement this technology, and therefore the Chinese room cannot pass the Turing test or (even if it did) would not have conscious understanding. Or they may be claiming that (2) it is easier to see that the Chinese room has a mind if we visualize this technology as being used to create it.

In the first case, where features like a robot body or a connectionist architecture are required, Searle claims that strong AI (as he understands it) has been abandoned. The Chinese room has all the elements of a Turing complete machine, and thus is capable of simulating any digital computation whatsoever. If Searle's room cannot pass the Turing test, then there is no other digital technology that could pass the Turing test. If Searle's room could pass the Turing test, but still does not have a mind, then the Turing test is not sufficient to determine if the room has a "mind". Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument.

The brain arguments in particular deny strong AI if they assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works." If computation does not provide an explanation of the human mind, then strong AI has failed, according to Searle.

Other critics hold that the room as Searle described it does, in fact, have a mind, but they argue that it is difficult to see: Searle's description is correct, but misleading. By redesigning the room more realistically they hope to make this more obvious. In this case, these arguments are being used as appeals to intuition (see the next section).

In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's Blockhead argument suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form "if the user writes S, reply with P and goto X". At least in principle, any program can be rewritten (or "refactored") into this form, even a brain simulation. In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a memory address, a number associated with the next rule. It is hard to visualize that an instant of one's conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims. On the other hand, such a lookup table would be ridiculously large (to the point of being physically impossible), and the states could therefore be extremely specific.
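A toy version of Block's construction makes the point concrete. In the sketch below every rule has exactly the quoted form "if the user writes S, reply with P and goto X", and the machine's entire "mental state" is the single number X; the table entries are invented for illustration.

```python
# Blockhead sketch: conversation by pure table lookup. The whole
# "mental state" is the current rule address X. Entries are invented.

TABLE = {
    # (state X, user writes S): (reply P, goto X')
    (0, "hello"): ("hi there", 1),
    (1, "how are you?"): ("fine, and you?", 2),
    (2, "fine"): ("glad to hear it", 0),
}

def blockhead(state, user_input, default="hmm?"):
    """No inference, no understanding: look up (X, S), emit P, goto X'."""
    return TABLE.get((state, user_input), (default, state))

state = 0
for line in ["hello", "how are you?", "fine"]:
    reply, state = blockhead(state, line)
    print(f"{reply}    [mental state: {state}]")
```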

Searle argues that however the program is written or however the machine is connected to the world, the mind is being simulated by a simple step-by-step digital machine (or machines). These machines are always just like the man in the room: they understand nothing and do not speak Chinese. They are merely manipulating symbols without knowing what they mean. Searle writes: "I can have any formal program you like, but I still understand nothing."

Speed and complexity: appeals to intuition

The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese-speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions, they support other positions, such as the system and robot replies. These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious, by undermining the intuitions that his certainty requires.

Several critics believe that Searle's argument relies entirely on intuitions. Ned Block writes "Searle's argument depends for its force on intuitions that certain entities do not think." Daniel Dennett describes the Chinese room argument as a misleading "intuition pump" and writes "Searle's thought experiment depends, illegitimately, on your imagining too simple a case, an irrelevant case, and drawing the 'obvious' conclusion from it."

Some of the arguments above also function as appeals to intuition, especially those that are intended to make it seem more plausible that the Chinese room contains a mind, which can include the robot, commonsense knowledge, brain simulation and connectionist replies. Several of the replies above also address the specific issue of complexity. The connectionist reply emphasizes that a working artificial intelligence system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge", as Daniel Dennett explains.

Speed and complexity replies
The speed at which human brains process information is (by some estimates) 100 billion operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
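Taking the section's figures at face value, a back-of-the-envelope estimate shows the scale involved (assuming, purely for illustration, that the man executes about one rule per second):

```latex
% Back-of-the-envelope estimate using the figures quoted above.
\[
\frac{10^{11}\ \text{operations per second of simulated brain activity}}
     {1\ \text{operation per second by hand}}
  = 10^{11}\ \text{s} \approx 3{,}200\ \text{years per simulated second}
\]
```

On this estimate, a reply the simulated speaker would take ten seconds to compose costs the man roughly 32,000 years, and a conversation of a few minutes runs into the millions of years cited above.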

An especially vivid version of the speed and complexity reply comes from Paul and Patricia Churchland. They propose this analogous thought experiment:

Churchland's luminous room
"Consider a dark room containing a man holding a bar magnet or charged object. If the man pumps the magnet up and down, then, according to Maxwell's theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will thus be luminous. But as all of us who have toyed with magnets or charged balls well know, their forces (or any other forces for that matter), even when set in motion, produce no luminance at all. It is inconceivable that you might constitute real luminance just by moving forces around!" The problem is that he would have to wave the magnet up and down something like 450 trillion times per second in order to see anything.

Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity')."

Searle argues that his critics are also relying on intuitions; however, his opponents' intuitions have no empirical basis. He writes that, in order to consider the "system reply" as remotely plausible, a person must be "in the grip of an ideology". The system reply only makes sense (to Searle) if one assumes that any "system" can have consciousness, just by virtue of being a system with the right behavior and functional parts. This assumption, he argues, is not tenable given our experience of consciousness.

Other minds and zombies: meaninglessness

Several replies argue that Searle's argument is irrelevant because his assumptions about the mind and consciousness are faulty. Searle believes that human beings directly experience their consciousness, intentionality and the nature of the mind every day, and that this experience of consciousness is not open to question. He writes that we must "presuppose the reality and knowability of the mental." These replies question whether Searle is justified in using his own experience of consciousness to determine that it is more than mechanical symbol processing. In particular, the other minds reply argues that we cannot use our experience of consciousness to answer questions about other minds (even the mind of a computer), and the epiphenomena reply argues that Searle's consciousness does not "exist" in the sense that Searle thinks it does.

Other minds reply
This reply points out that Searle's argument is a version of the problem of other minds, applied to machines. There is no way we can determine whether other people's subjective experience is the same as our own. We can only study their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.

Nils Nilsson writes, "If a program behaves as if it multiply, most of us would say that it is, in fact, multiplying.For all I know, Searle may just behave as if- he thinks deeply about this, but even though I disagree with him, his simulation is pretty good, so I want to reward him with a real thought. "

Alan Turing anticipated Searle's line of argument (which he called "The Argument from Consciousness") in 1950 and made the other minds reply. He noted that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." The Turing test simply extends this "polite convention" to machines. He did not intend to solve the problem of other minds (for machines or people), and he did not think we need to.

Epiphenomenon/zombie reply
Several philosophers argue that consciousness, as Searle describes it, does not exist. This position is sometimes referred to as eliminative materialism: the view that consciousness is a property that can be reduced to a strictly mechanical description, and that our experience of consciousness is, as Daniel Dennett describes it, a "user illusion".
Stuart Russell and Peter Norvig argue that, if we accept Searle's description of intentionality, consciousness and the mind, we are forced to accept that consciousness is epiphenomenal: that it "casts no shadow", that it is undetectable in the outside world. They argue that Searle must be mistaken about the "knowability of the mental", and in his belief that there are "causal properties" in our neurons that give rise to the mind. They point out that, by Searle's own description, these causal properties cannot be detected by anyone outside the mind, otherwise the Chinese room could not pass the Turing test: the people outside would be able to tell there was not a Chinese speaker in the room by detecting their causal properties. Since they cannot detect causal properties, they cannot detect the existence of the mental. In short, Searle's "causal properties" and consciousness itself are undetectable, and anything that cannot be detected either does not exist or does not matter.

Daniel Dennett provides this extension to the "epiphenomena" argument.

Dennett's reply from natural selection
Suppose that, by some mutation, a human being is born that does not have Searle's "causal properties" but nevertheless acts exactly like a human being. (This sort of animal is called a "zombie" in thought experiments in the philosophy of mind.) This new animal would reproduce just as any other human, and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. So therefore, if Searle is right, it is most likely that human beings (as we see them today) are actually "zombies", who nevertheless insist they are conscious. It is impossible to know whether we are all zombies or not. Even if we are all zombies, we would still believe that we are not.

Searle disagrees with this analysis and argues that "the study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't ... what we wanted to know is what distinguishes the mind from thermostats and livers." He takes it as obvious that we can detect the presence of consciousness, and dismisses these replies as being off the point.

Newton's flaming laser sword reply
Mike Alder argues that the entire argument is frivolous, because it is non-verificationist: not only is the distinction between simulating a mind and having a mind ill-defined, but it is also irrelevant, because no experiments were, or even can be, proposed to distinguish between the two.

In popular culture

The Chinese room argument is a central concept in Peter Watts's novels Blindsight and (to a lesser extent) Echopraxia. It is also a central theme in the video game Virtue's Last Reward, and ties into the game's narrative. In Season 4 of the American crime drama Numb3rs there is a brief reference to the Chinese room.

The Chinese Room is also the name of a British independent video game development studio best known for working on experimental first-person games, such as Everybody's Gone to the Rapture and Dear Esther.

See also

  • Emergent behavior
  • No true Scotsman
  • Philosophical zombie

Further reading

  • "China Chamber Argument". Encyclopedia of Internet Philosophy .
  • China Chamber of Arguments, part 4 of interview 2 September 1999 with Searle Philosophy and Critical Thinking Habits in Conversations With History series
  • Understanding Chinese Space, Mark Rosenfelder
  • The denial of the "Chinese Chamber Room" John Searle, by Bob Murphy
  • Kugel, P. (2004). "Chinese space is a hoax". Behavioral and Brain Sciences . 27 . doi: 10.1017/S0140525X04210044. Ã, , PDF on the author's homepage, important papers based on the assumption that CR can not use its input (which is in Chinese) to change its program (which is in English).
  • Wolfram Schmied (2004). "Destroying Chinese Chamber Searle". arXiv: cs.AI/0403009 [cs.AI].
  • John Preston and Mark Bishop, "View to the Chinese Room", Oxford University Press, 2002. Includes chapters by John Searle, Roger Penrose, Stevan Harnad and Kevin Warwick.
  • Margaret Boden, "Escape from the Chinese Room", Cognitive Science Research Paper No. CSRP 092, University of Sussex, School of Cognitive Science, 1987, OCLCÃ, 19297071, PDF online, "excerpts from chapters" in the next section of the unpublished "Mind of Computer Model": Ã,: Computational Approach in Theoretical Psychology ", ISBNÃ, 0- 521-24868-X (1988); reprinted in Boden (ed.) "The Philosophy of Artificial Intelligence" ISBN: 0-19-824854-7 (1989) and ISBN: 0-19-824855-5 (1990); Boden "Artificial Intelligence in Psychology: Interdisciplinary Essay" ISBN: 0-262-02285-0, MIT Press, 1989, chap. 6; reprinted in Heil, pp. 253-266 (1988) (probably summarized); J. Heil ( ed.) "Philosophy of Mind: Guidance and Anthology", Oxford University Press, 2004, page 253-266 (same version as in "Artificial Intelligence in Psychology")
  • John R. Searle, "What You Can not Know Computer" (review of Luciano Floridi, Fourth Revolution: How the Infosphere Reorganized the Human Reality , Oxford University Press, 2014 and Nick Bostrom, Superintelligence: Roads, Dangers, Strategies , Oxford University Press, 2014), The New York Review of Books , vol. LXI, no. 15 (October 9, 2014), p. 52-55.

Source of the article: Wikipedia
