Talk:Simon's Rock College tests Alan Turing theories with 'Imitation Game' experiment

Fascinating that the "Turing Test" has withstood the test of time. And why not? -Edbrown05 06:26, 19 Apr 2005 (UTC)
 * The question is, why were they using legacy chat software instead of jabber? ~The bellman | Smile  01:17, 20 Apr 2005 (UTC)

Well, it has been done before by some friends of mine (p.s. I hope I am not breaching wiki etiquette, but this is what the discussion page is for, isn't it?)
 * Excellent spot for it, and no, you're not breaching wikiquette. - Amgine



And if you've got the subscription, here:

http://ejournals.wspc.com.sg/ijmpc/15/1508/S0129183104006522.html


 * Although the experiments are based on the same Turing paper, the Simon's Rock students appear to have followed through on Turing's idea of using computer programs to answer the judge's questions, thus allowing the judge to contrive a question on the spot. The other paper used a set of survey questions, and did not address the question of whether a non-human would be considered male or female.
 * They are both excellent research projects, but I'm sure you'll agree they measured different things and, imo, the Simon's Rock group may have stayed nearer the original experiment. - Amgine

This is simply wrong
The Simon's Rock 'experiment' is most certainly not the imitation game. It is a pathetic attempt to 'dumb down' the Turing Test so that Dr. Wallace's creation, ALICE, can appear smarter.

Perhaps a better name for it might be "The Idle Chatter Game."

Turing was quite explicit in describing the 'original' imitation game. Turing wrote: - "It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?

Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:

"My hair is shingled, and the longest strands are about nine inches long."

In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.

We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"

Alan Turing, "Computing Machinery and Intelligence"

Note well what Turing wrote: "The object of the game for the interrogator is to determine which of the other two is the man and which is the woman."

Note that in the "original" imitation game the judge is quite aware of what is going on; that he or she must choose one of the two as the woman, and that the judge must ask the best questions he or she can think of and must evaluate the answers as best he or she can. In the Turing Test one simply replaces one of the humans with a machine.

Turing is describing what is technically known as "The Method of Paired Comparisons." This requires that a judge be forced to choose one of two options. It is a very powerful measurement technique.

In the "original" imitation game the object is to choose the woman from the pair man/woman. In the "Turing Test" it is to choose the human from the pair human/machine. If the judge can't tell, then operationally we may say that the computer is doing whatever the human is doing, i.e. acting "intelligent."

The essential points of the test are: (A) there are three entities, (1) a human, (2) a machine, and (3) a judge; (B) the judge knows that there is a test in progress; and (C) the human must convince the judge that he or she is the human, while the computer must convince the judge that it is the human. The sex of the human doesn't matter - at one point Turing discusses playing the game with a blind man.

When conducted as a "Paired Comparison" the test properly and sensibly tests the "intelligence" of the computer by comparing it to the "intelligence" of a human. When conducted in a manner such that the judge is not forced to make a paired comparison between the two entities, human and machine, the exercise becomes meaningless.
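The forced-choice structure described above can be made concrete with a minimal sketch. Everything here is illustrative: the one-line "transcripts" and the coin-flip judge are assumptions for demonstration, not part of any actual experiment. The point is only the protocol: each round the judge sees an anonymized pair labeled X and Y and must name one as the human, and we measure how often the judge is wrong.

```python
import random

def play_round(judge, human, machine):
    """One forced-choice round. Returns True if the judge misidentified the human."""
    pair = [("human", human()), ("machine", machine())]
    random.shuffle(pair)                          # the judge only sees labels X and Y
    labels = {"X": pair[0], "Y": pair[1]}
    guess = judge(labels["X"][1], labels["Y"][1]) # judge MUST answer "X" or "Y"
    return labels[guess][0] != "human"

# Toy stand-ins for the two contestants (hypothetical, not real chat programs).
human_answers   = lambda: "My hair is shingled, and about nine inches long."
machine_answers = lambda: "My hair is shingled, and about nine inches long."

# A judge who cannot tell the transcripts apart is reduced to guessing.
def coin_flip_judge(x_transcript, y_transcript):
    return random.choice(["X", "Y"])

random.seed(0)
rounds = 10_000
errors = sum(play_round(coin_flip_judge, human_answers, machine_answers)
             for _ in range(rounds))
# Turing's criterion: the machine does well if this misidentification rate is
# no lower than the rate in the man/woman baseline game (about 50% when the
# judge is guessing).
print(f"misidentification rate: {errors / rounds:.2%}")
```

Note the design point: because the judge is forced to pick one of the pair, a rate near 50% has a clear meaning (indistinguishability), which is exactly what is lost when the judge is not required to make a paired comparison.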

The Simon's Rock interpretation of the "imitation game" is doltish; it renders the affair meaningless, and Turing was not a dolt.


 * - Amgine 17:50, 21 Apr 2005 (UTC)


 * I don't know if calling this the "Simon's Rock interpretation" is really fair - Cameo has done a fantastic job putting all of this together, but she is an undergraduate, not a faculty member, and her views do not represent all of us at Simon's Rock who are interested in the Turing Test. Personally I think that this interpretation of the Turing Test is incorrect for a number of reasons, and I am not the only one who thinks so. From the Stanford Encyclopedia of Philosophy's article on The Turing Test:


 * 3.1 Interpreting the Imitation Game


 * Turing (1950) introduces the imitation game by describing a game in which the participants are a man, a woman, and a human interrogator. The interrogator is in a room apart from the other two, and is set the task of determining which of the other two is a man and which is a woman. Both the man and the woman are set the task of trying to convince the interrogator that they are the woman. Turing recommends that the best strategy for the woman is to answer all questions truthfully; of course, the best strategy for the man will require some lying. The participants in this game also use teletypewriters to communicate with one another -- to avoid clues that might be offered by tone of voice, etc. Turing then says: “We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?” (434)
 * Now, of course, it is possible to interpret Turing as here intending to say what he seems literally to say, namely, that the new game is one in which the computer must pretend to be a woman, and the other participant in the game is a woman. (See, for example, Genova (1994), and Traiger (2000).) And it is also possible to interpret Turing as intending to say that the new game is one in which the computer must pretend to be a woman, and the other participant in the game is a man who must also pretend to be a woman (cf. Sterrett (2000)). However, as Copeland (2000), Piccinini (2000), and Moor (2001) convincingly argue, the rest of Turing's article, and material in other articles that Turing wrote at around the same time, very strongly support the claim that Turing actually intended the standard interpretation that we gave above, viz. that the computer is to pretend to be a human being, and the other participant in the game is a human being of unspecified gender. Moreover, as Moor (2001) argues against the claims of Sterrett (2000), there is no reason to think that one would get a better test if the computer must pretend to be a woman and the other participant in the game is a man pretending to be a woman (and, indeed, there is some reason to think that one would get a worse test). Perhaps it would make no difference to the effectiveness of the test if the computer must pretend to be a woman, and the other participant is a woman (any more than it would make a difference if the computer must pretend to be an accountant and the other participant is an accountant); however, this consideration is simply insufficient to outweigh the strong textual evidence that supports the standard interpretation of the imitation game that we gave at the beginning of our discussion of Turing (1950).

Getting Straight about the OIG Test versus the Standard Turing Test
In the post of 21 April 2005, a writer quotes the encyclopedia article “Turing Test” from the Stanford Encyclopedia of Philosophy (SEP). Unfortunately, one of the statements quoted from the encyclopedia article: “Moreover, as Moor (2001) argues against the claims of Sterrett (2000)...” is not accurate. Here is the real story about that:

--- In Moor (2001), Moor (incorrectly) wrote that I appealed to “the existence of a control group” and argued for a “gender interpretation” of the test. However, my essay “Turing’s Two Tests for Intelligence” (Sterrett (2000)) doesn’t say anything about a control group, and I have never made such a statement orally, either. When Moor sent me an advance copy of Moor (2001) as a courtesy, I called Moor to ask him why he had attributed that statement to me; it turned out he was recalling a statement made instead by someone else at the Dartmouth conference in January 2000. It was too late for Moor to make changes to his article by that time, but he very graciously invited me to submit a reply to his paper to the journal. I did so, and my reply to Moor appeared in Minds and Machines as Sterrett (2002), “Nested Algorithms and the Original Imitation Game Test: A Reply to James Moor”. Another clarification I made there was that “on my view, the issue is NOT a matter of using a ‘gender interpretation’ in contrast to a ‘human interpretation’”.

--- Subsequently, in the entry on “The Turing Test” in the Stanford Encyclopedia of Philosophy, its authors (G. Oppy and D. Dowe) repeated Moor’s dismissal uncritically and did not include my reply to Moor among the references. I wrote to the authors, telling them that Moor’s dismissal of my view in Moor (2001) was based on things I never said, and supplied each of them with hard copies of my papers “Turing’s Two Tests for Intelligence” and “Nested Algorithms...”. They agreed to read the papers and to update the article accordingly. That was several years ago.

--- I urge the reader who really wants to see how I defined the OIG Test (“Original Imitation Game Test”), to understand the reasons I say that the OIG Test is not subject to the criticisms of the STT (“Standard Turing Test”), and my claim that gender distinctions are not essential to the OIG Test, to actually read my articles (referenced below) rather than uncritically accept these baseless dismissals of them. In fact, I do not know of any real criticisms of the specific points summarized in Figure 1 of Sterrett (2000) that I made in support of my claims about the superiority of the OIG Test. Another important claim I make about the OIG Test is that it realizes a possibility that philosophers have overlooked: a test that uses a human performance in establishing a test indicating intelligence but does not make similarity to that human performance the criterion of passing the test.

Further, I explicitly said that I was not making any historical claims about what test Turing had in mind, so attempts to put me on one side or the other of that debate are misguided. I think there is evidence for both the OIG Test and the STT in Turing’s paper. The way my paper relates to experiments such as the Simon’s Rock College experiment is that it explains the significance of the difference between the two different interpretations of the test, and so explains why actually carrying out an OIG Test would be a significant event.

--- I also ask writers not to repeat the Stanford Encyclopedia of Philosophy’s repeating of Moor’s dismissal of my paper based on his criticism of a claim I never made.

Posted by Susan G. Sterrett, Assistant Professor, Dept. of Philosophy, Duke University. June 19, 2005. sterrett@duke.edu  http://fds.duke.edu/db/aas/Philosophy/faculty/sterrett

References:

Moor, James H. (2001) "The Status and Future of the Turing Test", Minds and Machines v. 11, pp. 79-93.

Sterrett, Susan G. (2000) "Turing's Two Tests for Intelligence", Minds and Machines v. 10, pp. 541-559. (Reprinted in The Turing Test: The Elusive Standard of Artificial Intelligence, edited by James H. Moor, Kluwer Academic, 2003.)

Sterrett, Susan G. (2002) "Nested Algorithms and the Original Imitation Game Test: A Reply to James Moor", Minds and Machines v. 12, pp. 131-136.