Skepticism About Passing The Turing Test

Computer people have been abuzz recently because a program called Eugene Goostman supposedly passed the Turing Test. The Turing Test is supposed to be a big deal in artificial intelligence: a test of whether a computer program can be distinguished from a real person in conversation. However, Wired and others are unimpressed by Goostman, asserting that the program got lucky, had an advantage in posing as a 13-year-old non-native English speaker from Ukraine, and still only fooled people about 30 percent of the time.


However, regardless of whether or not Goostman actually achieved something here, I think questions exist about what the Turing Test measures in general, given that the test boils down to fooling people. I mean, is this a measure of a computer program's sophistication, or a measure of the intelligence (or gullibility) of the people who talk to it? Are there any standards for how intelligent or discerning those people have to be?


Absent such standards, the test seems to depend more on the human participants than on the computer program itself. Given how gullible some people are, it might not measure much about the program at all. After all, no matter how outrageous satire articles get these days, somebody always seems to mistake them for real news. Is a test that leans so heavily on its human participants even meaningful?
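To make that concrete, here is a minimal sketch in Python, with entirely made-up numbers, of why a pass rate by itself is ambiguous: hold the program's "quality" fixed, and the headline percentage still swings above or below the famous 30 percent threshold depending entirely on who the judges are.

import random

random.seed(0)  # reproducible runs

# Hypothetical: the chance the bot fools a maximally skeptical judge.
BOT_QUALITY = 0.2

def judge_fooled(gullibility: float) -> bool:
    # A judge is fooled with probability rising from BOT_QUALITY
    # (at gullibility 0) toward 1.0 (at gullibility 1).
    p_fooled = BOT_QUALITY + (1 - BOT_QUALITY) * gullibility
    return random.random() < p_fooled

def pass_rate(judge_pool: list[float], trials: int = 10_000) -> float:
    # Fraction of conversations in which a randomly drawn judge is fooled.
    fooled = sum(judge_fooled(random.choice(judge_pool)) for _ in range(trials))
    return fooled / trials

skeptics = [0.00, 0.05, 0.10]   # made-up low-gullibility judges
credulous = [0.30, 0.50, 0.70]  # made-up high-gullibility judges

print(f"skeptical judges fooled: {pass_rate(skeptics):.0%}")   # roughly 24%
print(f"credulous judges fooled: {pass_rate(credulous):.0%}")  # roughly 60%

Same program, very different headline: one judge pool keeps it under 30 percent, the other sails past it.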


Just a thought.


Published on June 11, 2014 17:00