Like great art, great thought experiments have implications unintended by their creators. Take philosopher John Searle’s Chinese room experiment. Searle concocted it to convince us that computers don’t really “think” as we do; they manipulate symbols mindlessly, without understanding what they are doing.
Searle meant to make a point about the limits of machine cognition. Recently, however, the Chinese room experiment has goaded me into dwelling on the limits of human cognition. We humans can be pretty mindless too, even when engaged in a pursuit as lofty as quantum physics.
Some background. Searle first proposed the Chinese room experiment in 1980. At the time, artificial intelligence researchers, who have always been prone to mood swings, were cocky. Some claimed that machines would soon pass the Turing test, a means of determining whether a machine “thinks.”
Computer pioneer Alan Turing proposed in 1950 that questions be fed to a machine and a human. If we cannot distinguish the machine’s answers from the human’s, then we must grant that the machine does indeed think. Thinking, after all, is just the manipulation of symbols, such as numbers or words, toward a certain end.
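Turing's proposal is, at bottom, a blind-comparison protocol: the judge sees only answers, never the answerer. As a rough illustration (not anything Turing or Searle wrote), here is a minimal Python sketch of that imitation game; the names, such as turing_test and the toy respondents, are invented for the example.

```python
import random

def turing_test(judge, human, machine, questions):
    """Hypothetical sketch of the imitation game: a judge poses questions
    to two hidden respondents and tries to say which one is the machine."""
    # Hide the respondents behind anonymous labels, in random order.
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:
        respondents = {"A": machine, "B": human}

    # Collect each hidden respondent's answers to the same questions.
    transcripts = {label: [answer(q) for q in questions]
                   for label, answer in respondents.items()}

    guess = judge(questions, transcripts)  # judge names the suspected machine
    machine_label = "A" if respondents["A"] is machine else "B"
    return guess == machine_label          # True if the machine was caught

if __name__ == "__main__":
    # Toy case: the machine's answers mimic the human's exactly,
    # so even many trials leave the judge at chance (~500 of 1000).
    human = lambda q: "I think " + q.lower()
    machine = lambda q: "I think " + q.lower()
    judge = lambda qs, ts: random.choice(["A", "B"])
    caught = sum(turing_test(judge, human, machine, ["What is thinking?"])
                 for _ in range(1000))
    print(f"machine identified in {caught}/1000 trials")
```

On Turing's criterion, a judge who can do no better than chance must grant that the machine thinks; Searle's Chinese room is aimed precisely at that inference.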