personalities of the two competing corporate chieftains in these contrasting approaches. At Apple, Steve Jobs saw the potential in Siri before it was even capable of recognizing human speech and focused his designers on natural language as a better way to control a computer. At Google, by contrast, Larry Page has resisted portraying a computer in human form.
How far will this trend go? Today it is anything but certain. Although we can already chatter with our cars and other appliances using limited vocabularies, computer speech and voice understanding remain a niche in the world of “interfaces” that control the computers surrounding us. Speech recognition clearly offers a dramatic improvement in busy-hand, busy-eye scenarios for interacting with the multiplicity of Web services and smartphone applications that have emerged. Perhaps advances in brain-computer interfaces will prove useful for those unable to speak, or when silence or stealth is needed, such as counting cards in blackjack. The murkier question is whether these cybernetic assistants will eventually pass the Turing test, the metric first proposed by mathematician and computer scientist Alan Turing to determine whether a computer is “intelligent.” Turing’s original 1950 paper has spawned a long-running philosophical discussion and even an annual contest, but today what is more interesting than the question of machine intelligence is what the test implies about the relationship between humans and machines.
Turing’s test consisted of placing a human before a computer terminal to interact with an unknown entity through typewritten questions and answers. If, after a reasonable period, the questioner was unable to determine whether he or she was communicating with a human or a machine, then the machine could be said to be “intelligent.” Although the test has several variants and has been widely criticized, from a sociological point of view it poses the right question: what it reveals is about the human, not the machine.
In the fall of 1991 I covered the first of a series of Turing test contests sponsored by a New York City philanthropist, Hugh Loebner. The event was held at the Boston Computer Museum and attracted a crowd of computer scientists and a smattering of philosophers. At that point the “bots,” software robots designed to participate in the contest, weren’t much further advanced than the legendary Eliza program written by computer scientist Joseph Weizenbaum during the 1960s. Weizenbaum’s program mimicked a Rogerian psychotherapist (a person-centered school of psychotherapy focused on persuading a patient to talk his or her way toward understanding his or her actual feelings), and he was horrified to discover that his students had become deeply immersed in intimate conversations with his first, simple bot.
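Eliza’s illusion of understanding rested on little more than keyword matching and pronoun reflection: find a trigger phrase in the patient’s statement, swap “I” for “you,” and echo the rest back as a question. The fragment below is a minimal sketch of that trick in Python; it is not Weizenbaum’s actual code, and its patterns and responses are invented purely for illustration.

```python
# A minimal Eliza-style sketch: keyword matching plus pronoun reflection.
# Not Weizenbaum's original program; rules here are illustrative only.
import re

# Swap first- and second-person words so a statement can be echoed back.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# Ordered (pattern, response template) pairs; the first match wins.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # the all-purpose Rogerian nudge

def reflect(fragment: str) -> str:
    """Flip pronouns in the captured fragment word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return a reflected question for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT

if __name__ == "__main__":
    print(respond("I feel nobody listens to me"))
    # -> "Why do you feel nobody listens to you?"
```

A handful of such rules, applied in order, was enough to keep Weizenbaum’s students talking; the program never understood a word.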
But the judges for the original Loebner contest in 1991 fell into two broad categories: computer literate and computer illiterate. For human judges without computer expertise, it turned out that for all practical purposes the Turing test was conquered in that first year. In reporting on the contest I quoted one of the nontechnical judges, a part-time auto mechanic, on why she had been fooled: “It typed something that I thought was trite, and when I responded it interacted with me in a very convincing fashion,”5 she said. It was a harbinger of things to come. We now routinely interact with machines simulating humans, and they will continue to improve at convincing us of their faux humanity.
Today, programs like Siri not only seem almost human; they are beginning to make human-machine interactions in natural language seem routine. The evolution of these software robots is aided by the fact that humans appear to want to believe they are interacting with humans even when they are conversing with machines. We are hardwired for social interaction. Whether or not robots move around to assist us in the