physical world, they are already moving among us in cyberspace. It’s now inevitable that these software bots—AIs, if only of limited capability—will increasingly become a routine part of daily life.
Intelligent software agents such as Apple’s Siri, Microsoft’s Cortana, and Google Now are interacting with hundreds of millions of people, by default defining this robot/human relationship. Even at this relatively early stage, Siri has a distinctly human style, a first step toward the creation of a generation of likable and trusted advisors. Will it matter whether we interact with these systems as partners or keep them as slaves? While there is an increasingly lively discussion about whether intelligent agents and robots will be autonomous—and if they are autonomous, whether they will be self-aware enough that we need to consider questions of “robot rights”—in the short term the more significant question is how we treat these systems and what the design of those interactions says about what it means to be human. To the extent that we treat these systems as partners, it will humanize us. Yet the question of what the relationship between humans and machines will be has largely been ignored by much of the modern computing world.
Jonathan Grudin, a computer scientist at Microsoft Research, has noted that the separate disciplines of artificial intelligence and human-computer interaction rarely speak to one another. 6 He points to John McCarthy’s early explanation of the direction of artificial intelligence research: “[The goal] was to get away from studying human behavior and consider the computer as a tool for solving certain classes of problems. Thus AI was created as a branch of computer science and not as a branch of psychology.” 7 McCarthy’s pragmatic approach can certainly be justified by the success the field has had in the past half century. Artificial intelligence researchers like to point out that aircraft can fly just fine without resorting to flapping their wings—an argument that asserts that to duplicate human cognition or behavior, it is not necessary to comprehend it. However, the chasm between AI and IA has only deepened as AI systems have become increasingly facile at human tasks, whether it is seeing, speaking, moving boxes, or playing chess, Jeopardy!, or Atari video games.
Terry Winograd was one of the first to see the two extremes clearly and to consider the consequences. His career traces an arc from artificial intelligence to intelligence augmentation. As a graduate student at MIT in the 1960s, he focused on understanding human language in order to build a software equivalent to Shakey—a software robot capable of interacting with humans in conversation. Then, during the 1980s, in part because of his changing views on the limits of artificial intelligence, he left the field, shifting his perspective from AI to IA. Winograd walked away from AI in part because of a series of challenging conversations with a group of philosophers at the University of California. As a member of a small group of AI researchers, he engaged in weekly seminars with the Berkeley philosophers Hubert Dreyfus and John Searle. The philosophers convinced him that there were real limits to the capabilities of intelligent machines. Winograd’s conversion coincided with the collapse of a nascent artificial intelligence industry, a downturn that came to be known as the “AI Winter.” Several decades later, Winograd, who was faculty advisor for Google cofounder Larry Page at Stanford, famously counseled the young graduate student to focus on the problem of Web search rather than on self-driving cars.
In the intervening decades Winograd had become acutely aware of the importance of the designer’s point of view. The separation of the fields of AI and human-computer interaction, or HCI, is partly a question of approach, but it’s also an ethical stance about designing humans either into or out of the systems we create. More recently at