Summary by Abhishek Das 7 years ago
- Turing, in his 1950 MIND paper, proposed an operational, behavioral alternative to the philosophical question "Can machines think?": a simple "Turing test" in which machines play the "imitation game" and humans must tell machine from human from their responses alone. He considered even partial success a distant goal, predicting that in about fifty years machines might fool an average interrogator roughly 30% of the time after only five minutes of questioning.
- The Turing test still hasn't been passed (systems like Siri and Watson succeed only in restricted settings), but Turing's prediction that "one will be able to speak of machines thinking without expecting to be contradicted" has come true: "smart" computers have become commonplace.
- One reason the Turing test hasn't been passed is the way today's intelligent systems fail. Their capabilities are limited in the types of questions they can handle, the domains they cover, and their ability to cope with unexpected input. Worst are the failure cases where the system "doesn't know that it doesn't know," making humans exclaim how stupid it is.
- There is a realization that computers and humans have separate strengths, weaknesses, and roles. Moreover, language is inherently social, tied to communicative purpose and human cooperation; it is intentional behavior, not just stimulus-response. Language use also assumes that participants have models of each other, models that influence what they say and how they say it. In retrospect, Turing's imitation game misses these aspects. The Jeopardy! challenge was clever precisely in avoiding dialogue context and the need to model other participants' behavior.
- Another big change: rather than isolated input-output interactions between a human and a computer, humans and computers today operate together in "mixed" networks.
- Desirable properties of a present-day Turing test: an interactive setting, language in real use (rather than success at a game), and human-machine collaboration.
- Proposed Turing test: "Is it imaginable that a computer (agent) team member could behave, over the long term and in uncertain, dynamic environments, in such a way that people on the team will not notice it is not human?"
- This does not ask the machine to look human, act human, or be mistaken for one; it only asks that its non-humanness not hit people in the face. Its behavior shouldn't baffle teammates, leaving them wondering not about what it is thinking but whether it is. Such a system would also need a model of its teammates' knowledge, abilities, preferences, etc. (a toy sketch of such a model follows below).
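
The paper proposes no implementation, but to make the last point concrete, here is a minimal, hypothetical Python sketch of what a teammate model tracking knowledge, abilities, and preferences might look like. All names, fields, and the decision rule are illustrative assumptions, not anything from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class TeammateModel:
    """Hypothetical model an agent keeps of one human teammate."""
    name: str
    knowledge: set = field(default_factory=set)      # facts the teammate is believed to know
    abilities: set = field(default_factory=set)      # tasks the teammate can carry out
    preferences: dict = field(default_factory=dict)  # e.g. {"verbosity": "low"}

def should_volunteer(model: TeammateModel, fact: str) -> bool:
    """Volunteer a fact only if the teammate likely lacks it and welcomes updates.
    A crude stand-in for the other-modeling the summary describes."""
    already_known = fact in model.knowledge
    wants_updates = model.preferences.get("verbosity", "high") != "low"
    return not already_known and wants_updates

# Usage: the agent stays quiet about facts the teammate already has,
# and speaks up about new ones, so its behavior doesn't baffle the team.
alice = TeammateModel(name="Alice",
                      knowledge={"deadline moved to Friday"},
                      preferences={"verbosity": "high"})
print(should_volunteer(alice, "server 3 is down"))          # True
print(should_volunteer(alice, "deadline moved to Friday"))  # False
```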