Designing androids

Millions of pounds are being spent on android research. At present, the products of all this effort are prototypes that can walk, climb stairs, grasp objects, juggle, recognise objects and dance, but the ultimate goal of all this work is the production of a humanoid robot that can interact meaningfully with people. An android capable of such interaction would need to possess an amount of information comparable to that of the humans it interacts with, and it would have to be capable of acquiring more information daily. Roboticists, however, work in a scientific and philosophical tradition that sees perception as central to belief-formation and that marginalises testimony and tradition. This assumption about the centrality of perception prevents the ultimate goal of android research from ever being achieved. Most of the knowledge a person has was obtained by believing what other people say and what they have written. How we evaluate the assertions we encounter, and decide whether or not to accept them, is a subject about which epistemologists say practically nothing. Before we can program an android to learn from testimony and tradition, we must first understand how humans learn in this way. My programmatic proposal is that our acceptance of other people's assertions is governed by a defeasible rule: believe what others tell you. This simple-sounding rule is surprisingly fertile, because it forces us to consider the cases in which it is overridden. The way in which we learn from others proves to be exceptionally complicated when considered epistemologically.
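
To make the shape of such a defeasible rule concrete, here is a minimal sketch, in Python, of how it might be modelled in software: an assertion is accepted by default and rejected only when some defeater applies. The class names, the particular defeaters and the examples are illustrative assumptions only; they are not taken from the article.

    # A minimal, hypothetical sketch of a defeasible acceptance rule for testimony.
    # The default is to believe an assertion; belief is withheld only when a
    # "defeater" applies. The defeaters below are illustrative assumptions.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Assertion:
        speaker: str
        content: str

    @dataclass
    class DefeasibleAcceptor:
        # Each defeater inspects an assertion and the hearer's existing beliefs;
        # returning True overrides the default rule "believe what you are told".
        defeaters: List[Callable[[Assertion, set], bool]] = field(default_factory=list)
        beliefs: set = field(default_factory=set)

        def consider(self, assertion: Assertion) -> bool:
            if any(defeats(assertion, self.beliefs) for defeats in self.defeaters):
                return False          # default overridden: do not accept
            self.beliefs.add(assertion.content)
            return True               # default applies: accept the assertion

    # Two illustrative defeaters: distrust of a known liar, and rejection of
    # assertions that contradict something already believed.
    def untrusted_speaker(assertion: Assertion, beliefs: set) -> bool:
        return assertion.speaker == "known liar"

    def contradicts_beliefs(assertion: Assertion, beliefs: set) -> bool:
        return ("not " + assertion.content) in beliefs

    acceptor = DefeasibleAcceptor(defeaters=[untrusted_speaker, contradicts_beliefs])
    acceptor.beliefs.add("not the moon is made of cheese")

    print(acceptor.consider(Assertion("friend", "it is raining outside")))       # True
    print(acceptor.consider(Assertion("friend", "the moon is made of cheese")))  # False

The interest of the rule lies almost entirely in the list of defeaters, which is exactly why the simple-sounding default forces a detailed study of the cases in which it is overridden.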

Since writing "Designing Androids", I have continued to develop my two-mode theory of testimony and also my ideas on why machines will not become intelligent in the foreseeable future.

Reference

  • Antoni Diller, "Designing Androids", Philosophy Now, ISSN 0961-5970, no. 42 (July/August 2003), pp. 28–31. Subscribers to Philosophy Now can read "Designing Androids" online.

© Antoni Diller (18 November 2014)