Mark D. White

Writer, editor, teacher

Thanks to Larry Solum's Legal Theory Blog, I became aware of F. Patrick Hubbard's new paper "'Do Androids Dream?': Personhood and Intelligent Artifacts," forthcoming in Temple Law Review, which considers the issue of granting the status of personhood to an artificial intelligence:

This Article proposes a test to be used in answering an important question that has never received detailed jurisprudential analysis: What happens if a human artifact like a large computer system requests that it be treated as a person rather than as property? The Article argues that this entity should be granted a legal right to personhood if it has the following capacities: (1) an ability to interact with its environment and to engage in complex thought and communication; (2) a sense of being a self with a concern for achieving its plan for its life; and (3) the ability to live in a community with other persons based on, at least, mutual self interest. In order to develop and defend this test of personhood, the Article sketches the nature and basis of the liberal theory of personhood, reviews the reasons to grant or deny autonomy to an entity that passes the test, and discusses, in terms of existing and potential technology, the categories of artifacts that might be granted the legal right of self ownership under the test. Because of the speculative nature of the Article's topic, it closes with a discussion of the treatment of intelligent artifacts in science fiction.

Skimming through this fascinating paper, I am especially grateful for the extended treatment (pp. 82-88) of Isaac Asimov and his conception of robotic artificial intelligence from his R. Daneel Olivaw novels (as well as his many short stories on robots), a longtime devotion of mine. (Did reading about the Three Laws of Robotics lead to my embrace of Kant later in life? Who knows…)

2 responses to “On Artificial Intelligence and Personhood (with thanks to Isaac Asimov)”

  1. Muireann Quigley

    This sounds like a really interesting paper. I can’t seem to get hold of it at the moment, but when I do I’ll be interested to see if the author thinks there is a difference between a computer system which has been programmed to outwardly appear as if it has the listed characteristics and one which somehow ‘develops’ them. I guess I’m thinking about some sort of difference between living an inauthentic versus an authentic life. Such a difference may not, of course, exist, but I’ll be interested to read this author’s views. Whether such a distinction can be made might have an impact on which entities we would want to assign legal status and legal protections to.
    I’ll also be interested to see if the author discusses the Turing test and whether he thinks it is relevant for identifying the characteristics that he is after. It may be, of course, that the Turing test is not relevant. It is controversial and does seem to set the bar for intelligence quite high. And if we are to use this as some sort of bar for artificial computers, then we might well ask why it ought not to be used for human computers.

  2. Mark D. White

    Thanks, Muireann – having just skimmed the paper myself, I’m not sure if he addresses those things, but I know Larry Solum does, especially with respect to John Searle’s Chinese room, in this paper: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1108671.
