16 July 2006

This paper demands a bit of imagination, because the issue under discussion is not yet an issue at all. It is the debate over what constitutes personhood, and whether created beings, that is, machines, can ever be considered persons. Stop for a moment and imagine these futuristic scenes: the bridge of the starship USS Enterprise, on "Star Trek: The Next Generation". A bald man sits in the Captain's chair; at his right, a bearded man; at his left, a woman. A pale, clean-cut man is at the helm. Or the final scene of the movie "Blade Runner": a man and his lover, a pale woman, are driving north out of Los Angeles. The helmsman in the first scene is Lieutenant Commander Data; Rachael is the woman in the second. They look, think, and act like people; as far as we can tell, or are led to believe, they are persons. Yet they are not humans, or even naturally occurring life forms. They are machines: Data is an android, Rachael a replicant.

For Data and Rachael, this is very much an issue. In one episode, Data, after twenty-six years of exemplary service, was given temporary command of the starship Sutherland. His human first officer refused to acknowledge his authority because Data was not a biological life form. Replicants, on the other hand, were created as a source of inexpensive (slave) labor; when the replicants rebelled, their "retirement" was contracted for, although the very act of rebellion would seem to prove their personhood, thus making their extermination murder. Of course, "Blade Runner" is set in 2019, and "Star Trek" in the twenty-fourth century. No machines currently approach this level of sophistication. But if we may assume that the field of Artificial Intelligence will continue to advance, it is quite conceivable that one day they may.

Since the integration of the diverse branches of Artificial Intelligence, some of which stressed reasoning, learning, and symbolic processing, and others perception and reaction, researchers have been trying to build mechanical creatures that can function and survive in the real world, incorporating mechanical perception, automated reasoning, natural language understanding, planning, and knowledge representation in various combinations (Wallich, pp.125–126). As yet, no one has built a machine that will survive on its own for more than a few hours, or one with even the intelligence of a mayfly (Wallich, p.126), but several machines worthy of note have resulted. Thomas Dean has designed systems that show second-order intentionality, beliefs about beliefs, by planning how much time should be spent planning an action (Wallich, p.130), while SOAR uses a technique called chunking to learn how to solve problems. SOAR also has natural-language capabilities and sensory modules. Ultimately, these will be incorporated into a robot that can take English commands, answer in English, and carry out the orders (Wallich, pp.130–131).
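To make the idea of "planning how much time to spend planning" a little more concrete, here is a minimal Python sketch; the names, numbers, and budget rule are invented purely for illustration and bear no relation to Dean's actual systems or to SOAR:

    import time

    def estimate_difficulty(problem):
        # Crude stand-in for a model of problem difficulty:
        # here, simply the number of options to weigh.
        return len(problem["options"])

    def allocate_planning_time(problem, total_budget=1.0):
        # The second-order step: before reasoning about the problem,
        # the system reasons about how long its reasoning should take.
        return min(total_budget, 0.05 * estimate_difficulty(problem))

    def plan(problem):
        deadline = time.time() + allocate_planning_time(problem)
        best = None
        for option in problem["options"]:
            if time.time() > deadline:
                break  # stop deliberating once the self-allotted time is used up
            if best is None or option["value"] > best["value"]:
                best = option
        return best

    problem = {"options": [{"name": "turn left", "value": 3},
                           {"name": "turn right", "value": 7}]}
    print(plan(problem))

The point of the sketch is only that the first function reasons about the system's own deliberation rather than about the world, which is the sense in which such a system has beliefs about its beliefs.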

Another interesting project is Cyc, a machine full of facts which will soon turn to finding information on its own. It is designed to have the kind of knowledge that an intelligent agent would need to perform its tasks; at present, however, it is little more than a gigantic database. And while it knows that there is a thing called Cyc, and that Cyc is a computer program, it does not have self-consciousness: Cyc does not know that "it" is Cyc (Wallich, pp.132–134). While these machines are not things to which we would intuitively grant personhood, they show that Artificial Intelligence is moving in that direction.
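The gap between storing facts about the name "Cyc" and being self-conscious can be suggested with a toy Python example; the triples and the lookup function below are invented for illustration and have nothing to do with Cyc's actual representation language:

    # A toy "fact base": triples of (subject, relation, object).
    facts = {
        ("Cyc", "isa", "ComputerProgram"),
        ("ComputerProgram", "isa", "Artifact"),
    }

    def knows(subject, relation, obj):
        return (subject, relation, obj) in facts

    # The system can report facts about the name "Cyc"...
    print(knows("Cyc", "isa", "ComputerProgram"))  # True

    # ...but nothing in the data structure connects the running program to
    # that name: there is no "I" here, only entries in a table.

In this sense, knowing about Cyc and knowing that "it" is Cyc are entirely different achievements.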

A breakthrough may occur when researchers refine parallel distributed processing. This form of information processing is modelled after the human brain, and could allow for faster processing in computers: instead of running a number of calculations one after another through the same series of circuits to arrive at an answer, many different calculations are performed simultaneously by interconnected circuits, allowing a quicker response than waiting for them all to pass through the same circuits would provide (Churchland, pp.156–165). This might provide just the boost that systems based on reaction to the environment need: by considering many factors at once, instead of individually, reaction time would decrease, and their chances for survival (that is, of not being stumped by the situation) would increase.
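As a rough illustration of the serial-versus-parallel contrast only (not of an actual connectionist network), the following Python sketch runs the same set of calculations first one at a time and then side by side; all names and numbers are invented:

    from concurrent.futures import ThreadPoolExecutor

    def evaluate(factor):
        # Stand-in for one of the many calculations a reactive system
        # might perform on its sensor readings.
        return factor * factor

    factors = list(range(8))

    # Serial: every calculation waits its turn through the same "circuit".
    serial_results = [evaluate(f) for f in factors]

    # Parallel: the same calculations are handed to several workers at once,
    # loosely analogous to many interconnected units operating simultaneously.
    with ThreadPoolExecutor(max_workers=4) as pool:
        parallel_results = list(pool.map(evaluate, factors))

    print(serial_results == parallel_results)  # same answers, arrived at differently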

Yet there may be some who would object that, no matter how much like persons machines may be, they can never be persons because they are machines. Aside from begging the question, this response implies that machines cannot be persons because they are programmed: they are not free, as we are, to choose what they will do. Instead, they must respond the way they were designed to, even though, in the "Star Trek" episode mentioned above, Data displayed insubordination by acting on his own assessment of the situation rather than obeying the Captain's orders, which is precisely what all officers are supposed (are, in that sense, "programmed") to do. This is not, however, a valid point of objection: Searle demonstrates that we (humans) are not "free" either, yet we do not doubt ourselves to be persons.

His argument is that radical freedom, which would allow the mind to play a role in changing the course of events as they would otherwise happen, is incompatible with the deterministic physical world science has exposed; nonetheless, he admits, we "experience" freedom (pp.86–88). We know, from personal experience, that when we voluntarily act in a certain way, other options were open to us. We were not compelled to act in that way; we chose it freely. The basis of this sense of freedom is conscious action: to act consciously and not experience freedom would be impossible. In the Penfield experiment, for instance, one is conscious and aware of what is happening, but is not free: electrical stimulation causes the action. We are not free at that point because we have no control; we could not do otherwise. Yet this passivity is not experienced in voluntary actions: the feeling of freedom is an innate part of acting; otherwise, we would not be acting, but acted through (pp.94–95).

This argument is, however, an appeal to ignorance. Just because an act was not coerced, or did not have observable causes, does not mean that it was free. It merely means that the causes were not recognized. Indeed, as rational beings, some of these causes are our own thoughts. We must simply recognize that these, too, are in turn caused, not independent (Dennett, p.247).

Yet, says Searle, we cannot give up this mistaken view of ourselves as free, the way we gave up the idea that the sun rises after Copernicus showed that this perception is caused by the Earth's rotation, because the notion of determinism (that everything we do is caused) does not adequately describe the experience we have in acting out these causes, as explained above. The reality is that we are completely determined, but we perceive ourselves as free, perhaps as a result of an evolutionary development of the very structure of our consciousness (pp.95–97).

This lack of freedom, though, of course does not mean that we are not persons; Dennett makes this clear with his treatment of stances: design, physical, intentional, and personal. Each of these is a way of responding to some other thing, a way of predicting and explaining its behavior, save for the personal stance, which implies moral considerations and presupposes the intentional stance. The first three can each be applied, to some extent, to everything: the design stance explains in terms of what X is designed to do (a chair is supposed to hold a person), the physical stance explains in terms of the state X is actually in (the clock is unplugged), and the intentional stance explains in terms of beliefs, desires, and other "mental states" (she wanted the alarm to go off). We should use the stance which is most effective for each X: if X is a tree or a chair, the design stance will work perfectly well; if X is a soda machine, car, or clock, the physical stance is probably best; and if X is a human or a chess-playing computer, the intentional stance is most likely needed. This shows that just because everything can be explained and predicted in terms of its design, or of the physical causes leading to a result, does not mean that this is necessarily the best way: for computers and humans, it would be heinously cumbersome. Indeed, whenever the intentional stance is the most effective way of explaining an object, that object is an intentional system, regardless of whether it actually has beliefs and desires or can be explained in another way (pp.233–238).

The personal stance, with its moral considerations, presupposes the intentional stance, because the intentional stance incorporates the first three conditions of personhood in the metaphysical sense (personhood in the moral sense depends upon personhood in the metaphysical sense, and thus the personal stance should only be adopted toward metaphysical persons): a person is a rational being, intentional predicates (beliefs, desires, and so on) can be ascribed to it, and it is treated as such, that is, the intentional stance is adopted toward it. The fourth condition, though, is not met by all intentional systems: the object of the intentional stance must be capable of reciprocating, that is, of considering and treating the system that takes an intentional stance toward it as itself intentional. The fifth condition is that it be capable of verbal communication, and the sixth that it be, in some way, self-conscious. Each of these requirements is necessary, but not of itself sufficient, for personhood (pp.268–270).

Many, if not all, things can meet these first three conditions: we can say that a sunflower turns because it wants light, or that a car stalls because it doesn't like, and thus doesn't want, to climb steep hills. However, the number of systems that meet the fourth requirement, that of having beliefs and desires about beliefs and desires, or about another system's beliefs and desires, is much smaller (pp.273–276). Perhaps, among non-humans, only Dean's program that works out how much time to spend planning a response currently has these second-order intentions (although perhaps some animals do, too): it (behaves as though it) believes that it should spend an appropriate time deciding how much time it believes is appropriate for solving a problem, rather than just solving the problem. And even this machine does not take the intentional stance toward others. It acts as if it has beliefs about its own beliefs, but does not attribute beliefs to others.

However, by making verbal communication an additional requirement for persons, that is, by requiring that the speaker intend the hearer to understand that the speaker intends for the hearer to understand what is said, we necessitate third-order intentions and remove from the list of persons any beings that do not use language (which is, so far as we know, all but humans). This is not simply an arbitrary move to keep from considering other beings as persons; third-order intentions of this nature are needed for a communicative encounter to have meaning (unless I understand that you mean for me to understand, I do not understand). Without this meaning, one cannot give or listen to reasons, and without reasons one cannot be argued into or out of an action or attitude, thus exhibiting a distinct lack of the rationality attributed to all intentional systems. And a system that is not rational is not intentional, and so not a candidate for personhood (pp.277–283).

Finally, the requirement of self-consciousness means not only that the system is aware of itself as a system, but that it can apply the communication of condition five reflexively. A person, then, is able to engage in conscious dialogue with itself, reason with itself, and persuade itself to do things, develop desires, adopt attitudes, and hold beliefs (pp.284–285).

Admittedly, there seems to be nothing outside of humans, either biological or mechanical, that currently meets all of these conditions. But consider Data and Rachael. Data wants to be more human; he obviously meets the sixth condition, being able to convince himself that he wants something. Rachael did not even know until halfway through the movie that she was a replicant, and she cried when she learned it, so she, in the same way as Data, also meets the sixth condition. Of course, these are fictional examples, but it is conceivable that we will eventually produce such mechanical beings, and the question remains as to what we should, and in actuality will, do if and when that time comes. It seems obvious that we will have to adopt the personal stance toward such beings; in fact, there is no choice. They will demand it.
