18 July 2006

Armstrong, Nancy. Desire and Domestic Fiction: A Political History of the Novel. Oxford: Oxford University Press, 1987.

Armstrong follows Foucault in noting a change in sexuality coincident with the rise of the bourgeoisie. She, however, argues that this change created a new, feminine form of power: women were responsible for ordering private life, which included everything that was not business or politics. Sex was not 'repressed' but 'domesticated'; Armstrong reads the h[er]story of women in novels by, for, and about women, and claims that this history of sexuality has as much relevance and influence as the more familiar patriarchal economic history. The middle-class woman, as arbiter of social standards, wielded a vast but unrecognized power.


Brower, Reuben A. "From the Iliad to Jane Austen, via The Rape of the Lock." Jane Austen: Bicentenary Essays. Ed. John Halperin. Cambridge: Cambridge University Press, 1975.

This article may provide nothing more than a footnote: it draws an explicit connection between Austen and the Greeks by showing stylistic and thematic similarities between Austen and Pope, who translated the Homeric epics and provided England with its last great dose of Greek ideology. While such a connection is fortunate, it is not the connection I seek.

Foucault, Michel. The History of Sexuality. Trans. Robert Hurley. New York: Random House, 1978.

In volume one, Foucault states that his aim "is to examine the case of a society which has been loudly castigating itself for its hypocrisy for more than a century, which speaks verbosely of its own silence, takes great pains to relate in detail the things it does not say, denounces its power, and promises to liberate itself from the very laws that have made it function" (8). His subject is sex, and its relation to power; he links sexual repression to the rise of capitalism (and thus to the rise of the novel).

Volume one lays out his hypotheses (that sex was driven outside the realm of accepted discourse, thus becoming a much-discussed subject, and that 'perversion'--deviation from the marriage bed--became 'unnatural,' and thus fascinating) and a method for establishing free discourse on the subject.

The coincidence of a rising bourgeoisie repressing sexuality, however, immediately lends credence to the notion of a new value system for novels to represent.

The Persian Gulf War as Economic Imperialism

There were many justifications and rationalizations for the United States-led action against Iraq in 1991, including the moral imperative not to allow aggression. However, I would like to argue that it was essentially an act of economic imperialism. Imperialism is the acquisition of territory and suppression of its inhabitants (23 Oct), generally for economic purposes. Decolonization after World War II ended this direct control, but there are still indirect means of exploitation. A theoretical assessment of the period following decolonization defines Neo-Imperialism as any relationship of effective domination or control, political or economic, direct or indirect, of one state over another (26 Oct). The fundamental issue here is power: the ability of one state to make another do what the first wills it to (26 Oct). In this case, oil can be equated with power, as the OPEC oil embargo of 1973 demonstrated. If Iraq had been successful in annexing Kuwait, it would have controlled approximately one fifth of the world's available oil resources. This would have given Iraq an enormous amount of economic power; a petrochemically dependent world would, eventually, have to meet any demands Iraq might make.

One of the assumptions Neo-Imperialists make is that the interests of business and government are closely related in Neo-Imperialistic states. If it is true in this case that both have an interest in a secure oil supply, both would also apparently have an interest in the stability of the Middle East. While the Middle East has traditionally been a hotbed of conflict, until 1990 the power seemed to be relatively balanced. Had the Iraqi annexation of Kuwait been successful, however, it would have greatly increased Saddam Hussein's standing, both economically and politically. A successful occupation would not only have demonstrated his willingness and ability to do as he pleased in the region, but would also have given Arabs a leader to rally under in their conflict with Israel. These factors would have greatly increased Hussein's power in the region, perhaps enough not only to change the balance of power but even to create an Iraqi hegemony. This hegemony could, in turn, have been detrimental to the oil supply: it might have decreased the region's stability, and thus its oil-producing capabilities, or it could have given Iraq virtual control of the entire region and allowed it to control production however and for whatever reasons it chose.

I will begin my analysis at some point before Iraq actually moved to annex Kuwait, while this hegemony was still only a possibility. Yet this possibility would be threatening to the interests of oil-dependent first-world states, which need a steady supply of inexpensive petroleum. Since Iraq, before the invasion, had the fourth-largest standing army in the world while Kuwait was poorly defended, such a move, if Iraq chose to make it, was sure to be successful. There was also a somewhat legitimate border dispute between the two countries, which increased the likelihood of an Iraqi invasion. If a Neo-Imperialist society sees such a threat, it seems reasonable that it will take steps to avoid losing control of its resources. The problem then becomes one of logistics: how can the Neo-Imperialists achieve the secure oil supply they need without making their intentions of interference obvious? It seems possible that the United States, acting in Neo-Imperialist fashion, saw this possibility and took steps to protect its interests by setting Iraq up to invade Kuwait, so that it could be knocked down and eliminated as a threat. The border dispute provided an opportunity: the infamous State Department communication that "We don't get involved in border disputes" seemed to give Iraq a green light for its invasion. Once Iraq took that action, however, the United States led an international outcry against the violation of Kuwaiti sovereignty. This violation, which had occurred with what seems like not only the full knowledge but the blessing of the United States, was then used as justification for United Nations sanctions and the United Nations-approved use of force against Iraq.

To call this an economically motivated act of Neo-Imperialism is to say that the United States was re-establishing its dominance in its relationship with Iraq for the purpose of securing economic interests. That the United States proved itself dominant in this relationship is obvious from the conflict's result; that the economic goals were achieved can be surmised from the fall in gasoline prices since the conflict's end. Yet to call it an act of Neo-Imperialism, the connection between business and government interests must be made. This is a fundamental assumption of the Neo-Imperialist framework, and if it does not hold true, it is inappropriate to apply that framework. That George Bush's personal fortune was made in oil may or may not be relevant here; that United States automakers have successfully lobbied against increased mileage requirements, and that the government has been unenthusiastic at best about exploring other energy options, such as hemp, corn, solar, wind, and hydroelectric power--these factors seem relevant. They fit Lenin's economic perspective on Imperialism (23 Oct), which focuses on monopoly capitalism. The petrochemical energy industry, while not monopolized itself, has a monopoly on the energy market. To keep this monopoly intact, it needs access to petroleum; thus it would have a great interest in maintaining the security of Middle Eastern resources.

This war may have occurred without such capitalistic influences, but it seems likely that if the United States had adopted Jimmy Carter's energy program, thus reducing its dependence on petroleum, it would not have developed such a close relationship with Iraq, presumably designed to protect petroleum interests, in the first place. Instead, the United States built up Iraq's military during other conflicts and gave Hussein the strength to become a threat. Realizing too late what it had done, the United States had no choice but to destroy that threat or lose its power over the oil supply. Losing that power would not only hurt the United States economically; it would bring into question its political power and thus its international standing.

16 July 2006

Garis, Robert. "Learning Experience and Change." Critical Essays on Jane Austen. Ed. B.C. Southam. London: Routledge and Kegan Paul, 1968.

The learning experience--character development--drives Austen's novels: when heroines grow, the novels work. Key is 'sense'--seeing and behaving well. When heroines see well, behavior follows, and the emphasis is on seeing self and others as they really are and ought to be. The remainder of the paper demonstrates this theory in Sense and Sensibility, Pride and Prejudice, Mansfield Park, Emma, and Persuasion, as well as showing Austen's own growth as an author. The term sense, as Garis uses it, parallels nicely my notion of being, from the Greek.

Looser, Devoney. "(Re)Making History and Philosophy: Austen's Northanger Abbey." European Romantic Review 4.1 (Summer 1993): 34-55.

This article deconstructs the terms "history," "philosophy," and "novel" in an attempt to understand how Austen used, and related to, these concepts in her work. This provides insight into what Austen thought young women ought to study, and thus into her well-hidden political agenda. Looser also presents the novel as vying with the conduct book for readers, offering them something accepted as a special kind of truth: truth, rather than fiction, fact, or conjecture, and deserving respect on its own terms.

Nietzsche, Friedrich. The Birth of Tragedy. Trans. Francis Golffing. New York: Doubleday, 1956.

Nietzsche seems to have two major themes in this book--that Greek tragedy was a result of the conflict between opposing ideologies, and that its decline began with the ascendance of one over the other. I think there are several connections and parallels between his description of Greek tragedy and the rise of the novel.

Nietzsche tells of the Dionysian ritual, with its use of music, and claims that tragedy began with it. The music was provided by the chorus, which sang the story. The rituals of Dionysus, we are told, brought about a state like intoxication, in which the sense of the individual was lost in the larger community of being; the chorus was the whole audience, and acting out the play provided an intuitive glimpse of the metaphysical belief system. Plays, at this level, were probably no more than our current responsive-reading rituals.

This changed with the Apollonian influence, which was the power of dream, not intoxication: the power to see clearly, as embodied in the epic and in sculpture, and to notice, rather than lose, the individual particulars; it described reality rather than participating in it. Dionysus symbolized process; Apollo, the ideal as manifested in forms.

The blending of these two cultures brought mythology to the stage. Instead of just having a drunken camp-fire songfest, as Dionysus would, Apollo told the stories of great beings who had lived up to the ideal despite great consequences. These stories, however, only held the stage for two generations before losing contact with the orgiastic Dionysian spirit of music which had spawned them. Apollo took over when Socrates denied that Dionysus could provide true wisdom, but suggested that, through the knowledge of particulars, Apollo could. I agree with Nietzsche and Blake that Socrates was mistaken, as does an entire sect of Hinduism.

Reddy, T. Vasudeva. Jane Austen: The Dialectics of Self-Actualization in Her Novels. New Delhi: Sterling Publishers Private Limited, 1987.

This book studies Austen's heroines in terms of the choices they make and the development that results from them. Austen's heroines are doing what the Greeks did: trying to realize self-fulfillment, in spite of opposition from their social situations, by becoming increasingly self-aware. I think Reddy stole my thesis.

Todd, Janet. "Jane Austen, Politics, and Sensibility." Feminist Criticism. ed. Susan Sellers. Buffalo: University of Toronto Press, 1991.

Todd examines Austen's use of sensibility as a political tool. The sensibility Austen uses is tempered by reality; her heroines learn and grow rather than stagnantly screaming. She did not like sensibility, and used her work both to show its negative effects (e.g., Catherine Morland) and how they could be overcome. Sentimental literature not only reinforced conservative norms of aristocracy and airheadedness; it also, in many instances, enforced the emotions which did this upon the reader involuntarily, through its narrative techniques.

Austen detested sensibility, so her work, when it does show political colors, lines up against this "feminine" notion--thus making her appear more conservative than she may in fact have been. It also mirrors reality--a patriarchal reality--well, so her heroines do marry (only Emma, I understand, makes it on her own).

The Uberdog on Animal Farm
Jack London calls forth a two-sided critical response. He was an avowed socialist, yet his work is also often commented upon for its fierce strain of individualism. While London's realistic portrayal of nature can be seen as something organically American, it has intellectual roots in the European philosophy of Nietzsche and Spencer.

My intention is to explore, using London's own work, the implications of these two contradictory strains in his writing. Call of the Wild will serve as a basis for examining the individualism he derived from his reading; his socialistic pamphleteering provides material for examining the society he thought men should build.

These two strains of thought seem bound to clash. In a communal society, the needs of the individual are subordinate to the needs of society as a whole. This does not preclude outstanding achievement by the most gifted, but in spite of taking from each according to ability, it only rewards according to need. The individualist can no longer obey the law of club and fang, taking what is desired because the taking is possible. While such brutal measures may be necessary on the way to a Socialist state, what is then to become of them?

If Call of the Wild is read as an allegory, as it often is, we can see what happens when the superior individual, Buck, is turned loose on society. London provides two contrasting societies for Buck: the sled teams and the wolf pack. This paper will examine how London's individualism plays out in these two settings, which correspond to capitalistic and socialistic societies. Thus, the book will provide evidence of London's sense of the individual, and of his interpretation of the individual in society.

Richard Wright's Bigger Thomas is, unquestionably, a product of his environment. He grows up in almost exactly the same neighborhood as Studs Lonigan did; we already know that the environment here is not fully nurturing. By cramming a whole family into Studs' bedroom, giving them less money and less opportunity, marking them with a social stigma even worse than being Irish, and filling the boy with a burning rage against society, Wright all but guarantees that his protagonist will end up in worse shape than Studs. The only question is how.

Bigger Thomas is a product of his environment; he does not act of his own free will. He doesn't even discover free will until after he acts. No, he doesn't plan anything--everything he does is a response. If he wants to rob a store, it is because he is bored and needs cash; if he gets into a fight with his partners that makes them miss the hold-up, it is because he is scared. Likewise, he takes a job because his family will starve if he doesn't. He kills in the same visceral way--smothering the fear of discovery and accusation with a pillow. Remember, Bigger has been trying to do his job, trying to put Mary to bed because she was too drunk to do it herself. When blind Mrs. Dalton stops by the room, he panics at the thought that she might accuse him of raping Mary, and he stifles Mary's voice. He is too busy worrying about Mrs. Dalton to notice when Mary stops struggling. But once Mrs. Dalton is gone and Bigger realizes what he has done, he realizes his power over the world.

Bigger Thomas is a product of his environment. When the environment presents him an opportunity to make $10,000.00, he tries to cash in. He has been taught that Communists are bad, so he tries to blame them. He thinks that Bessie will get him caught, so he kills her. Now Bigger is thinking. This is slightly better than the purely reactive responses; Bigger is aware of his power, at least. He is now aware of his ability to influence the outside world. But Bigger is still not acting of his own free will.

Bigger Thomas is a product of his environment. He only comes to this realization as his story ends; his conversations with Mr. Max trigger the self-reflection which is necessary for free will. Without this awareness of how he has been controlled by his environment, Bigger would never be able to act in a way other than that indicated by those influences. Had he not made this realization, he would have been drawn to the pleas of his mother and the minister; he would have been terrified by the burning cross outside the courtroom. "But sometimes," Bigger tells Max, "I wish you hadn't asked me them questions. . . . They made me think and thinking's made me scared a little" (495).

But Bigger Thomas is a product of his environment. Even thinking doesn't change that. Bigger has been bred to hate by forces he cannot control. While he has no desire to kill, he accepts that he has killed and does what he consequently must. That he knows his actions are wrong is not enough to counter the forces of rage burning in his belly. This fire has been stoked by years of squalor, over-crowding, opportunities denied, and dreams deferred. While Bigger does realize that he can act otherwise, by then it is too late; the fire has already broken free, and is now just burning itself out.

Steinbeck's In Dubious Battle is not a book rife with imagery. The text relies heavily on dialogue; I can't count what the characters say as imagery, since imagery is an expository device. Imagery is used to describe, to provide a picture of what is being discussed. When Steinbeck writes "Lisa looked in, with bird-like interest," he is using the image of a bird to describe the girl. We can picture quick, jerky head movements, hesitating half-steps, and a rustling flutter as she sits. This is an effective use of imagery; it reinforces our idea of Lisa as a timid, cautious girl.

The characters in this book are fruit tramps. They work in orchards; they talk about their work. Much of the novel's exposition is given to describing the settings through which they move, which necessitates detailed description of the out-of-doors. The rest simply reports what they are actually doing. A paragraph from chapter 15 will work as an example of what Steinbeck does with his exposition:

Through the trees they could see Anderson's little white house, and its picket fence, and the burning geraniums in the yard. "No one around," said Jim.

We have four adjectives in this paragraph: little, white, picket, and burning. The whole scene creates a precise image, but there is nothing I would latch onto as imagery, per se, in it except the description of the flowers as aflame.

The most effective imagery, as a rule, is drawn from characters themselves: it rises naturally from what they say, what they do, and where they are. An author who can draw on these areas to create images, to make scenes clear without resorting to set-piece description, blesses his readers. That is what Steinbeck does. He uses food to explain, or stand in place of, the attitudes of the strikers; he uses the overflowing orchards as symbols for the crimes capital commits unthinkingly; he lets dialogue do the dirty work of setting tone throughout the book. He writes with great economy, not wasting words on narration when they can be spoken by a character. This gives his characters more depth and believability; it gives his readers a story that moves quickly from page to page; and it makes imagery difficult to discover.

The images we do find are almost all related to the earth: to the soil, vegetation, and animals. When Mac opens Anderson's gate, for instance, the hinges "growled": real imagery, even. More often, the images come from the characters, in dialogue: London calls Mills bombs "pineapples"; Lisa and Jim talk about cats; Mac tells Jim he stands out "like a cow on a side-hill."

Steinbeck's use of imagery, then, is subtle and atypical. By allowing his characters the freedom to speak, without the imposition of a heavy-handed narrative voice, Steinbeck shows us the images they see. This not only makes the story more vivid, as imagery ought, but it also strengthens the characters and keeps the plot moving without the distraction of set-piece description.

Inductivism was the school that thought science began with observations: once a sufficient number of varied observations had been made, and none of them falsified the theory, a universal law could be derived from them--that is, from observing nature, the way nature behaved could be understood. As the name implies, this view of science stood on induction, the making of generalizations. But as we have all found in everyday life, sometimes the generalizations don't hold true. Maybe your alarm clock has gone off every morning since you bought it. From this, you could inductively conclude that your alarm clock goes off every morning and will continue to do so. But if the power goes out one night and your clock stops, it won't go off the next morning. When you finally wake up, you will learn that your law has been falsified. While you may believe that your alarm will go off in the morning, you can never know that it will until it does. So the big problem Inductivism faced was not falsification--which merely proved that a theory was wrong--but the chance of future falsification. Inductivism was relying on the past to predict the future, when there was really no reason to believe that the future would be anything like the past. An inductive argument can never prove something conclusively; it can only show what is likely to happen.

Falsificationism was meant to be a better--more accurate--view of how science really works. For the Inductivist, anything involving a sufficient number of varied observations which don't falsify the generalizations, and which makes predictions that are either proven or disproved by observation, is a science. But this leads to calling some really pointless data-gathering exercises "sciences," and that didn't seem right. Yet the criteria were also too narrow, because no number of observations could prove a generalization. So a new demarcation for science was sought--new criteria for qualification as a "science"--and the Falsificationists decided that if a statement is scientific, we must be able to state which observations would prove it false. If a statement is unfalsifiable--if the results it predicts are unobservable, or will be true no matter what happens, or if it resorts to ad hoc defenses in staving off falsification--then it isn't scientific. And this seemed like a logical enough solution. After all, once an observation came up to falsify a theory under either view, that theory was kaput. Falsifications were death blows for any theory: they proved it wrong under both Inductivism and Falsificationism. But unlike Inductivists, who always had to worry that this might happen and spoil their pretty laws, a Falsificationist wasn't making a law. They had abandoned induction--and supposedly, all the problems that went with it--for the idea that one can't make laws that explain the universe, but only try to explain the universe, so let's get on with trying. It was the antithesis of Inductivism: since one observation can prove a theory false while infinitely many can't prove it true, stop trying to prove it and try to prove it false. If, after all efforts, it still hasn't been falsified--hey, it might be true. We don't know--and never will--but we'll assume it is anyway and go on.

Falsificationism does this by following standard scientific procedure--the scientific method we learn in junior high school. Falsificationists notice a problem, formulate a theory, and set about testing the hypothesis through experiment and observation. If the results falsify the original hypothesis, it is rejected; if it is supported--corroborated--further efforts are made to falsify it. If all attempts to falsify the theory instead corroborate it, it is conditionally accepted--provisionally, not as fact or as something true, but as the best description currently available. It must be remembered, though, that corroboration does not equal proof; a theory may be falsified at any time. Karl Popper was the chief spokesman for Falsificationism, and he maintained that it requires a critical attitude of scientists--a willingness to subject even their pet theories to strenuous examination, and a willingness to let them go when falsified. The critical attitude is much like the examined life Socrates advocated: a questioning, a seeking for new truth, open-mindedness in listening to criticism, a willingness to be wrong, and the resilience to try again. Without this attitude a scientist would cling blindly to her theory in the face of all evidence to the contrary, no truth would be found, and no progress would be made. Popper, in short, calls on scientists to be good sports.

Now, an Inductivist would look at this in wide-eyed wonder, because while it is supposed to get around the problem of induction--the fact that we can't base the future on the past because we can't conclusively prove that it will happen again--it is, in essence, doing just that. It seems to say that because we have never proven this doesn't work, it will keep working. However, the Falsificationist will quickly point out that while he accepts a tenet for practical reasons, as a foundation for further work, this doesn't make it true, doesn't claim it as true, and doesn't rule out change. "In fact," he might say, "we expect to do away with this, eventually--but right now it's the best we have." See, Falsificationism doesn't try to justify its conclusions; it doesn't even claim that they will continue to work. It uses induction, yes, but doesn't count on it. Sidestepping the problem of proof through induction is one advantage of Falsificationism, but there are others. It also allows for the "theory-ladenness" of observation, which was a criticism of the Inductivist's actual method, not her theoretical merit. In falsification, one is looking for specific things that are relevant to one's theory. Inductivists, on the other hand, are required to observe all things, pertinent or not--because they aren't supposed to know what is pertinent until after seeing it all--and base their conclusions on an unbiased assessment of everything observed. This was shown to be quite impossible in practice--every scientist studied has gone into experiments with an idea of what to look for. Falsificationism allows for the importance of theory in experimentation, and is in fact based on it (doing experiments which try to falsify, remember? Looking for things that don't jibe with the theory...). Finally, Falsificationists don't work in the historical vacuum of an Inductivist. They build on what has gone before, even while admitting that what they are building on may crumble under a new falsification at any time, thus bringing down their work, too. Meanwhile, the Inductivist has to start from observations, always, and each new Inductivist has to make her own observations before she can make a generalization. Falsificationism is much closer to what scientists really do.

If more than one theory is competing to explain the same phenomena, Falsificationists have criteria for choosing the better one: the degree of falsifiability--because the easier it is to falsify a theory, the easier it will be to prove that it isn't the best one, if it isn't--and its generality. The falsifiability of a hypothesis depends both on its clarity and its precision. If a theory is vague, it may fall into the error of claiming universal confirmation, like astrology--claiming that the results support its conclusion, no matter what those results are. If it is imprecise in its predictions, the results will be unobservable and thus unfalsifiable. Generality is desirable because a more general theory will explain more things. It offers an explanation for more occurrences, and also has more chances to be falsified. For example, if I have a theory about why roses are red, but Gregor Mendel has a theory that explains why roses are red, violets are blue, and the colors of all other flowers, too, his theory would be preferred. Not only would it explain much more than mine if it were right, but many more things could falsify it. Only roses could falsify mine, but roses, bluebells, hollyhocks, belladonna, jack-in-the-pulpit, or any other flower could falsify Mendel's. Finally, a theory will be rejected if the modifications it makes to avoid falsification are ad hoc--that is, if the consequences of the modifications aren't testable, or are no more testable than the original theory. Legitimate modifications are testable...

Utopia, the introduction to my copy tells me, means 'nowhere'. Apparently, Thomas More wrote it to give us an example of good government: "We made no inquiries, however, about monsters, which are common enough. Scyllas, ravenous harpies, and cannibals are easy to find anywhere, but it is not so easy to find states that are 'well and wisely governed'" (p. 4). This is a frame story. An ambassador from Henry VIII of England, named More, meets a traveler and invites him to dinner. Before the meal, they talk about his adventures, and focus on Utopia because it is the best-governed state he has seen. Before they do this, though, More asks why he doesn't work for a prince, as Machiavelli did. With his store of wisdom and experience, he could be a great help. The visitor responds with a bitterly accurate assessment of why he wouldn't: courtiers are after power. To keep their power, they would ridicule his good but different ideas (like not invading another country, since running one is more than job enough), and he would end up achieving nothing while being miserable. As it is, he is happy, and the princes can read Machiavelli if they really want sound advice.

I have trouble reading this as satire; I realize that criticism was at least a large part of the intent, and I definitely see the humor in More's names when I check the footnotes. I also see the criticism, especially in Book One. But perhaps I am too far removed from the system he is criticizing to really appreciate it.

Book One is the more enjoyable part of 'Utopia'. The dialogue gives it some feeling of interaction, unlike the cataloguing in Book Two. The dialogue also provides greater opening for humor. Also, while Book Two's demonstration of good government shows how the English system had gone wrong, I think that the direct discussion of it in Book One provides more effective criticism. Book Two describes More's fantasy, while in Book One he deals directly with the problems he sees in the current system.

I was drawn to 'Utopia' in an odd way: Abbie Hoffman's 'Revolution for the Hell of It' crystallized a discontent in me when I was eighteen, and that led me to look for, or at, alternatives. A book called 'Utopia', since the word has become synonymous with 'ideal society', seemed like an obvious place to start.

While Book One is more fun, it is Book Two, where More directly relates what the traveler has told him, that is really of interest. In this part, he simply describes everything about Utopia and its inhabitants, from their agriculture to their marriage customs and moral philosophy. I agree with much, if not most, of what he says. He presents a truly communist society: everyone works, and everyone takes what she needs. This is possible because the Utopians do take only what they need. They are not at all materialistic; their only greed is for knowledge and intellectual stimulation. They do not even really have a concept of money. Gold and silver are used for toilets and bondsmen's chains, and only spent on military expenses(which are only defensive). This moneyless society perfectly meshes with the ideals Hoffman gave me, and makes Utopia a place I really want to see before I die, like Paris and Rome and Alaska.

I do, however, disagree with several specifics within this wonderful system. For starters, they keep criminals as bondsmen. It isn't even particularly hard labor they're set to; conditions are infinitely better than the gulags. But whenever they are sentenced, they are sentenced for life. Any crime(that isn't a capital offense, like adultery) gets you life on the golden chain-gang. This is simply extreme. I realize how generous this is when compared to hanging by the neck until dead, dead, dead, and I know that the Utopians occasionally release bondsmen for good behavior, patience, and repentance, but it seems that they should weigh the sentences to reflect the severity of the crime. Eliminating crimes of property does eliminate many petty offenses, but some things are still worse than others.

My other major complaint involves religion, so it essentially undermines the entire book. More gives his Utopians religious freedom, but makes them gravitate by force of reason to the acceptance of one supreme(Judeo-Christian) being, and has them converting to Catholicism in droves as soon as the traveler exposes them to it. This, of course, reflects More's religious views, just as 'Island' most likely incorporates Aldous Huxley's views into his utopia. I happen to disagree with More's views. It doesn't seem possible to me, looking at Western history, to embrace the dualistic thinking of the Church and live in a perfect society at the same time. I say this because Christianity is a religion of oppression: saying 'Those who are last shall be first' makes being last bearable; it lets the oppressed feel that they will be vindicated for their suffering, once they are dead.

The Utopians are virtuous, yes. They liberate other countries from tyrants. But they subscribe to a world-view that makes oppression possible, and has led to much oppression. I do not think that their world-view is compatible with the ideal Utopia has come to imply. Still, I love this book. It speaks of one person's vision of a better world, a world we could live in, if we only gave up one thing. Getting rid of money is, I think, the first step to a true utopia: it would immediately make everyone equal in one respect; it would force a re-evaluation of needs and priorities; it would make people into ends, rather than means. This book gives me hope.

In the previous two novels, we have seen Studs Lonigan go from a boyhood full of potential to a manhood wasted on booze. Now, in this final book, we see the end to which this leads. To make his moralizing more effective, however, Farrell needs to make Studs more representative of America than the drunken Irish stereotype he has drawn thus far. He does this by confronting three subjects everyone experiences: death, love, and money.

The book opens with Studs returning to Chicago from a drinking buddy's funeral in Terre Haute, and it closes with him lying dead. It seems fairly obvious that all Americans will go through experiences like these--while we might not all have friends, we will all die.

On this opening trip back, though, we learn of Catherine--a new character. Catherine loves Studs. When he asks her, she agrees to marry him; she gives her body to him, and is carrying his child when he dies. Of course, this relationship isn't always rosy, but how many are? The engagement is even broken for a while. But when they are together, they do typically American things, like going to the movies, the World's Fair, and even a dance marathon.

Yet Studs becomes a representative American not through something he does, like dying or falling in love, but through what happens to him: the Great Depression. Because of this, he suffers. His father goes bankrupt; he loses his job. He loses money on the market. He even tries to get a sleazy job selling sanitary drinking cups. He can't afford to get married. Life is hard, as it was for most people during the Depression.

While these experiences don't change the fact that Studs has become a very limited character, they do make him into someone who can be identified with by more than just the Chicago Irish community. In this regard, Judgment Day is the best novel of the Studs Lonigan trilogy.

I consider the statement "Life is worth living" to be prima facie true. Perhaps that is why asking if life is meaningful gives me so much trouble. It is not a question of "is life worth living", or of what gives meaning to life, but of what makes life meaningful: what makes an individual life meaningful, and to whom. A meaningful life is not necessarily one that has meaning to the person living it, or one that is worth living, or one that is good. It may be any or all of these, but it is also something beyond that. It is a life that touches others, a life that is somewhat universally and historically significant, as if we were looking down, counting all the lives and going "Yup, that one's important"--it must have an effect on others to be meaningful(and perhaps obviously, the greater the effect, in the number of people affected, the magnitude of each effect, or both, the more meaningful the life). This is the only way we can tell, objectively, if a life is meaningful: through its effect on others.

Perhaps I should now distinguish between a meaningful life(one which has meaning to others) and a life that has meaning. Any life can have meaning: meaning may come from a sense of purpose, or a passionate involvement, or from looking for meaning in life. Meaning is objective, yes: it is derived from a nameable something; a life is, however, only meaningful hyper–objectively. Any number of things can give meaning to a person's life; none of this necessarily makes it(hyper–objectively) meaningful. Some may object to this, saying that because they think or feel that their lives have meaning, they 'do' have meaning, and/or are meaningful. However, this claim is entirely subjective; to 'have' meaning, one must have objective somethings giving life meaning, not just a feeling that it has meaning.

To clarify this, let me explain what I mean by a life touching others(a meaningful life), by saying that most people don't matter to me. This sounds harsh, but I would not be affected by the death of most individuals currently alive(nor was I affected by most people who have already died)--simply because they have had no part in or impact on my life. In fact, only a small number of persons throughout history have individually changed my life: Shakespeare, Stalin, Beethoven, Christ, and Abbie Hoffman come quickly to mind. Of course, this does not include persons I know, or my family. Let us now consider them. Aside from my parents, who would have influenced my genetics even if they had not raised me, how many of these people would have affected me if I had never met them? Of course, they 'did' affect me, because I did meet them, and thus they have been meaningful to my life. But you can see how few people are actually meaningful to me. It is the same for everyone, I am sure, including myself. Except through personal contact, I doubt I have affected anyone. And even among those I have been in contact with, and thus affected to some extent, only a few would flinch upon hearing of my death, and undoubtedly none of them would have had a much different life if someone else had been born in my place. While of course my life seems meaningful to me(I am the most important person in my world, meaning that I am the one I consider first and foremost), that is a biased and subjective judgment. Take me away from my life, and who does it matter to, now that it no longer matters to me? My life then, except to a very few, has not been meaningful. It has made no impression on, or required any response from, the lives of others. It can have meaning to me, but to be meaningful, it must be meaningful to someone else. It is even conceivable that a life could be devoid of any meaning(an infant, for instance), and yet still be somewhat meaningful(to the parents). 
The two are not necessarily related. Yet for an autonomous life to actually be meaningful, it seems that it must affect more people than those who would be affected by its passive existence(parents and nurses, for example). I say autonomous because some persons have not yet met this criterion(children, or some of the mentally handicapped), and I do not want to dismiss their obvious meaningfulness to those close to them. I do, however, include those who have lost, or given away, autonomy(the aged, those in comas, or the heroin junkie): their lives may have been meaningful, but that doesn't make them meaningful now--which is not to say that they no longer have meaning. They may. This stance comes from a belief that life is not necessarily meaningful, yet life is a good thing to have and thus worth living(this is not to say that other factors may not outweigh this intrinsic value, and it has nothing to do with my position on euthanasia, suicide, or abortion and infanticide). This conflict made me question not only what gives a life meaning, but what makes it meaningful. While looking for an objective standard of meaningfulness, I realized what my criterion was(affecting others), and found that most lives have the opportunity to be meaningful: the lives of parents are generally meaningful to their children and vice versa, as are those of friends, and teachers to students, et cetera, because of the influence each one has on another individual. Yet this means that regardless of how full of meaning a life is, it can only be(hyper-objectively) meaningful in the context of others. Something which gives meaning to my life does not necessarily make it meaningful to others. For example, if I were marooned alone on a desert island, I could find meaning for my life in the creative process of writing poetry(more on this below). But unless my work reached other people, it would not have any effect. 
My work, while being intrinsically valuable and giving me satisfaction(and giving my life meaning), would be meaningless.

Of course, life's being meaningful only in the context of others is contingent upon some value or meaning 'in' the lives of others. After all, if my life is meaningless without influencing others, but others are meaningless, I have no basis for being meaningful. Meaninglessness compounded upon meaninglessness does not create a meaningful anything, but only multiplies the meaninglessness. However, this does not really pose a problem. Because they are alive, most people are to some extent meaningful(whether or not their lives have meaning), as I have explained. But even if I am living my meaningless island life, Shakespeare is still meaningful to me, because I can derive meaning from reading and studying his work. Thus I have meaning, and Shakespeare has someone to be meaningful to.

Or perhaps this is really a question of whether or not humanity is meaningful. Allow me a return question: How would I be affected if there were no species Homo sapiens? The belief that life is not necessarily meaningful was crystallized upon reading Richard Taylor's 'The Meaning of Human Existence'. Taylor's main argument is that lives which have no purpose are meaningless, and that ultimately, human lives are no different from those of animals: we repeat a pointless cycle of actions, achieving nothing but our own continuation and the continuation of our species, driven only by instinctive desires, for life's entire duration. Even higher goals, like artistic creation or athletic feats, he says, are no more than a peacock's preening before a hen: a method of making ourselves attractive to the opposite sex. This is quite bleak, and if correct, life does seem rather meaningless. Yet Taylor says that some lives can have meaning, if they have an overriding purpose: striving toward a particular, realistic or attainable goal of the person's own choosing and design. In other words, he says that creative and intellectual work can give life meaning. Thus, according to Taylor, choosing to write poetry in my isolation would give my life meaning(even though it is no more than preening before an imaginary mate, by Taylor's own account). However, these standards still leave little hope that most people can lead lives which have meaning; most of us are not able to devote our lives to such pursuits.

While part of me agrees with this assessment(the part which makes a distinction between life having meaning for the one living it and actually being meaningful), another part of me was very glad to see Thomas Nagel's chapter, 'The Absurd', argue that some goals(or "pointless cycles") are intrinsically valuable or self–justifying, and that not even the highest of purposes(or one that is attainable, and both chosen and designed by an individual) is ultimately justifiable. One can, after all, ask why one is doing that which is most worthy of being done: what makes that the best thing to do? This satisfies my other belief: that even a meaningless life is worth living. The process of living is intrinsically valuable––or at least parts of it are. In another chapter, 'Death', Nagel expands on this by arguing that if death is an evil, it is only because it deprives us of life––thus ending "all the goods that life contains." These goods, or components of life, such things as thought, perception, and desire, are "widely regarded as formidable benefits in themselves;" they allow us to do and experience things. It is this ability to experience and do, he indicates, that makes life worthwhile even if what is being experienced is more unpleasant than pleasant: experience is worthwhile, regardless of its content. Life is good, then, because it gives us the opportunity to do things and thus experience things.

So life is worth living: it provides an opportunity for experience. But does what we experience have(or give us) meaning? Not necessarily. Meaning is like the bluebird of happiness: you can only catch a glimpse of it from the corner of your eye. It is not something you can simply 'have' or 'get'; it is derived from something else: active participation in or pursuit of something else gives life meaning. Thus it is that Taylor can argue that creative acts can give life meaning, while Jonathan Glover proposes that some forms of work can, and Peter Singer puts forward the pursuit of a moral life as a source of meaning. All these activities, and many others(love, or pursuing an education, for example), can give life meaning--they are all active approaches to life. They all require doing something--and thus also provide the opportunity, not only for life to have meaning, but for it to affect others and thus be meaningful. The secret, my friend, is involvement. But what would be a life without meaning? It would be a life of utter passivity; a life spent(or squandered) on the pursuit of nothing. Even a person who spends all her time avoiding challenges or activity does something: she avoids. A life without meaning would be spent in a natural stupor, a coma perhaps. I really don't know. Getting high or drunk gives an addict meaning(though not an admirable one); addicts at least do something actively, and they experience something. The meaning of life? There is no meaning of life. Life is a process; life simply is (so proceed). It can have meaning--you can find meaning in something other than life, by doing something which will give meaning to life--but there is no meaning of life, native to it. Meaning must be sought elsewhere, and may not always be found: not all lives will necessarily have meaning(though they may be enjoyed, and indeed, people may feel that they have meaning nonetheless), and likewise, not all lives will be meaningful. 
However, by pursuing something other than life, we are not only much more likely to find meaning in life, but to make our lives meaningful.

British theater had its heyday between 1580 and 1630, when Marlowe, Shakespeare, and Jonson were all writing for the stage. While the plays Dr. Faustus, Henry IV, and Volpone are all serious, each incorporates comedy as a means to its desired end.

Marlowe brought major innovations to the stage by resurrecting tragedy and introducing blank verse, yet Dr. Faustus can be read as a basic morality play. The plot is simple: Faustus gives his soul to Lucifer in exchange for twenty-four years of unlimited power. The tragedy is Faustus' struggle with accepting damnation; the moral is a warning against pride and the lust for power. The rest of the play is composed of humorous scenes. Many seem like filler; the low comedy of Robin, Dick, and their adventures adds little to the plot. They are, however, significant. Act I.iv is the most important of these. This scene parodies the action of Faustus: Wagner, imitating his master, uses magic to acquire his own slave. Having agreed, under pressure from the devils Wagner calls forth, the clown says, 'I will, sir. But hark you, master, will you teach me this conjuring occupation?' 'Aye, sirrah, I'll teach thee to turn thyself to a dog, or a cat, or a mouse, or a rat, or anything.' 'A dog, or a cat, or a mouse, or a rat! O brave Wagner!'(ll.37-43) Their dialogue shows the absurdity of using magic, and foreshadows the sophomoric pranks Faustus later plays on the pope, Robin, and Dick. This approach to a morality play is a reductio ad absurdum argument. Marlowe not only shows us the torment Faustus suffers from his choice, but uses these scenes to show us how little he really gains from it. For all his power and all his suffering, Faustus acts like a fool.

Shakespeare's Falstaff is also a fool, but Shakespeare uses comedy to show character development. Act I.ii shows both Falstaff and Hal in fine form, exchanging wordplay and making jokes about Hal's future. In wondering what kind of reign Hal will have, Falstaff says, 'Do not thou, when thou art king, hang a thief.' 'No, thou shalt.' 'Shall I? O rare! By the Lord, I'll be a brave judge.' 'Thou judgest false already; I mean thou shalt have the hanging of thieves and thus become a rare hangman.'(ll.50-55) Falstaff takes this, too, out of context, turning it into a job for himself, instead of a projected fate. At the end of the scene, however, Hal decides to take his role as prince seriously, and the rest of his humor is spent in an elaborate joke on Falstaff. When the joke is played out, he returns to Henry IV and reforms. Falstaff, on the other hand, continues his heavy drinking and refuses to take anything, including his own death in V.iv, seriously. This provides a foil for Hal, letting us see how much he has changed. When Falstaff, in the battle scene V.iii, tries to engage Hal in the kind of wordplay they had enjoyed earlier, he is cut off with, 'What, is it a time to jest and dally now?'(l.51) Hal does retain traces of his wit, finishing Hotspur's dying words with 'For worms'(V.iv l.87), but unlike Falstaff, who jokes about playing dead and killing Hotspur, his mind is now on other things. Like all of Shakespeare's work, the play is also liberally spiced with puns, like 'herein I will imitate the sun(son of a king)', I.ii l.164. These do not really add to the story, but they make the text engaging and thought-provoking.

Jonson's goal in Volpone is quite different: he uses an acidic wit to satirize social trends. Satire relies on comedy to hold an audience which might not otherwise want to hear itself criticized. Volpone might, because of its message about greed, be construed as another morality play, but unlike Dr. Faustus, where comedy provides a second line of argument, here comedy is the vehicle itself. Volpone is a wealthy man who gets money from people ('Vulture, Kite,/ Raven, and Crow')(I.ii ll.87-88) trying to buy their way into his will. He 'glory[s]/ More in the cunning purchase of [his] wealth/ Than in the glad possession'(I.i ll.30-32), a comment on ill-gotten gains, and lives frivolously, keeping a eunuch, a dwarf, and a hermaphrodite. Yet this play is not funny if one accepts the fictional premises. If we do not see something very wrong about Volpone's lifestyle, it is actually tragic. He dies. If one does, however, keep an idea of how the world ought to work in mind, the situation is hilariously wrong. One particularly bitter incident involves Volpone and Celia, the wife of a gold-digger. When Volpone hears of her beauty, he dresses as a street-hawker and goes to her house for a look(II.ii). She throws down a kerchief full of coins for his potion, and her husband flies into a jealous rage. Yet when, in II.vi, he hears that Volpone wants her to nurse him, he is anxious to prostitute her so he can gain favor. Jonson relies on ironies like this to build the play, giving him a base of subtle humor. He is more direct in dialogue: Volpone is mercilessly funny in describing his suitors as carrion eaters in I.ii; Lady and Sir Politic Would-Be continually spout embarrassingly silly lines, and the judges in acts IV and V sound like stupid old fools, repeating each other and asking what the laws are, instead of trying to discover truth. 
When they finally do pass judgment, in V.xii, it is only after Volpone has made such a fool of himself trying to regain his fortune, and Mosca a fool of himself trying to keep it, that guilt must show through their conflicting statements.

So it is that each of these great playwrights uses humor differently, yet effectively, in achieving his dramatic purpose. This may be a tribute to their genius, or perhaps it is due to the versatility of the comic device. If the former, we must simply stand in awe; if the latter, we are given some hope for the future of literature as well.

Chapter two, "The Rise of the Novel," in Terry Lovell's Consuming Fiction grounds the book in critical debate. The chapter addresses Ian Watt's book of the same name. By beginning with an assessment of what she feels is the primary work in her field, Lovell establishes both her authority in the field and the basic assumptions from which she will work. She sets forth, and tries to set right, what can be seen as flaws in Watt's book--attacks designed to expose and correct weaknesses which could otherwise topple his thesis, which she explains is that "the primary parenting of the novel. . . was performed by capitalism"(45).

Lovell begins by delineating the assumptions of Watt's thesis that the novel is a bourgeois form. These are, she says, that it was developed by, and for, the new middle class; that this development occurred simultaneously with the rise of a faceless audience; that it served the ideological needs of the bourgeoisie; and that the formal realism it displayed accurately reflected that general bourgeois outlook.

Lovell then defines Watt's central term, formal realism. This style developed from the philosophical belief that reality consists of particulars, rather than existing in abstract forms, and it was demonstrated by characteristics which serve to define the new novel for Watt. First, plots were new--created by the author, rather than recycled from earlier literature--and they mirrored the lives of real people. Real "people" were the characters, too: novels didn't rely on stock characters, but tried to create three-dimensional persons, and gave them real names instead of simple type-names. Third, the laws of cause and effect became the primary advancers of plot. Perhaps most noticeably, however, the action occurred in real places, and language was used to convey information about those places, so readers could find the story credible. Lovell concludes from this(22):

Watt's thesis, then, proposed a tight interconnection between three phenomena, all themselves directly or indirectly a function of the development of capitalism: the conventions of formal realism which he found to be characteristic of the early novel; the values and mental attitudes of the rising bourgeoisie which he characterized in terms of Max Weber's spirit of capitalism; and the shift in literary production to the commodity form, produced for an anonymous middle-class readership.

She then proceeds to identify what she calls "Some Problems in the Thesis." The first of these is with the term "Formal Realism," and its use to define the novel as a genre. Using any term described by a set of conventions, she says, will necessarily constrict the criteria for inclusion within a genre. On the other hand, if the definition is not sufficiently narrow, it loses its value as a definition. The problem with Watt is most apparent on the shelf in a bookstore: Frankenstein is on the same rack as Moll Flanders; both are under "Literature." Both are thought of, in general usage, as novels. The conventions of formal realism, however, exclude Shelley's work because it is "gothic."

If Watt were simply drawing literary conclusions, his abiding by literary conventions in choosing formal realism to define the novel would not be out of order. Since, however, Watt is examining the novel's history in a sociological context, he should be compelled to consider what the people of the time actually read. Pulp fiction exists because a market exists; pulp fiction tells us what that market wants to read. That market has never felt constrained by the conventions of formal realism, and formal realism does not accurately describe everything the market of this time demanded. Watt, to demonstrate his thesis fully, would need to expand consideration of what the novel is, to include other types of well-developed prose fiction.

Lovell's next set of criticisms comes under the heading "Spirit of Capitalism." The first of these deals with social class and authorship, because the act of writing for unknown readers makes one the producer of a commodity. The author is, definitionally, a member of the petty bourgeoisie. Yet this economic status is not reflected by social status: authors, even at this time, came from all levels of society, from John Bunyan in a prison cell to Jane Austen, the daughter of a clergyman. In fact, Lovell points out, most of the long fiction published at this time was produced not by the middle class as exemplified by Defoe, but by the remnants of the pre-capitalist aristocracy.

Also, by defining the novel in terms of formal realism, Watt ignores the fact that capitalism has two faces. The shining face capitalism shows the world extols the virtues of thrift, hard work, and persistence, but hidden behind it is the need for spending to feed the system. The tension this paradoxical situation creates is reflected in literature by the literary tension between the respectable art of formal realism and the exotic escapism of the gothic and other "romance"-type novels. Both are expressions of, and reactions to, the development of capitalism; to ignore one because it lacks respectability is foolish. It not only excludes from consideration a major portion of what the market demanded, but also categorically ignores the readers that market represents.

Lovell addresses this in her final section, "Women as Intellectuals." First, she asserts that women comprised a significant sector of the reading public. This became possible as the middle class, because of surplus earnings, was increasingly able to divide life into public and private spheres, thus not only removing wives from the workplace as workers, but also separating their homes from it. Women were then delegated the task of consuming the surplus, while the men went on creating it. The novel provided an easy entertainment; it was not too expensive, and could be enjoyed in pieces which fit nicely around other duties and activities.

And women also had the leisure to write. Not only were most writers from the gentry, but, in a fact which Watt brushes aside, most were female: "daughters of the middle class, aristocracy, and professions"(90). These women had the education to write, but were excluded from other intellectual activities, such as politics, and unlike men, were not pressured for immediate financial success. These factors allowed much to be written; to categorically deny that this work has value is an injustice.

Yet Watt's thesis that capitalism and the novel are undeniably linked is of value. His greatest problem is inconsistency: while his criteria for selecting the works to be studied are literary, his explanations are sociological. Lovell simply hopes to point out that a conflict does exist between these two modes. "His literary criterion of value is certainly open to question for its sexist bias," Lovell says. "But his sociological criteria should have compelled him to pay attention to the women writers he ignores"(44-5).

The Young Manhood of Studs Lonigan continues the misadventures of our young hooligan as he continues to grow up in his Chicago neighborhood. However, unlike the first book of this trilogy, we are not limited to a helmet-cam view of the world through Studs's eyes. In the first novel, all but two chapters are told from Studs's perspective: the second, which is given to his father, and the second to last, which focuses on Davey Cohen. The entire novel is placed in, and limited to, a small section of Chicago, and almost no reference is made to a world outside this neighborhood. YMSL, on the other hand, opens with an italicized chapter told from Lee Cole's perspective, and introduces World War I. Both are drastic deviations from the pattern previously established.

Of course, Studs is still the focus of this novel, and most of it is told from his point of view. That Studs is aware of, and interested in, something as important as the war is only natural; as a sixteen year old, he even tries to join the army. This may just be indicative of his discontent with his current situation; it may, on the other hand, show that Studs is growing as a human being, becoming aware of the larger world around him.

The italicized sections, however, are the more blatant attempt to expand the novel's scope. In these, we follow runaway Davey Cohen as he visits gutters around the country; we see race riots; we see the plight of a blacklisted union man as he worries what will become of his family. In these chapters, we see a different America. This is not the middle-class youth of America gone slumming; this is the low end of cut-throat capitalism's food chain. This is Davey Cohen, "so unhappy that he envied a dog"; this is Joe Lonigan making great sacrifices to send Tommy to high school, then having to borrow money from Studs's father to pay back someone Tommy had robbed. This is a black bank being blown up to get the 'nigger' out of a white neighborhood.

We get chapters, now, from Red Kelly's perspective; from Loretta Lonigan's, and even from that of the new St. Patrick's Church. If this isn't going beyond Studs's perspective, I don't know what would be. We know more, because we know what people other than Studs are thinking; still, we are in keeping with the expectations raised by the first novel, in that most of the book is told from his point of view. Farrell could have given this up, could have switched to an author-omniscient perspective for this book. But that would have violated reader expectations, and not necessarily expanded the novel's scope. What he has done seems a successful compromise, for the most part, between the desire to grow and the need to remain focused. While it is disconcerting to break the narrative, Farrell's technique forces attention away from Studs, and also provides a neat segue between disjointed episodes.

The best part of this technique, however, is that it allows Farrell to end the book with Stephen Lewis kicking a can down 58th Street, exactly as so many other kids do. That a boy can play in Studs's old neighborhood is only fitting, and shows how things continue as they always have, with the sole difference being his color. This irony would be lost on Studs, but almost makes me cry.

This paper demands a bit of imagination, because the issue under discussion is not yet an issue at all. It is the debate over what constitutes personhood, and whether created beings, that is, machines, can ever be considered persons. Stop for a moment and imagine two futuristic scenes. The first is the bridge of the starship USS Enterprise, on "Star Trek: The Next Generation": a bald man sits in the captain's chair, a bearded man at his right and a woman at his left; a pale, clean-cut man is at the helm. The second is the final scene of the movie "Blade Runner": a man and his lover, a pale woman, are driving north out of Los Angeles. The helmsman in the first scene is Lieutenant Commander Data; Rachael is the name of the woman in the second. They look, think, and act like people; as far as we can tell, or are led to believe, they are persons. Yet they are not humans, or even naturally occurring life forms. They are machines: Data is an android, Rachael a replicant. For Data and Rachael, personhood is very much an issue. In one episode, Data, after twenty-six years of exemplary service, is given temporary command of the starship Sutherland; his human first officer refuses to acknowledge his authority because Data is not a biological life form. Replicants, for their part, were created as a source of inexpensive (slave) labor; when they rebelled, their "retirement" was contracted for, although the very act of rebellion would seem to prove their personhood, and thus make their extermination murder. Of course, "Blade Runner" is set in 2021, and "Star Trek" in the twenty-fourth century; no machine currently approaches this level of sophistication. But if we may assume that the field of Artificial Intelligence will continue to advance, it is quite conceivable that one day some machine will.

Since the integration of the diverse branches of Artificial Intelligence, some of which stressed reasoning, learning, and symbolic processing, and others perception and reaction, researchers have been trying to build mechanical creatures that can function and survive in the real world, incorporating mechanical perception, automated reasoning, natural-language understanding, planning, and knowledge representation in various combinations (Wallich, pp.125–126). As yet, no one has built a machine that can survive on its own for more than a few hours, or that has even the intelligence of a mayfly (Wallich, p.126), but several noteworthy machines have resulted. Thomas Dean has designed systems that show second-order intentionality, beliefs about beliefs, by planning how much time should be spent planning an action (Wallich, p.130), while SOAR uses a technique called chunking to learn how to solve problems. SOAR also has natural-language capabilities and sensory modules; ultimately, these will be incorporated into a robot that can take English commands, answer them, and carry out the orders (Wallich, pp.130–131).

Another interesting project is Cyc, a machine full of facts, which will soon turn to finding information on its own. It is designed to have the kind of knowledge an intelligent agent would need to perform its tasks; as yet, however, it is little more than a gigantic database. And while it knows that there is a thing called Cyc, and that Cyc is a computer program, it does not have self-consciousness: Cyc does not know that "it" is Cyc (Wallich, pp.132–134). While these machines are not things we would intuitively grant personhood, they show that Artificial Intelligence is moving in that direction.

A breakthrough may come when researchers refine parallel distributed processing. This form of information processing is modelled on the human brain, and could allow for much faster computation: instead of running a series of calculations one after another through the same circuits to arrive at an answer, many different calculations are performed simultaneously by interconnected circuits (Churchland, pp.156–165). This might provide just the boost that systems based on reaction to the environment need: by considering many factors at once, instead of one at a time, such a system's reaction time would decrease, and its chances for survival (that is, of not being stumped by a situation) would increase.
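The serial-versus-parallel contrast can be sketched in a few lines of Python. This is a toy illustration only, not anything from Churchland: the `assess` function and its timing are hypothetical stand-ins for the sensory work an embodied system must do. The same judgments come out either way, but weighing all the factors at once returns an answer sooner.

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical "factors" an embodied system must weigh; each takes time to assess.
def assess(factor):
    time.sleep(0.05)                       # stand-in for sensory/inference work
    return factor, len(factor) % 2 == 0    # toy "is this a threat?" judgment

factors = ["light", "sound", "motion", "odor"]

# Serial: each factor waits for the previous one (one shared "circuit").
start = time.perf_counter()
serial = [assess(f) for f in factors]
serial_time = time.perf_counter() - start

# Parallel: all factors are considered at once (interconnected "circuits").
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(assess, factors))
parallel_time = time.perf_counter() - start

assert serial == parallel             # same judgments...
assert parallel_time < serial_time    # ...but a quicker response
```

The judgments do not change; only the latency does, which is exactly the advantage claimed for reaction-based systems.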

Yet some may object that, no matter how much like a person a machine may be, it can never be a person because it is a machine. Aside from begging the question, this response implies that machines cannot be persons because they are programmed: they are not free, as we are, to choose what they will do, but must respond the way they were designed to. (Even so, in the "Star Trek" episode mentioned above, Data displayed insubordination by acting on his own assessment of the situation rather than obeying the Captain's orders, which is what all officers are supposed to, are "programmed" to, do.) This is not, however, a valid objection: Searle demonstrates that we humans are not "free" either, yet we do not doubt ourselves to be persons. His argument is that radical freedom, which would allow the mind to change the course of events as they would otherwise happen, is incompatible with the deterministic physical world science has exposed; nonetheless, he admits, we "experience" freedom (pp.86–88). We know, from personal experience, that when we act voluntarily, other options were open to us; we were not compelled to act that way, but chose it freely. This sense of freedom comes from conscious action: to act consciously and not experience freedom would be impossible. In the Penfield experiment, for instance, one is conscious and aware of what is happening, but is not free: electrical stimulation causes the action. We are not free at this point because we have no control; we could not do otherwise. Yet this passivity is not experienced in voluntary actions: the feeling of freedom is an innate part of acting; otherwise we would not be acting, but acted through (pp.94–95). This argument is, however, an appeal to ignorance. Just because an act was not coerced, and had no observable causes, does not mean that it was free; it merely means that the causes were not recognized. And indeed, since we are rational beings, some of these causes are our own thoughts. We must simply recognize that these, too, are in turn caused, not independent (Dennett, p.247).

Yet, says Searle, we cannot give up this mistaken view of ourselves as free in the way we gave up the idea that the sun rises after Copernicus showed that this perception is caused by the Earth's rotation, because the notion of determinism, that everything we do is caused, does not adequately describe the experience we have in acting out these causes, as explained above. The reality is that we are completely determined, but we perceive ourselves as free, perhaps as a result of an evolutionary development in the very structure of our consciousness (pp.95–97).

This lack of freedom, of course, does not mean that we are not persons; Dennett makes this clear with his treatment of stances: design, physical, intentional, and personal. Each of these is a way of responding to some other thing, a way of predicting and explaining its behavior, save for the personal stance, which implies moral considerations and presupposes the intentional stance. The first three can each be applied, to some extent, to everything: the design stance explains in terms of what X is designed to do (a chair is supposed to hold a person), the physical stance in terms of what state X is actually in (the clock is unplugged), and the intentional stance in terms of beliefs, desires, and other "mental states" (she wanted the alarm to go off). We should use whichever stance is most effective for each X: if X is a tree or a chair, the design stance works perfectly well; if X is a soda machine, car, or clock, the physical stance is probably best; and if X is a human or a chess-playing computer, the intentional stance is most likely needed. Just because everything can be explained and predicted in terms of design, or of the physical causes leading to a result, does not mean that this is necessarily the best way: for computers and humans, it would be heinously cumbersome. Indeed, whenever the intentional stance is the most effective way of explaining an object, that object is an intentional system, regardless of whether it actually has beliefs and desires or can be explained in another way (pp.233–238).

The personal stance, with its moral considerations, presupposes the intentional stance. The intentional stance incorporates the first three conditions of personhood in the metaphysical sense (personhood in the moral sense depends upon personhood in the metaphysical sense, and thus the personal stance should be adopted only toward metaphysical persons): a person is a rational being, intentional predicates (beliefs, desires, and so on) can be ascribed to it, and it is treated as such; that is, the intentional stance is adopted toward it. Thus, the personal stance presupposes the intentional. The fourth condition, though, is not met by all intentional systems: the object of the intentional stance must be capable of reciprocating, of considering and treating the system taking an intentional stance toward it as itself intentional. The fifth condition is that it be capable of verbal communication, and the sixth that it be, in some way, self-conscious. Each of these requirements is necessary, but none is of itself sufficient, for personhood (pp.268–270).

Many, if not all, things can meet the first three conditions: we can say that a sunflower turns because it wants light, or that a car stalls because it doesn't like, and thus doesn't want, to climb steep hills. However, the number of systems that meet the fourth requirement, having beliefs and desires about beliefs and desires, or about another system's beliefs and desires, is much smaller (pp.273–276). Perhaps, among non-humans, only Dean's program, which works out how much time to spend planning a response, currently has these second-order intentions (although perhaps some animals do, too): it (behaves as though it) believes that it should spend an appropriate time deciding how much time it believes is appropriate for solving a problem, rather than just solving the problem. And even this machine does not take the intentional stance toward others: it acts as if it has beliefs about its own beliefs, but does not attribute beliefs to others.
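A toy sketch may make this second-order structure concrete. The code below is purely illustrative and is not Dean's actual system; every name and number in it is hypothetical. The point is only the shape of the reasoning: the agent first deliberates about how much deliberation is appropriate, and only then plans.

```python
# Hypothetical sketch of second-order planning: the agent reasons about
# how much of its deadline its own reasoning deserves before it reasons.

def decide_planning_budget(problem_size, deadline_ms):
    """A belief about deliberation itself: cap planning at a third of the
    deadline, scaled by how hard the problem looks."""
    return min(deadline_ms // 3, 100 * problem_size)

def plan(budget_ms):
    """Stand-in for a planner whose plan grows with the budget spent."""
    steps = max(1, budget_ms // 100)
    return [f"step-{i}" for i in range(steps)]

# A first-order agent would call plan() immediately; the second-order
# agent first decides how long plan() ought to be allowed to run.
budget = decide_planning_budget(problem_size=6, deadline_ms=900)
print(budget)        # 300: the deadline, not the problem, is the binding limit
print(plan(budget))  # ['step-0', 'step-1', 'step-2']
```

The outer decision is a belief about the inner one, which is what distinguishes this agent from one that merely solves the problem.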

However, by making verbal communication an additional requirement of persons, and by requiring that the speaker intend the hearer to understand that the speaker intends the hearer to understand what is said, we necessitate third-order intentions and remove from the list of persons any beings that do not use language (which is, so far as we know, all but humans). This is not simply an arbitrary move to keep other beings from counting as persons; third-order intentions of this nature are needed for a communicative encounter to have meaning (unless I understand that you mean for me to understand, I do not understand). Without this meaning, one cannot give or listen to reasons, and without reasons one cannot be argued into or out of an action or attitude, thus exhibiting a distinct lack of the rationality attributed to all intentional systems. And a system that is not rational is not intentional, and so not a candidate for personhood (pp.277–283).

Finally, the requirement of self-consciousness means not only that the system is aware of itself as a system, but that it can apply the communication of condition five reflexively. A person, then, is able to engage in conscious dialogue with itself: to reason with itself, and to persuade itself to do things, develop desires, adopt attitudes, and hold beliefs (pp.284–285).

Admittedly, nothing outside of humans, either biological or mechanical, currently seems to meet all of these conditions. But consider Data and Rachael. Data wants to be more human; he obviously meets the sixth condition by being able to convince himself that he wants something. Rachael did not even know until halfway through the movie that she was a replicant, and she cried when she learned it; so she, in the same way as Data, also meets the sixth condition. Of course, these are fictional examples, but it is conceivable that we will eventually produce such mechanical beings, and the question remains as to what we should, and in actuality will, do if and when that time comes. It seems obvious that we will have to adopt the personal stance toward such beings; in fact, there is no choice. They will demand it.

Larkin, Philip. Collected Poems. Ed. A. Thwaite. London: Marvell Press and Faber & Faber, 1988.

Thomas, Dylan. Collected Poems, 1934–1952. London: J.M. Dent & Sons, 1966.

Albert Camus said that the only philosophical question of any importance is that of suicide: deciding whether or not life is worth living. This makes one's attitude toward death very important, since death is the only alternative open to one who rejects life. It is not surprising, then, that death is a major theme in poetry: poets often make public their ideas about fundamental questions by confronting them in their work. With this in mind, we shall examine the treatment of death by two modern British poets, and see what it tells us about life.

Philip Larkin's deceptively easy style and sometimes crude humour made his work very accessible, and have helped to make him extremely popular. Yet death lurks in Larkin's poetry. In spite of his comic pieces, death seems all-pervasive: just around the corner, or just across the page; 'just on the edge of vision'('Aubade', l.31). And in reading the serious poems, one feels that it is, indeed, ever present in Larkin's mind. Not that he seems morbidly obsessed with death, but Larkin's poems show a constant awareness, and fear, of it. Death, understandably, terrifies him. Trying to cope, and live, with this ever-present terror of inevitable nothingness is the subject of several poems, yet Larkin never seems to overcome it. Instead, he accepts it; he resigns himself to the terrible nothingness of death. Larkin sees death as covering us, weighing us down('Going'), and closing in on us('Träumerei'), or is it that he sees the awareness of death closing in on us? Death, oblivion, will come for us all, but some, like the miners in 'The Explosion', may escape this knowledge and the resultant terror. For it is the knowledge of death, rather than death itself, which most seems to haunt Larkin: death is 'only oblivion'('The Old Fools', l.15). Knowing that we are alive, and won't be, knowing what we will lose when we die: that is terror. That is what makes 'The Building' so frightening: in hospital, 'All know/ they are going to die'(l.57). This is what his old fools are mercifully no longer aware of; this is what haunts the speaker in 'Aubade', when 'realization of it rages out/ In furnace fear when we are caught without/ People or drink'(ll.35–7). While 'Most things never happen: this one will'(l.34), we 'Know that we can't escape,/ Yet can't accept'(ll.43–4). Still, in lines such as 'Postmen like doctors go from house to house'(l.50), Larkin seems to resign himself to death.
While, in asking 'Why aren't they screaming'(l.12) of the old fools, he indicates that they should resent this approaching death, that resentment melts into resignation by the final lines: 'Well/ We shall find out'. If death is inevitable, and, as he says in 'Aubade', 'no different whined at than withstood'(l.40), we really have no choice but to die. Larkin's conception of death itself, of what dying is, comes out especially clearly in 'The Old Fools'. It is nothingness, 'oblivion'(l.15); he describes it this way: 'At death, you break up: the bits that were you/ Start speeding away from each other for ever/ With no one to see'(ll.13–5). The nothingness of death is the mountain of time we will not experience; his old fools are too close to the slope to see where they will soon be, and have a second childhood to shield them from the fear. But we have a better perspective, and are terrified by its vastness. Larkin ('It's only oblivion, true,/ We had it before'(ll.15–6)) notes the irony of this fear, but explains it in 'Aubade' when answering the argument that 'No rational being/ Can fear a thing it will not feel'(ll.25–6). 'This is what we fear,' he says, 'No sight, no sound, no touch or taste or smell, nothing to think with,/ Nothing to love or link with,/ The anaesthetic from which none come round'(ll.27–30). Before life, he says of oblivion in 'The Old Fools', 'it was going to end,/ And was all the time merging with a unique endeavour/ To bring to bloom the million-petalled flower/ Of being here. Next time you can't pretend/ There'll be anything else'(ll.16–20). This is a special way of being afraid, he says in 'Aubade', that 'No trick dispels. Religion used to try,/ That vast moth-eaten musical brocade/ Created to pretend we never die'(ll.21–4). 'But superstition, like belief, must die', he says in 'Church Going', 'And what remains when disbelief has gone?'(ll.34–5) Larkin cannot believe in Heaven, and this leaves nothing but nothingness after death.
Yet Larkin dwells too much on this nothing. Only in 'At the chiming of light upon sleep' does he even ask, 'Have I been wrong, to think the breath/ That sharpens life is life itself, not death?/ Never to see, if death were killed,/ No desperation, perpetually unfulfilled,/ Would ever go fracturing down in ecstasy?'(ll.16–20) But it is death that gives life urgency, and the ability to sense and feel, which we lose in death, that makes life different from death, that proves to us we are alive, and makes being alive better than never having been. By concentrating on the fact that death will take these away, rather than on the value they give life, by resigning himself to death, however resentfully, instead of throwing himself vigorously back into life with a renewed sense of urgency, he devalues the very thing he mourns. Larkin seems almost to resent life for letting him experience this 'Intricate rented world'('Aubade', l.47), because it is only rented, and he will have to let it go when the lease is up.

Dylan Thomas's poetry, on the whole, has a dark feel. This may arise partly from the density and complexity of his language and imagery, but it is also likely that any poem, randomly selected, will contain some reference to death, and this spectre undoubtedly contributes greatly to the sense of almost uncomfortable darkness a cursory reading of his work gives. Yet a closer reading of certain poems gives a very different, and, it seems, more accurate understanding of Thomas's attitude towards death. The sense of nature, and of natural, organic process, that comes out of these poems is very strong. 'The Force That Through the Green Fuse Drives the Flower', for instance, is about decomposition in the grave, and a returning to nature: a renewal, in another form; death as a part of the life cycle, the process of living. 'The force that through the green fuse drives the flower', the speaker says, 'Drives my green age'(ll.1–2). The same force that drives all things drives us. And while, in death, 'I am dumb to tell the crooked rose'(or the hanging man, or the weather's wind(ll.4,14,19)) that we are like them, nonetheless we are like them. This primal feel of natural process comes out in 'After the Funeral' ('Ann,/ Whose hooded, fountain heart once fell in puddles/ Round the parched worlds of Wales'(ll.12–4)), and in 'A Refusal to Mourn the Death, by Fire, of a Child in London': 'After the first death, there is no other'(l.24). But perhaps it is most clear in 'Poem on His Birthday'. While the speaker mourns his thirty-fifth birthday, and being that much closer to death, he observes nature. He sees 'flounders, gulls, on their cold, dying trails'(l.11), 'finches fly(ing)/ In the claw tracks of hawks'(ll.20–1), and 'The rippled seals streak down/ To kill'(ll.34–5). This death is all part of living: the last part we are aware of, but not the final part, for our bodies are still part of life's process.
As 'And Death Shall Have No Dominion' says, 'Dead men naked they shall be one/ With the man in the wind and the west moon;/ When their bones are picked clean and the clean bones gone,/ They shall have stars at elbow and foot'(ll.2–5). Man continues, in death, to be exactly what he was in life: a part of nature. Yet Thomas, while seeing the naturalness of death, and not fearing it, does his utmost to affirm this life of 'four elements and five/ Senses, and a man a spirit in love'('Poem on His Birthday', ll.82–3). Not being afraid of death is not the same as wanting to die, or even as waiting to die. No, the awareness of death is only another reason to live, and to live as much, as fully, and as long as we can, just as the speaker in 'Poem on His Birthday' finds life more intense as he approaches death. 'Do Not Go Gentle into That Good Night', however, is the best example of Thomas's affirmation of life. While the night, death, is specifically called good, it is still something to be fought: 'Old age should burn and rave at close of day'(l.2), even 'Though wise men at their end know dark is right'(l.4). Another, more obviously buoyant element in Thomas's poetry is religious faith. If, as in 'After the Funeral', 'I know her scrubbed and sour humble hands/ Lie with religion in their cramp'(ll.30–1), there is no need to mourn Ann's fate, nor any need for her to have been afraid. If there is a god, and one is on proper terms with that god, life is only keeping one from Heaven. Thomas seems to acknowledge this hope for others while unsure of it himself: in 'Elegy', he mourns that his father 'died/ Hating his God'(ll.22–3) and says 'that He and he will never go out of my mind'(ll.19–20). Yet in 'Poem on His Birthday', 'he goes lost/ In the unknown, famous light of great/ And fabulous, dear God'(ll.46–8); lost, thinking that 'Heaven that never was/ Nor will be ever is always true'(ll.50–1). And in lines sixty-five and sixty-six, he prays.
This seems to be a conflict between rational scepticism and faith, giving faith a new strength, and the poem a sense of hope. This sense of hope, arising from belief in something beyond death, coupled with the naturalness and rightness of dying, makes Thomas's work optimistic. In accepting death as the natural consequence of life, and celebrating life itself all the more because it will end, he makes death itself into something that gives life value. Even if we cannot accept his religious faith, we can still take heart in this.

And So What

Larkin affirms the absurdity of life by resigning himself to death, yet he never takes the next step. Camus does take this step: he grants that life is absurd, but maintains that it has whatever value and meaning we choose to give it. There is nothing outside the self and this life, and nothing beyond it to give it meaning; but the self is free to assign it value and meaning just the same. Thomas does grant life this value. By seeing life as process, he gives it an intrinsic value, and thus never comes to the question of absurdity. He acknowledges the intrinsic value of sensation, and tells us to live, because only in life will we have sensation. Thomas's view grows out of a much more traditional approach to life, one born of dependence on the cycles of nature, life, and death for survival: the farm. Larkin's, on the other hand, is urban and industrial, a no-god-and-science-can't-save-us view, which captures the way many of us, raised in the city and not in the church, react when confronted with death. Having no god to give their lives meaning externally, and acutely aware of their own meaninglessness, such views are naturally more pessimistic. Not being grounded in natural processes, they are likewise much more afraid of those processes. In this sense, Larkin's view is more modern than Thomas's, and in this case the change has not been for the better.