Excerpts from "'But what am I, then?' Chasing Personal Essences Across National Literatures"—

part I of Strange Concepts and the Stories They Make Possible: Cognition, Culture, Narrative

(Johns Hopkins University Press, 2008), pp. 1-30 and 37-42

 

"But what am I, then?" Chasing Personal Essences Across National Literatures

 

1. Ural Mountains-Rome-London

 

            I grew up in a smog-coated industrial town in the Ural Mountains in what used to be the Soviet Union, and now I teach Restoration and eighteenth-century English literature at the University of Kentucky. It is not often that the images from my childhood in provincial Russia intersect with the images from the world of the late seventeenth- and early-eighteenth-century English stage, with their tart political references, powdered wigs, and riot-prone audiences. So when they do intersect, I pay attention.

 

            I was talking with my students about Amphitryon; or, The Two Sosias (1690), a brilliant and underappreciated farce by John Dryden (1631-1700), when something about the adventures of one of its title characters, Sosia, made me think of a Russian children's poem that I didn't even know I remembered. I certainly hadn't thought about it for twenty-five years, yet here it was, relentlessly unfolding in my mind and (I felt) interfering with my teaching. For what good would it do to foist on these English majors my Russian-speaking memory? What larger point about Dryden, or Amphitryon, or drama, or cross-cultural narrative patterns would it allow me to make? I stifled the recollection and went on with my class.[1]

 

            This happened several years ago, and I have since then figured out why I was so struck by the similarity of the two pieces. The play and the poem both contain the same curious incident (indeed, the poem contains nothing else), in which a character is persuaded that if somebody looks exactly like him, or even just wears his clothing, it must be him. Those persuasion scenes are quite funny, but what I find peculiar is that both authors seem to take for granted a certain philosophical stance on the part of their audience: the assumption that one's true identity (his or her "essence," if you will) goes beyond the sum of that person's attributes and actions.

 

           One may argue that in attributing such a stance to the theatergoers in 1690, Dryden counted on their previous exposure to the relevant ideas of such thinkers as René Descartes (1596-1650) and Gottfried Wilhelm Leibniz (1646-1716).[2] No such argument, however, can be made about the Russian preschooler whose intellectual horizons hardly extended beyond the politics of the sandbox and an abiding interest in big trucks. In other words, to get the joke of the poem, I had to be much smarter than I was. I was not, and yet I did get it.

 

           I am moving a bit fast here. In a moment I will slow down, go over the poem and the play in detail, and state in more explicit terms why their similarity is so problematic. For now, I just want you to understand why I was so jolted by seeing the two incommensurable cultural environments sharing, with such apparent casualness, a complex philosophical assumption. That jolt has made me seek conceptual frameworks that allow for the interplay of the culture specific and the cross-cultural. I have found one such framework in the research of developmental psychologists and cognitive evolutionary anthropologists who study essentialist thinking in young children. Their work has far-reaching implications for all fields of humanistic inquiry, though, of course, I personally am particularly excited by what it means for me and my colleagues in literary criticism who want to understand how works of fiction affect readers.

 

           And to me this research suggests that when we study the effects of a given cultural representation, we should inquire, among other things, into the ways it engages our evolved cognitive capacities: builds on them and experiments with them. In this book, I use this approach to deal primarily with fictional narratives as well as with a small selection of movies and surrealist artwork. I hope, however, that as you follow my argument you will think of your own examples of the phenomena that I am discussing, not just literary, cinematographic, and artistic ones but also those arising from our everyday interactions with the world.

 

            So let us begin with the Russian poem. Or rather, the poem that I read in Russian but that must have been translated from Polish. Its author, Julian Tuvim (1894-1953), was well known in pre-war Poland for his lyrical poems, funny sketches, and political pamphlets. From 1942 to 1946, he lived in the U.S., where he published his famous essay "We, Polish Jews." In 1946, he returned to Poland to inter the ashes of his mother, murdered by the Nazis. His lighthearted children's verses were broadly reprinted and anthologized in Russia in the 1970s-1980s (and still are, as a quick internet search tells me). His ironic aphorisms—"God knows what is going on nowadays! People who had never died before suddenly started dying"—floated around, delighting readers attuned to this kind of sensibility.[3]

 

At six years of age I did not know any of it, and even if I had, I would not have been able to appreciate it.[4] I must have liked the poem, though, because I immediately learned it by heart. I would like to believe that I actually remember today my feelings on that occasion, and so (if you let me get away with it) I can say that I still remember with what amused superiority I considered its protagonist, a young boy named Yurgan, who wakes up one morning and realizes that he cannot "find Yurgan."

 

Note that Tuvim does not say that "Yurgan cannot find himself," or something along these lines, because that would complicate his little narrative with the more "adult" overtones of existential angst and difficulties of self-realization, and Tuvim does not seem to be interested in that. His Yurgan quite literally cannot find the physical entity known as Yurgan. He searches the bed, he looks around the house—nothing. He runs into the street where he meets a neighbor, to whom he complains that he "cannot find Yurgan." The neighbor replies, "Look here, you odd fellow, you are wearing Yurgan's jacket! See that torn pocket? You yourself are Yurgan!" The poem ends here with the boy happy about finally locating Yurgan. Thus the apparently absurd initial problem is resolved through winding up absurdity to a higher pitch: whoever wears Yurgan's jacket must be Yurgan.

 

            Let's now go back in time even further than late-seventeenth-century England—to Rome in 200 BC. Flush with their victories in the Second Punic War, theatergoers watch Plautus's Amphitryon, a tragicomedy that takes the old Greek legend of the conception of Hercules, peppers it with martial allusions, and introduces an important new character, a chatty slave named Sosia. Like Tuvim's protagonist some twenty-one hundred years later, Sosia also comes to believe that he has "lost" himself. But whereas Yurgan checks for his missing self in bed and around the house, Sosia suspects that he left himself "at the harbor"—a grownup version of Yurgan, whose world has significantly expanded, offering more places in which to "lose" himself, but whose reasoning powers seem to have remained peculiarly narrow and thus vulnerable to outlandish suggestions.

 

Here is how Plautus's Sosia is led to think that he "forgot" himself somewhere else. The Theban general, Amphitryon, has been away from his wife, Alcmena, fighting the army of Teleboeans. The night he is supposed to return home, victorious, Jupiter assumes his shape and seduces Alcmena, impregnating her with Hercules. While Jupiter is busy with Alcmena, he employs another god, Mercury, to keep away Amphitryon's slave—Sosia—who is coming to inform Alcmena about the approach of her husband. Mercury mimics Jupiter's feat of impersonation. He takes on the form of Sosia and tells the real Sosia that he is an impostor who has no business hanging around Amphitryon's household. Confounded, the real Sosia begins to mull over the arguments of his assertive double:

 

Sosia. God help me, now that I look, he has my features—

I've seen them in the mirror. We could be twins.

My hat, my clothes—he's more like me than I am:

Leg, foot, height, haircut, eyes, nose, even lips—

Jaws, chin, beard, neck—no difference.

 

Mercury substantiates his claims to Sosia's identity with heavy beatings, and Sosia is finally persuaded:

 

Sosia. Don't, don't—I'll go . . . Gods, help me keep my wits:

Where was I lost, transformed, made someone else?

Did I forget and leave me at the harbor?

He's moved in on my self.[5]

 

Plautus's play, with its ingenious doubling of the original twins plot, spawned an "astonishingly long line of . . . dramatic development."[6] Its adaptations include the anonymous English translation of the early 1600s, The Birthe of Hercules; Jean Rotrou's (1638), Molière's (1668), Dryden's (1690), Johann Daniel Falk's (1804), and Heinrich von Kleist's (1806) respective Amphitryons; Jean Giraudoux's Amphitryon 38 (1929); Georg Kaiser's Amphitryon Doubled (1944); and others. Sometimes only parts of the original play have been used, as in Shakespeare's The Comedy of Errors (ca. 1594), based primarily on Plautus's Menaechmi but borrowing the scene in which Amphitryon is denied entrance into his own house (because Jupiter is already inside and is taken for the "real" Amphitryon). Shakespeare portrays Antipholus of Ephesus raging by the barred door of the house he shares with his wife Adriana, while the twin servants, both named Dromio (and reminiscent of Sosia and Mercury), enthusiastically contribute to the ruckus.

 

Now we are back in England in 1690. Dryden's play, considered by Earl Miner "one of the unrecognized masterpieces of English comedy,"[7] is based on Rotrou's and Molière's Amphitryons, but it departs from them significantly, expanding, in particular, the theme of the comic predicaments of Sosia—hence the new subtitle, "The Two Sosias." We learn from Judith Milhous and Robert D. Hume's exemplary reconstruction of late-seventeenth-century theatrical productions that in the first cast of Dryden's farce the role of Sosia was played by James Nokes. Here is the description of Nokes's typical demeanor in a comic role, provided by Colley Cibber (who joined the Theatre Royal in 1690 and was to become in time a prominent theatrical manager, playwright, and England's poet laureate):

 

The louder the Laugh the graver was his Look upon it; and sure, the ridiculous Solemnity of his Features were [sic] enough to set a whole Bench of Bishops into a Titter . . . . In the ludicrous Distresses which, by the Laws of Comedy, Folly is often involv'd in, he sunk into such a mixture of piteous Pusillanimity and a Consternation so ruefully ridiculous and inconsolable, that when he had shook you to a Fatigue of Laughter it became a moot point whether you ought not to have pity'd him. When he debated any matter by himself, he would shut up his Mouth with a dumb studious Powt, and roll his full Eye into such a vacant Amazement, such a palpable Ignorance of what to think of it, that his silent Perplexity (which would sometimes hold him several Minutes) gave your Imagination as full a Content as the most absurd thing he could say upon it.[8]

 

We can only imagine with what delicious "mixture of piteous Pusillanimity and Consternation" Dryden's Sosia, as played by Nokes, "debated . . . by himself" who he was or was not, all the while surveying surreptitiously the pugnacious claimant to his identity:

 

Sosia. He is damnable like me, that's certain. Imprimis: there's the patch upon my nose, with a pox to him. Item: a very foolish face with a long chin at end on't. Item: one pair of shambling legs with two splay feet belonging to them. And, summa totalis: from head to foot, all my bodily apparel.

 

Faced with such compelling evidence, Dryden's Sosia has no choice but to admit sorrowfully that this stranger is himself—"there is no denying it." "But what am I, then?" asks the befuddled character as he feels his identity slipping away from him: "For my mind gives me I am somebody still, if I knew but who I were."[9]

 

What makes the situation funny in all these cases is the naivete of the protagonists. They are willing to accept that their identities are defined by external characteristics, such as a jacket with a torn pocket, the shape of the "leg" and "foot," "height" and "haircut," or a "patch upon [the] nose" covering a venereal sore. We find the attitude of Tuvim's Yurgan as well as of Plautus's and Dryden's respective Sosias amusing (the latter additionally set off by Nokes's brilliant acting) because it is clear to us that they really should know better.

 

But—and here I pose a seemingly simple question, on whose deceptive simplicity this book's argument will turn—on what grounds should they know better? What is it that I knew as a preschooler, and the readers of Plautus were aware of two thousand years ago, and the Restoration audiences immediately recognized as belonging to the jurisprudence of "the Laws of Comedy," but that Yurgan and Sosia so spectacularly fail to grasp?

 

To answer this question I need a conceptual framework that can do two things simultaneously. On the one hand, it should allow me to cut across disparate historical settings—for how much really does a child from twentieth-century provincial Russia have in common with an adult theatergoer in second-century BC Rome or in late-seventeenth-century London? On the other hand, it should be sensitive to historical differences, so as not to reduce the cultural meaning of one representation to another.[10] For, clearly, the intended and actual effects (whatever they were) of Sosia's ridiculous reasoning on Dryden's original audience were intertwined with the political and aesthetic idiosyncrasies of the 1690s. And, clearly, learning about these effects will explain next to nothing about my reaction to Tuvim's poem or about Plautus's audience's reaction to "their" Sosia—we will need to study those in their specific cultural contexts.

 

In other words, I need a framework that can handle cognitive similarities and cultural differences and, in the long run, help to explain how the two interact. It is with such requirements in mind that I turn to the framework offered by recent cognitive evolutionary research on children's essentialism, although I invite my readers to disagree with me and come up with an alternative, if only because "my" explanation involves the liability of a many-page theoretical buildup from a discipline relatively unfamiliar to literary critics.[11]

 

And here it comes.

 

 

2. Essentialism, Functionalism, and Cognitive Psychology

 

There are different ways of defining essentialism. Literary critic Diana Fuss calls it "a belief in the real, true essence of things, the invariable and fixed properties which define the 'whatness' of a given entity."[12] Note the term entity, which implies that, in principle, we can approach anything—from chair to rose and from Dryden to quasar—with a "belief" that there is something "invariable and fixed" about each, something that makes it/him what it is and not something else: the chairness of a chair, the Drydeness of Dryden, the quasarness of a quasar.[13]

 

Cognitive evolutionary psychologists and anthropologists agree that essentialism is a "belief." That is, it describes the way we perceive various entities in the world and not necessarily the way they really are. At the same time, some of them suggest that there are important differences between our essentializing of artifacts and of natural kinds (such as people, plants, and animals). For example, if I ask you what constitutes the essence of a chair, you will likely bring up the chair's function, saying that we sit on it or that it is made to sit on. In contrast, if I ask you what constitutes the essence of Dryden, we may have a much longer discussion about his appearance, parentage, occupation, and politics, and never settle on any one decisive feature or quality, even if we both agree that there was something special about John Dryden that made him John Dryden and not, say, John Milton (another seventeenth-century poet).

 

In other words, essences of natural kinds seem elusive, in fact, strikingly so when we contrast them with essences of artifacts, which are generally synonymous with their functions. So, to modify Fuss's original definition, we may say that our belief in "the invariable and fixed properties which define the 'whatness' of a given entity" is quite different depending on whether this entity is a natural kind or an artifact.

 

Moreover, it's been suggested that this difference is grounded in the evolutionary history of our species: Ascribing functions to artifacts and ungraspable yet enduring essences to plants, animals, and people might have had an adaptive edge in the Pleistocene and as such remained part of our far-from-ideal cognitive makeup. Scott Atran advances this argument in his Cognitive Foundations of Natural History: Toward an Anthropology of Science (1990), a study that brings together research in folk taxonomies from around the world, data from developmental psychology, and the history of science and philosophy. I return to the question of the possible evolutionary history of essentialist thinking in the next section; now we shall focus on research in children's cognitive development.

 

It turns out that by four years of age, children already think of natural kinds primarily in terms of their underlying, largely immutable, and invisible essences and of artifacts primarily in terms of their functions.[14] Psychologists view such tendencies as "plausibly innate (but maturing)," that is, whereas environmental input is crucial for them, it does not fully define their development. For example, it "is not the case that essentialism results from the particular cultural milieu of the typical experimental subject (middle-class, educated, U.S.)"; it is not the case that essentialism "can be deduced by language use"; and it is not the case that children simply "learn" essentialist thinking from their parents. (To see how much of it they may indeed learn from their caretakers, see Susan A. Gelman et al., Mother-Child Conversations About Gender: Understanding the Acquisition of Essentialist Beliefs [2004].)[15]

 

The tendency to essentialize natural kinds does not disappear as children grow up, but rather expands and diversifies its application. In fact, it is this tendency that has allowed people all over the world to develop and maintain for thousands of years complicated systems of folk taxonomies, paradoxically both making possible the advent of contemporary scientific taxonomy and also holding science back. It held science back by exaggerating, in Ernst Mayr's words, the "constancy of taxa and the sharpness of the gaps separating them."[16] That is, we know now that species constantly change into one another, but our cognitive tendency to essentialize natural kinds makes the realization that species pointedly lack essences an uphill battle all the way. After all, it comes so much easier to us to think along the lines of Rousseau's Emile (1762) and imagine the "insurmountable barrier that nature set between the various species, so that they would not be confounded."[17]

 

The experiments that investigate children's capacity for essentializing often consist of a series of transformations inflicted on an animal or artifact. Since the time of publication of Atran's book, such experiments have been replicated numerous times in a broad variety of cultures, and they have become increasingly probing and complex. In a traditional experimental set-up, a child is presented with a toy animal or a picture of an animal wearing a very convincing "costume" of a different species or with some body parts missing or altered.[18] When asked to comment on the species of the hybrid animal, such as a skunk altered to look like a zebra, even three-year-old children judge a skunk to be a skunk. The skunk seems to retain that underlying "skunkness" that makes it different from other animals. A tiger without legs is still a tiger, not a new species of animal. As Paul Bloom observes in Descartes' Baby (2004),  

 

In a child's mind, to be a specific animal is more than to have a certain appearance, it is to have a certain internal structure. It is only when the transformations are described as changing the innards of the animals—presumably their essences—that children, like adults, take them as changing the type of animal itself.[19]

 

To illustrate very briefly how the same belief in essence-conferring innards continues to influence our thinking as we grow up, consider the following: the Soil Association—Britain's leading organization responsible for certifying foods as organic—has a rule that if a certain pesticide (Bt) is present within the plant "as the result of genetic modification," the plant is not organic. If, however, the same pesticide is sprayed on the plant with the same goal of protecting it from insect pests, the plant still qualifies as organic. The distinction makes no scientific sense, so it seems that in this case the Soil Association's guidelines reflect primarily our essentialist biases.[20]

 

Back to children and their insistence that some essential "skunkness" survives the skunk's transformation into a zebra. (Insistence inferred from their responses, that is, not explicitly acknowledged as such: neither children nor adults participating in various transformation experiments speak of essences or are expected to "believe that they know what these essences are.")[21] The implicit belief in essences of animals does not quite translate into a parallel belief in essences of artifacts. A log, for example, is not perceived as having any specific quality of "logness" about it; in fact, it seems to change its "identity" quite often: depending on its current function, it can be perceived as firewood, a bench, or a battering ram.[22] Another artifact, a cup with a sawed-off bottom, becomes a bracelet or a cookie-cutter—there seems to be nothing that is perceived as intrinsically "cupful" about it.

 

The experiments have demonstrated again and again that "transformations that radically change the appearance of an object result in judgments of category change for artifacts but stability for animals, implying that animals—but not artifacts—retain some essential qualities that persist despite external appearance changes."[23] As Bloom puts it (building on the earlier research of Frank Keil),

 

[If] one carefully reshapes [a coffee pot], adds and removes parts, so that it can feed birds and looks like a bird feeder, both adults and children would say that it is no longer a coffee pot, it is a bird feeder . . . Similarly, houses can become churches, computer monitors can become fish tanks, and swords can become plowshares.[24]

 

There is some disagreement among cognitive scientists about the degree to which we essentialize natural kinds as opposed to artifacts. Atran, Keil, and Gelman would mostly say that "essentialist theory is not usually extended to artifacts." That is, one might talk about the "deep essence" presumably shared by dogs, "but one could not normally study chairs to find out what really makes them chairs." In contrast, Bloom proposes that at least on some level, "the mental representation of artifact kinds is quite similar in structure to that of natural kinds."[25] The discussion of such similarities centers on the issue of intentionality, which is not surprising, given that when we start speaking of intentions behind the making of artifacts we are moving over to the domain of strongly essentialized entities—living beings.

 

Thus, although we do think of artifacts in terms of their functions—and clearly much more so than we do of natural kinds—"having the right function is neither sufficient nor necessary for something to be a member of an artifact kind." As Lance J. Rips has demonstrated, "function is not the sole criterion for classifying artifacts"—just as important are "the intentions of the designer who produced it."[26] For example, say you take an umbrella and refurbish it as a lampshade, adding "a pale pink satin covering to an outside surface, gathering it on top and at the bottom so that it has pleats," attaching "a satin fringe . . . around the bottom edge," and attaching "to the inside of the dome at the top a circular frame that at its center holds a light bulb." Say you then still use this object to protect yourself from the rain. What kind of object is it?

 

In the experiments reported by Rips, the adult subjects classified this new "lampshade" as an umbrella if—and apparently because—it was initially designed as an umbrella. However, specifying the intentions of the object's designer differently "completely changed the way subjects classified the object, even though [it] still resembled members of the alternative category. If [the object is designed] as a lampshade, then it is a lampshade despite looking much like an umbrella." In other words, though very much centering on their functions, "the essential properties of artifacts lie in the intentions of their designers."[27]

 

To further show how intentionality complicates the view that artifacts are perceived primarily in terms of their functions, Bloom makes the two following points. First, sometimes you do not even have to transform the appearance of the artifact to change its category: all you need is a socially recognizable intention to treat the artifact as a member of a different category. Thus, whereas many "artifacts do need to be made from scratch given the sorts of functions they typically fulfill" (say, VCRs), "certain artifacts can be created out of members of different artifact kinds or even natural kinds. Such cases include kinds like weapon, pawn, toy, paperweight, target, landmark, coin, and artwork."[28]

 

Second, when an artifact is damaged and so cannot be used anymore in accordance with its function, it can still be perceived as belonging to its original category if "its current appearance and potential use are best explained as resulting from the intention to create a member of [this] artifact kind":

 

A clock that needs to be rewound or that does not work because it has been gently hit by a hammer is still viewed as a clock, because the details of its structure are such that the best explanation for how it came into existence is through the intention to create a clock. This is the case even if it can never be repaired. A pile of dust created by hitting a clock very hard with a hammer is not judged to be a clock since its current appearance and use are not best explained in terms of the desire to create a clock. Note that the question is not whether one is aware of the intention (by that standard, it would be impossible for something that one knows to have been created to be a clock to ever stop being a clock, which is plainly not our intuition); it is whether the current status of the entity is consistent with this intention.

 

So, to bring these two points together, we "infer that broken and unreparable objects can remain members of an artifact kind so long as they are still identifiable as being created with the intention to belong to that kind (as with a broken clock) and that an object can become a member of an artifact category solely through the intention of the creator (as when a penny becomes a pawn)."[29]

 

            I shall return to our perception of artifacts in section 5 ("Talk to the Door Politely or Tickle It in Exactly the Right Place"), where I introduce the notion of conceptual hybrids—artifacts with lively human personalities. Let me conclude this section by reiterating three precepts that the cognitive scientists whose work I discuss in this chapter hold in common even if they disagree on certain details.

 

First, to quote Bloom again, "it is unlikely that a single procedure underlies our intuitions about identity for all types of individuals. More plausibly, our intuitions about the identity conditions for entities like chairs [i.e., artifacts] are going to be quite different from our intuitions about people [i.e., natural kinds]."[30]

 

Second, to turn to Gelman, the claim that we perceive natural kinds in terms of their essences is "not a metaphysical claim about the structure of the world but rather a psychological claim about people's implicit assumptions."[31] What this means is that our tendency to essentialize does not testify to the actual presence of any underlying essences; instead, its cross-cultural prevalence reflects the particularities of the cognitive make-up of our species.[32]

 

Approached as a psychological rather than metaphysical phenomenon, essentialism emerges as "sketchy and implicit—a belief that category has a core, without knowing what that core is"—a "placeholder" notion:

 

[People may] implicitly assume, for example, that there is some quality that bears have in common that confers category identity and causes their identifiable surface features, and they may use this belief to guide inductive inferences and produce explanations—without being able to identify any feature or trait as the bear essence. This belief can be considered an unarticulated heuristic rather than a detailed, well-worked-out theory.[33]

 

Hence in the rest of this book, when I use the term "essentialism" (or "essentialist thinking," or "essentializing"), I mean psychological essentialism: a hazy belief rather than a well-thought-through theory, one that influences our everyday thinking primarily about natural kinds (as contrasted with artifacts) and that can be reinforced or weakened by specific contexts.

 

Third, the primary conceptual benefit of essentialist thinking is that it allows us to make novel inferences about previously unencountered entities. The real-life consequences of making such inferences, however, are just as often not beneficial at all—they can be harmful, neutral, or so context-dependent that it is impossible to judge their value.

 

For example, if I encounter a huge animal with sharp teeth, I will not spend much time trying to figure out its intentions. Instead, I will try to run away from it or hide because I intuitively assume that it might be in the nature of this particular specimen (as it is in the nature of some other big animals with sharp teeth) to prey on smaller mammals, such as myself. In this particular situation, my quick inferencing might be lifesaving. (It might also be silly, depending on what that animal really is, but it is better not to take chances.)

 

To use a different example, a member of a college job search committee faced with a hundred fifty applications for an assistant professorship in her department (not an uncommon situation in some humanities programs) may decide to pay more attention to the resumes of people who defended their dissertations at Ivy League schools because she thinks that Ivy League candidates are better on the whole than candidates from other schools. Thus, picking up the resume of a random Ivy Leaguer, she already feels armed with some novel inferences about that specific person—she believes that he will work harder, publish faster, and think in a more original way than a random candidate from another pile. (As one scholar put it recently, "It is well known that a person with a PhD or even ABD [all but dissertation] from some programs is far more likely to be interviewed than someone with an equal or even better publication record from other programs.")[34] This essentializing may thus exclude some excellent candidates from her consideration, and it does not guarantee that the chosen candidate will indeed be all that she hoped for, but this is the risk she has chosen to take to save her time and energy.

 

You can see now how various kinds of stereotyping can build on our essentialist proclivity to make novel inferences about strangers based on limited information about them.[35] We may feel as we engage in such inferencing that we are doing very well—that we are simplifying and making manageable our overwhelmingly complex social world—when, in fact, we are severely limiting our mental and social horizons.

 

The issue of the relationship between stereotyping and essentializing is extremely complex, and it often hinges on the question of whether or not we tend to essentialize human social groups. It's been suggested, for example, that "children spontaneously explore the social world around them in search of intrinsic human kinds or groups of individuals that are thought to bear some deep and enduring commonality," and that different "cultures inscribe the social environment with different human kinds."[36] In one context, those kinds will be predicated on "common somatic features," in another, "on common occupational destinies," in yet another, on "sexual differences."[37] Cognitive evolutionary psychologists and anthropologists continue to debate this model; for some illuminating studies, see Lawrence A. Hirschfeld's Race in the Making: Cognition, Culture, and the Child's Construction of Human Kinds (1996), Robert O. Kurzban, John Tooby, and Leda Cosmides's "Can Race Be Erased? Coalitional Computation and Social Categorization" (2001), and Rita Astuti, Greg E. A. Solomon, and Susan Carey's Constraints on Conceptual Development: A Case Study of the Acquisition of Folkbiological and Folksociological Knowledge in Madagascar (2004).[38]

 

Although it is beyond the scope of my study to engage with their arguments (I return to the problem of group essences only briefly in Part II, in the section entitled "Made to Pray"), I believe that their focus on the cognitive construction of essences can provide a powerful interdisciplinary boost for cultural critics examining the workings of our social institutions and ideological formations. For what these studies do is offer a series of crucial insights into the ways we build on our cognitive biases and shortcuts to construct our everyday world, which we then rationalize as "natural."[39]

 

 

3. Possible Evolutionary Origins of Essentialist Thinking

 

Why do we perceive natural kinds in terms of essences? Cognitive psychologists and anthropologists offer several different (though not mutually exclusive) hypotheses about the evolutionary history of essentialism. The disagreement centers on the question of whether essentialism is strongly associated with the domain of folk biology (the view espoused by Atran) or whether it "emerges out of a set of domain-general tendencies" (the view endorsed by Gelman).[40]

 

Atran suggests that a "cross-cultural predisposition (and possibly innate predisposition) to think [about the organic world in terms of underlying essences] is perhaps partly accounted for in evolutionary terms by the empirical adequacy that presumption of essence afforded to human beings in dealing with local [flora and fauna]." Such a presumption "underpins the taxonomic stability of organic phenomenal types despite variation among individual exemplars."[41] In other words, our Pleistocene ancestors could make certain inferences about every new (previously unencountered) organic specimen if they could recognize it as belonging to a certain category. For example, it would make sense to be wary of any tiger—not just the one that ate your cousin yesterday—because it is in the "nature" of tigers to prey on humans.

 

Moreover, a tiger with three legs would still be perceived as a tiger, not a new three-legged species of animal with unknown properties, because it is in the "nature" of tigers to have four legs and the exception on hand testifies only to the peculiar personal history of this particular exemplar.[42] Or, to turn this point around, perhaps even the fact that different tigers look different strengthens our intuition that "there is something deeper" that they have in common: something non-obvious that they all share and that "holds them together" as one species.[43]

 

According to Atran, for something like two million years such essentialist thinking might have served as a cognitive "shortcut," one that was instrumental in helping our ancestors to orient themselves amid the bewildering variety of natural kinds, including poisonous plants and predators.[44] The attribution of imagined essences was useful for categorization and thus contributed to the survival of the human species. As such it was selected for in thousands of consecutive generations and became a part of our permanent cognitive makeup.

 

In The Essential Child (2003), Gelman argues that our tendency to essentialize might be associated with several cognitive domains and not just with the domain of folk biology (which deals with flora and fauna). As she puts it, "the cognitive capacities that give rise to essentialism are a varied assortment of abilities that emerged for other purposes but inevitably converge in essentialism." Gelman lists a series of independent cognitive capacities, such as the "basic capacity to distinguish appearance from reality," the capacities for "causal determinism" and for "tracking identity over time," that "individually and jointly" may lead to essentialism. Viewed as an outcome of several fundamental cognitive processes, "essentialism is something we do neither because it is 'good' for survival, nor because it is 'bad' for people who are manipulated by essentialist rhetoric. Essentialism is something that we as humans cannot help but do."[45]

 

Note that Atran and Gelman agree on the larger point of seeing essentialism as a "side effect" of other evolved cognitive adaptations. Their difference mainly lies in locating the "proper [that is, original] domain" for essentialist thinking. Atran sees it in folk-biological taxonomies, whereas Gelman finds it in a cluster of cognitive biases not necessarily tied to folk biology.[46]

 

 

4. "A bullet's a bullet's a bullet!"

 

Putting aside the issue of the evolutionary history of essentialism, let us revisit the claim that "all and only living kinds are conceived as physical sorts whose intrinsic Ônatures' are presumed, even if unknown."[47] Different cognitive psychologists may subscribe to stronger or weaker versions of this claim, but they all seem to agree with its key implication: the set of inference procedures used to deal with living things must be quite different from that for dealing with artifacts.

 

But if this is the case, how do we account for the fact that in our everyday lives, we (both children and adults) regularly engage in a broad range of domain-crossing attributions, imputing essences to artifacts (for example, to works of art) and viewing various natural kinds in terms of their functions?[48] Such mental operations are clearly crucial for configuring our relationship with our world. Our language itself is quite sensitive to our domain-crossing tendencies, supplying us with such terms as objectification, commodification, anthropomorphization, personal teleology, and fetishization. (Although it could also be argued that the reason we have such terms is that we need some special vocabulary for describing less habitual cognitive operations. After all, we do not use the term objectification to describe our perception of an artifact—for of course we objectify artifacts.)[49]

 

But, to repeat the question, if we can objectify living kinds (by attributing functions to them) and essentialize artifacts (by attributing essences to them), why insist on the difference between the inferential processes involved in dealing with living kinds and artifacts?

 

Gelman's answer to this is that although essentialized artifacts "will support some novel inferences," these inferences are "quite limited compared to those of natural kind categories." For example, "artifacts can participate in rich causal theories, including those in archaeology and cultural studies . . . , but such theories concern interactions between the object and the larger world, not properties intrinsic to the artifacts themselves."[50]

 

To clarify: the inferences that we make about some essential characteristics of a stranger after having placed him into a certain category concern his personal features. For example—to turn to tigers again—I assume that because tigers prey on humans, this particular tiger might attack me. To put it in more explicitly essentialist terms (something that we do not do in our everyday reasoning—for this is a hidden, if crucial, conceptual step), there is something ineffable, invisible, that all tigers have in common.[51] That essential "tigerness" makes it very likely that this particular tiger will behave in a certain way when confronted with me (especially if it is hungry).

 

In contrast, consider an ancient Roman coin unearthed at an archeological site. We may perceive this coin to be very special. Its long history seems to lend it that ineffable, invisible something that the most meticulously wrought modern copy would not possess. And yet that special something—this degree of essence—is not something intrinsic to such coins in general and this coin in particular. Instead it pertains to the coin's history of having participated in the complex social and cultural networks of a long-gone civilization. In the above case of the tiger, we focus on the tiger itself; in the case of the coin, we focus on what the coin represents—the lost social world.

 

Or think of the first conversation between the nine-year-old Oskar Schell and the hundred-year-old Mr. Black from Jonathan Safran Foer's novel Extremely Loud and Incredibly Close (2005). Oskar has been looking through the things that fill the old man's apartment: "the stuff he'd collected during the wars of his life":

 

There were books in foreign languages, and little statues, and scrolls with pretty paintings, and Coke cans from around the world, and a bunch of rocks on his fireplace mantel, although all of them were common. One fascinating thing was that each rock had a little piece of paper next to it that said where the rock came from, and when it came from, like, "Normandy, 6/19/44," "Hwach'on Dam, 4/09/51," and "Dallas, 11/12/63." That was so fascinating, but one weird thing was that there were lots of bullets on the mantel, too, and they didn't have little pieces of paper next to them. I asked him how he knew which was which. "A bullet's a bullet's a bullet!" he said. "But isn't a rock a rock?" I asked. He said, "Of course not!" I thought I understood him, but I wasn't positive.[52]

 

An important context for this strange conversation about rocks and bullets is that Oskar finds Mr. Black "amazing" and is "keeping a list in [his] head of things [he] could do to be more like him."[53] Hence the artifacts that surround Mr. Black are worthy of Oskar's attention because they can be perceived as bearing some mark of the old man's personality: to put it in terms of our discussion, some of his essence has rubbed off on them.

 

Think now of Atran's observation that we hierarchize natural kinds but not artifacts, and observe how Foer's text makes it possible for us to hierarchize both rocks and bullets and hence to argue about their relative importance in Mr. Black's personal past. We can say, for example, that rocks bear a stronger mark of Mr. Black's personality because they bring back the memories of places (as the narrative seems to imply); or we can disagree with this view and say that bullets provide a more intimate link to Mr. Black because they bring back the memories of life-and-death situations and some of them may have even been extracted from Mr. Black's own body. Hence one can argue that, unlike the rocks, bullets don't even need identifying tags: they are special on their own. Again, the fact that such an argument does not strike us as absurd demonstrates that the text has succeeded in essentializing these objects, at least to some degree.

 

Note though that in the final count there is nothing special about either rocks or bullets—they are important only in so far as they connect Oskar (and us) to Mr. Black. Were Foer so inclined, he could have built his little hierarchical narrative around other items on the fireplace mantel, say, those "Coke cans from around the world," showing how each Coke can brings back memories of a specific episode in the old man's eventful life.

 

Compare this with the case of the excavated Roman coin. There is nothing special about it by itself—it could have been a shard—it is the degree and quality of its relationship with the person we are interested in (say, an ancient Roman merchant) that imbue it with a unique significance. To repeat: if you essentialize a natural kind, you end up understanding something (whether correctly or not) about the particular specimen in front of you (e.g., a tiger); but if you essentialize an artifact, you end up understanding something not about this artifact (e.g., a bullet) but about the man who keeps this bullet on his mantel (Mr. Black). Hence Gelman's point that the range of inferences you can make about the artifact by essentializing it is much narrower than the range of inferences you can make about, say, a human being, by essentializing him or her.

 

 

5. Talk to the Door Politely or Tickle It in Exactly the Right Place

 

Here is another way to illustrate the point that, although we constantly and casually cross the conceptual domains of artifacts and living kinds, the inferences that we can draw from an essentialized artifact are peculiarly limited. Consider a veritable storehouse of such conceptual hybrids: J. K. Rowling's Harry Potter series. At one point in Harry Potter and the Chamber of Secrets (1998), a boy named Draco Malfoy flatters one of his teachers, Professor Snape, leaving the latter rather pleased.[54] Even if you have never read any Harry Potter books, you can immediately infer, based on my intentionally stripped-of-any-details account of this episode, that the man named Snape needs to breathe, eat, and sleep; that he can talk on various topics, and that he has a broad range of states of mind; that he may have a special relationship with a group of people that constitutes his family; that he can learn to knit, to play drums, and to sauté vegetables; and that he can do all these things not just on Tuesdays, or on special magical days, but every day.

 

This list may strike you as silly, but the fact that we never even bother to consciously articulate to ourselves these and thousands of other automatically assumed possible qualities of "Snape" only shows how integral—to the point of not even being consciously perceptible anymore—this inferential process is to our making sense of the world. In other words, when we hear that a human being is flattered, we use our essentializing capacity to assume, even if not consciously, thousands of things about this person because they are in the "nature" of human beings.[55]

 

Now think of the doors at Hogwarts, the school of witchcraft and wizardry attended by Harry Potter and his friends. Like the majority of other artifacts in this school, these doors are magical: they "wouldn't open unless you asked politely, or tickled them in exactly the right place."[56] They thus seem as appreciative of a certain form of flattery and attention as Professor Snape. And we clearly have no problems with imagining this strange object that is simultaneously a door and a creature capable of a very human emotion.[57] (It is also important, of course, that by the time we encounter this object we have been prepared to expect various kinds of conceptual hybrids—that is, we already know that it is that kind of story.)

 

Yet compare the range of inferences we can draw from Snape's susceptibility to flattery to that we might draw from the similar susceptibility of the doors at Hogwarts. Can we assume that these doors need to breathe and eat? And if they are capable of appreciating attention while hanging on the door hinges at Hogwarts, will they have the same capability at a different, nonmagical location?  Can they learn any of the things that human beings can? Do they feel any particular emotions toward their family? And who is the doors' family, anyway? The people who use them regularly? The people who make them? Other doors? The adjacent walls? Furniture made of the same tree?[58]

 

The reason that we have a difficult time answering these questions is that we know that it is in the "nature" of doors to provide or debar access to a different space, and that's it. Doors are largely defined by their current and/or intended function, and it is only because we extend to them some of the inferences that we would automatically make about a flattered human that we are able to attribute certain thoughts, emotions, and desires to this particular set of doors. In other words, a few essentialism-enabled assumptions about human beings do rub off on the flattered doors, but only a few and even those under some duress.[59]

 

It is equally significant that no matter how many times we read about a door that appreciates politeness in a novel, or how many times we hear, for example, of a crying statue in real life, we will never assume that doors ordinarily can be influenced by flattery or that statues regularly cry. Quite a number of cultural representations thrive—that is, attract our attention and interest—precisely because they suggestively violate some sort of important conceptual "boundary" between an artifact and a natural kind, perhaps by forcing upon us the inferences that we generally are not prepared to associate with even a somewhat anthropomorphized artifact.

 

In Part II I will discuss the enduring fascination that we feel toward the ontological hybrids that look and act like human beings and yet have been literally "made," artifact-like, by their creators. We remain, I argue, perennially titillated by robots, cyborgs, and androids because they are brought into the world with a defined "function"—as artifacts usually are—and then rebel against or outgrow that function by seeming to acquire a complex world of human feelings and emotions. What is particularly remarkable about these representations is that they seem to retain their "out-of-the-ordinary" feel no matter how many times we have been exposed to similar conceptual curiosities and no matter how well we have been "prepared" for them by our everyday habit of casual anthropomorphizing.[60]

 

 

6. Resisting Essentialism

 

A broad array of cultural representations and discourses could thus be fruitfully considered as both exploiting and resisting our essentializing proclivities.  Since in the rest of this book I will deal primarily with representations that exploit them and experiment with them (i.e., novels, movies, and paintings), let me now touch briefly on discourses that explicitly resist them.

 

One such discourse is evolutionary theory. There is a long philosophical tradition (going back to Plato and Aristotle) of misinterpreting biology "within an essentialist framework," which means perceiving species as discrete and discontinuous.[61] As Ernst Mayr points out in Populations, Species, and Evolution (1970), the "concepts of unchanging essences and of complete discontinuities between every eidos (type) and all others make genuine evolutionary thinking well-nigh impossible. . . . [The] essentialist philosophies of Plato and Aristotle are incompatible with evolutionary thinking," which focuses on populations instead of types:[62]

 

The assumptions of population thinking are diametrically opposed to those of the typologist. The populationist stresses the uniqueness of everything in the organic world. What is true for the human species, that no two individuals are alike, is equally true for all other species of animals and plants . . . All organisms and organic phenomena are composed of unique features and can be described collectively only in statistical terms. Individuals, or any kind of organic entities, form populations of which we can determine the arithmetic mean and the statistics of variation. Averages are merely statistical abstractions; only the individuals of which the populations are composed have reality. The ultimate conclusions of the population thinker and of the typologist are precisely the opposite. For the typologist, the type (eidos) is real and the variation is illusion, while for the populationist the type (average) is an abstraction and only the variation is real. No two ways of looking at nature could be more different.[63]

 

Mayr further observes that,

 

The replacement of typological thinking by population thinking is perhaps the greatest conceptual revolution that has taken place in biology. Many of the basic concepts of the synthetic theory, such as that of natural selection and that of the population, are meaningless for the typologist. Virtually every major controversy in the field of evolution has been between a typologist and a populationist. Even Darwin, who was more responsible than anyone else for the introduction of population thinking into biology, often slipped back into typological thinking, for instance in his discussions on varieties and species.[64]

 

That Darwin "often slipped back into typological thinking" is particularly striking—and testifies to the powerful grip of essentializing—given that Darwin did more than anybody else to ease the power of that grip on modern science. As Mayr reminds us,

 

Virtually all philosophers up to Darwin's time were essentialists. Whether they were realists or idealists, materialists or nominalists, they all saw species of organisms with the eye of an essentialist. They considered species as . . . defined by constant characteristics and sharply separated from one another by bridgeless gaps. The essentialist philosopher William Whewell stated categorically, 'Species have a real existence in nature, and a transition from one to another does not exist' . . . For John Stuart Mill, species of organisms are natural kinds, . . . and 'kinds are classes between which there is an impassable barrier.'

 

What this means is that "Darwin could have never adopted natural selection as a major theory, even after he had arrived at the principle on a largely empiricist basis, if he had not rejected essentialism."[65] And the fact that even for him this rejection was apparently a continuous conceptual challenge (for otherwise why would he sometimes slip "back into typological thinking"?) makes it less surprising that today we continue to struggle with essentialist "assumptions about category immutability."[66]

 

For, as Gelman reminds us, on some level, it does feel counterintuitive to imagine that categories "transform from one to another, most notably in evolution."[67] This is why children—and in some cases even adults—have trouble appreciating evolutionary accounts of species origins:

 

What appears to be difficult is not the complexity of [the concepts of evolutionary theory], nor the scientific methods underlying the evidence, nor even the technical underpinnings of the work. Rather, even nontechnical concepts such as the following seem almost insurmountable: within-species variability, the lack of any single feature (either morphological or genetic) that is shared by all members of a species, and the lack of biological reality to 'racial' groupings of people. These conceptual difficulties call into question whether true conceptual reorganization takes place, or whether instead we are looking at the coexistence of multiple frameworks. . . . Adults remain susceptible to less obvious but still potent essentialist assumptions. In other words, essentialism is not strictly a childhood construction. It is a framework for organizing our knowledge of the world, a framework that persists throughout life.[68]

 

Gelman's reasoning here raises a series of genuinely difficult issues. First of all, it is easy to misinterpret what she is saying and read into it the assertion that because essentialism "persists throughout life," its specific harmful consequences, such as gender or racial stereotyping, must also persist throughout life. Gelman, of course, points out repeatedly throughout her book that "instantiations of essentialism" are "culture specific. . . . Essentialism is a species-general, universal, inevitable mode of thought, . . . but the form that it takes varies specifically according to the culture at hand, with the basic notion of essentialism becoming elaborated in each culture's complex theories of nature and society."[69]

 

The very fact that essentialism informs very different cultural formations in every society already indicates that whereas our tendency to essentialize may be "inevitable," there is nothing inevitable or unchangeable about each specific instantiation of essentialism. On the contrary, it seems that by understanding how susceptible we are to essentialist reasoning we can successfully "deconstruct" and demystify each instance of such reasoning and see it for what it is—a specific cultural construction parasitic on a more general cognitive predisposition.

 

To add further nuance to the thorny issue of the "inevitability" of essentialism, we may remember that cognitive evolutionary psychologists have argued for some time now that we tend to essentialize abstract concepts themselves.[70] In other words, the very way in which we make sense of such terms as "inevitable" or "inherent" may hinder us from grasping their significance within the evolutionary framework.[71]

 

For example, there is a huge difference between saying that we may have numerous evolved cognitive predispositions that are indeed "inherent" and automatically assuming, as we often do, that the instantiation of these dispositions somehow defines, delimits, and predicts the actual-world outcomes. The latter assumption is completely false, and cognitive evolutionary psychologists are very sensitive to this issue, even though this aspect of their research rarely makes it into the lurid (and heavily essentialist) newspaper accounts of what is or what is not "in our genes."

 

Philosopher Elizabeth Grosz captures this difference aptly when she calls "Darwin's gift to the humanities" the emerging new understanding that within the evolutionary framework, "being is transformed into becoming, essence is transformed into existence, the past and the present are superceded by the future." She further notes that the "sciences which study evolution – evolutionary biology for example – become irremediably linked to the unpredictable, the non-deterministic, the movement of virtuality rather than the predictable regularity that other sciences tended to seek."[72] It may have taken these sciences "more than two thousand years . . . to escape the paralyzing grip of essentialism" and the escape is far from complete, but it is possible.[73]

 

The vision of science escaping the grip of essentialism raises a larger issue about the possibility—or even necessity—of the rest of us escaping that grip once and for all. One useful way of approaching this issue is to keep in mind that the cognitive apparatus underlying essentialism is here to stay: "Evolutionary shifts are incredibly slow and gradual.  Most psychologists who make evolutionary arguments about human behavior assume that what we see now are adaptations to how humans lived hundreds of thousands of years ago (in the Pleistocene)."[74] This is to say that we can certainly gain a better understanding of how essentialism works (and what its evolutionary history might have been), and we may consciously confront and systematically eradicate essentialist reasoning in specific areas (e.g., science, social politics, personal prejudices), and we can analyze the ways in which works of fiction build on and play with our essentialist biases (as I do in the sections to come)—but while this greater understanding may affect some of our specific behaviors, it will not change the cognitive architecture that underlies essentialism.[75]

 

Nor is this all bad. Cognitive structures that underlie essentialism are not perfect, but then none of our cognition is perfect. Evolution, as John Tooby and Leda Cosmides frequently point out, didn't have a crystal ball.[76] It could not "know" what future challenges, say those of a developed industrial society, the adaptations that evolved in response to Pleistocene environments would eventually have to meet.[77] The adaptations that contributed, with statistical reliability, to the survival of the human species for hundreds of thousands of years and thus became part of our permanent cognitive makeup profoundly structure our interaction with the world, but, as Gelman observes, they are not perfectly fine-tuned to the world, nor do they have to be:

 

It's good for us to be mostly right, but we don't have to be thoroughly, entirely right.  We obviously have a lot of reasoning heuristics and biases that work pretty well most of the time, but fail spectacularly in other instances. So I'm not troubled by the argument that we're essentialist even though the world doesn't contain essences.  [Moreover], one would expect to see evolutionary shifts only if they add to our inclusive fitness (i.e., reproductive success).  Again, it's not clear that abandoning essentialism would (on the whole, overall) be a more successful strategy than embracing it, particularly if one considers the cognitive costs involved in adopting a more refined approach.[78]

 

A literary-critical study is obviously not the right place to discuss in detail the cognitive costs of the hypothetical "abandoning" of essentialism. Still, specifically as a literary critic, I would like to point out just one aspect of that cost: fiction thrives on our misguided certainty that there is something essential about people that makes them what they are even if we can never quite nail that crucial "something" down. Mismatches, gaps, failures, and misguided certainties are good for our stories, and the sneakier they are, the better the stories might be.[79]

 

 

7. The Ever-Receding "Essence" of Sosia

 

I sense my soul. I know it by sentiment and by thought. Without knowing what its essence is, I know that it exists.

Jean-Jacques Rousseau, Emile

 

I argue shortly that the reason the audiences of Tuvim and Dryden can immediately appreciate the incongruity of their protagonists' over-reliance on appearances is that both writers have exploited, not consciously but with complete assurance, our evolved cognitive tendency to think of natural kinds in terms of their invisible and yet enduring essences. First, however, I need to clarify how we move from essentializing natural kinds to essentializing individuals, for these two conceptual operations are not completely identical, and the authors in question certainly rely on the latter.

 

Commenting on the difference between kind and individual essentialisms, Gelman observes that kind essentialism "takes one crucial step beyond individual essentialism. With kind essentialism the person assumes that the world is carved into preexisting natural categories" (see fig. 1: "puppies do such things.") By contrast, "individual essentialism seems not to require any such commitment to kind realism."[80]

Thus, you may "believe that your beloved pet Fido" has something special about him, "some ÔFido-ness' that he retains over time, that would show up even if he were to morph into a frog or a human, and that he carries with him after death (e.g., into heaven, or on reincarnation)." Importantly, however, this "essence of Fido would be specific to Fido" and not something "shared with all other dogs."[81] In other words, the features that (we think) compose the "essence" of each individual specimen are not identical to the features that (we think) compose the "essence" of the natural kind to which this individual belongs.

 

Hence, as "Lisa Zunshine" I (am perceived to) have that ineffable special something that makes me me; as a human being I (am perceived to) have a different set of special qualities that align me with other specimens of my kind. So, whereas, on the one hand, "essentializing of individual people recruits much the same cognitive mechanisms as essentializing of natural kinds," on the other hand, given a specific context, we can easily differentiate between the two.[82]

 

Works of literature regularly draw on this ability, as does, for example, Mikhail Bulgakov's anti-Soviet satire The Dog's Heart [Sobachye serdtse] (1925). The novel centers on a scientific experiment during which a dog named Sharik is surgically transformed into a human being, "Mr. Sharikov," through receiving an implant of the pituitary gland and testicles of a dead criminal. Mr. Sharikov exhibits the criminal inclinations of the deceased owner of the gland and testicles, whose individual "essence" has thus survived both physical death and reincarnation into the body of a different species. At the same time, in a darkly hilarious nod to kind essentialism, Mr. Sharikov also possesses qualities presumably typical of any dog, such as an inveterate hatred of cats. (At some point, this budding Soviet proletarian finds employment with the Committee for the Elimination of Stray Cats and is very successful at his job.) We are thus able to make sense of the nuances of Mr. Sharikov's hybrid personality because we are intuitively aware of the distinction between individual and kind essentialism.

 

 

And so now we can say that Plautus and Dryden exploit their audiences' tendency to think in terms of individual essentialism. What they do is particularly striking because they truly try to get at those "hidden, nonobvious properties that impart identity," only to demonstrate, ultimately, the futility of any attempt to define such properties.[83] That is, they systematically go through one attribute after another (one's appearance, character traits, family history, personal memories, actions, and social standing), which in principle could—but, importantly, seem not to—capture the essence of the individual.

 

Tuvim does it as well, but in a more modest fashion, keeping in mind perhaps the young age of his intended readers. Yurgan's neighbor uses the torn pocket on his jacket to clinch the argument about the boy's identity ("See that torn pocket? You yourself are Yurgan!"). The appeal to the torn pocket may transcend the purely external—and hence relatively easily dismissible—considerations of identity because the rent garment can, in fact, indicate something about the "inner nature" of Yurgan: perhaps he is sloppy, or rowdy, or absent-minded. Still, the poem's attempt to ground personality in the ownership of a jacket—even if the jacket has come to express some of its bearer's character—registers in our minds as a delightful play, a wink to the reader, who, young as she is, is already expected to know better.

 

In Plautus's play, Mercury not only mimics Sosia's outward appearance but also appropriates his actions and memories. When Sosia attempts to hold on to his identity by first swearing that he is Sosia and then inquiring,

 

Hasn't our ship just docked from the Port of Persis?

Haven't I come from there with my master's message

Here to our doorstep, my lantern in my hand?[84]

 

Mercury eagerly responds by elaborating Sosia's story and providing details of Amphitryon's sail from Persis and his earlier battle with the Teleboean king. He even describes a war trophy that Amphitryon keeps in a special chest and intends to give to his wife as a present. Sosia is duly impressed by the stranger's knowledge: "I can't refute evidence. I'll just go and look / For another name for myself."[85] In a last desperate bid for his slipping self, Sosia tests Mercury by asking what it was that he, Sosia, did while Amphitryon was fighting the enemy on the battlefield and Sosia was left all alone in the tent—a "bit of action" that the pretender, as Sosia hopes, will "never be able to review." Mercury immediately replies that he took "a jug of wine and filled up [his own] jug"—the correct answer and the one that compels our poor hero to cry out, "But look: if I'm not Sosia, who am I?"[86]

 

Dryden draws out the discussion of what Sosia did "all alone" in the tent, mentioning the "lusty gammon of . . . bacon" that Sosia apparently devoured while hiding from the battle, and thus bringing into sharper relief the plight of the character who can no longer rely on his personal memories to protect his identity. In doing so, he follows and amplifies Molière's version of the Roman original.[87] He also develops further Molière's suggestion that one's origins and family history appear to be as ineffective gatekeepers of one's self as are private memories. Molière's Mercury overwhelms the dumbfounded Sosia with his knowledge of the intricacies of Sosia's family history, as he pontificates:

 

. . . Sosia is my name, with utter certitude,

The son of Davos, skilled in shepherd arts,

The brother of young Harpax, deceased in foreign parts,

The spouse of Cleanthis, the prude

Whose whims will make me lose my mind.[88]

 

Dryden's Mercury also recites Sosia's lineage and compounds it with yet another particular that calls on Sosia's intensely personal—and embarrassing—recollection. Speaking of Sosia's wife, he observes that she, with "a devilish shrew of her tongue and a vixen of her hands . . . , keeps [him] to hard duty abed, and beats [him] every morning when [he has] risen from her side without having first . . ."—and here Sosia hastens to interrupt, cutting the story short by saying that he understands him "by many a sorrowful token." (I take these tokens to be the bruises that Sosia incurs when he attempts to shirk morning sex with his wife.) Sosia then adds in a melancholy aside, "This must be I."[89] He seems by now to be deprived of whatever personal identity could be conferred by genealogy, memory, and sexual history.

 

            The social self is the one to which Sosia continues to cling fast. In Plautus's play, Amphitryon remains Sosia's last hope for refuting Mercury's claims to his identity, although this hope is complicated by Sosia's dreams of freedom from servitude. Having exhausted the "tests" that should reveal Mercury's imposture, Sosia decides to go find Amphitryon:

 

Back to the port, now; I must tell my master—

Unless he doesn't know me—why, by Jove,

Then I could shave my head for a freedman's cap.[90]

 

It appears that Sosia's conception of his identity is strongly aligned with his position as Amphitryon's slave. If it so happens that his master does not claim him as his servant, he is truly not Sosia anymore, a proposition which could be loaded with ideological overtones, depending on the historical context of the play. For example, in the case of Dryden's Amphitryon, Sosia's temporary suspension of his identity—until he hears from his master—might be taken as an ironic political comment on the mentality of some of the members of his audience.[91] (Though, knowing how much Dryden liked to provoke without committing himself to any definite political stance, it just as well might not be.)

 

What is important for my present argument is that, however ideologically suggestive we may find the idea of defining a person's self by his social position, this definition is never quite satisfactory either. We may say that different cultures vary in the degree to which they correlate people's personal identity with their social standing. Such correlation might have been significantly stronger, for example, in eighteenth-century England than it is in the twenty-first-century United States. Still, I suspect that even in those cultures in which this correlation is very strong—born of a long tradition and a powerful doctrinal imposition—it is still riddled with cognitive tension that may eventually translate into political action.

 

And, indeed, when Dryden's Sosia does meet his master, who immediately and familiarly threatens him with beatings—a welcome that, according to Sosia's own earlier stipulation, should leave no doubt in his mind that he is Sosia—it fails to convince him. Sosia readily concedes that he is "but a slave" and that Amphitryon is "his master," but his freshly reconfirmed social status does nothing to restore his belief that he is indeed the only true Sosia.[92] The memory of past physical abuse at Mercury's hands, coupled with the evidence of his senses, overrides the testimony provided by the actions of his master. Here is Sosia explaining to the incredulous and angry Amphitryon how he came to think that the other Sosia is he and perhaps even better than he:

 

Sosia. I could never have believed it myself, if I had not been well-beaten into it. But a cudgel, you know, is a convincing argument in a brawny fist. What shall I say, but that I was compelled at last to acknowledge myself? I found that he was very I, without fraud, cozen, or deceit. Besides, I viewed myself as in the mirror from head to foot. He was handsome, of a noble presence, a charming air, loose and free in all his motions—and saw he was so much I, that I should have reason to be better satisfied with my own person, if his hands had not been a little of the heaviest.[93]

 

Sosia's reasoning is funny, not least because of his grudging admiration for the other Sosia and his sly intimation that his "absent" self is apparently a gentleman, with "a noble presence" and freedom "in all his motions."[94] It almost seems that Sosia does not mind "sharing" his identity as long as his double's apparently elevated social status reflects well on him.

 

This scene thus works on many different levels, and the "cognitive" reading that I am offering here does not claim to account for its complicated overall effect on a specific audience. Still, I suspect that at least one reason that viewers have been enjoying Sosia's ludicrous exchanges with the false Sosia and with his master for the last twenty-two centuries is that these exchanges tease and "work out" our evolved cognitive tendency to essentialize individuals. Plautus, Molière, and Dryden obligingly offer up for our consideration various personal qualities that seem able to capture a person's "essence" and then invite the audience to laugh at the naïveté of any character who buys such a reductionist reading of his identity. The laughter, however, barely covers, and is made more poignant by, a certain amount of anxiety.

 

Here is why. On the one hand, viewers are reminded, as they witness Sosia's misadventures, that one's appearance certainly does not define one's identity; one's name certainly does not define one's identity; one's social standing does not quite define identity; one's memories do not quite define identity; one's origins do not quite define identity; one's actions do not quite define identity—although in different cultural-historical settings, each of these "nondefining qualities" would be weighed differently in relation to the others.

 

On the other hand, precisely because the "essences" that we attribute to individuals cannot be captured—for thinking that there is an essence is a function of our cognitive makeup rather than a reflection of the actual state of affairs—some nervousness will always accompany any failed endeavor to capture the "core" of the person. Certain "what ifs" will perpetually hover over such endeavors, and one can easily think of cultural discourses and practices that seem to vindicate those "what ifs."

 

For example: What if the person's appearance really expresses something crucial about her core being, and we simply haven't yet found the correct way to map one onto the other? The persistence of various sumptuary laws, such as the English Sumptuary Statutes, enforced from the fourteenth to the seventeenth century, reflects, at least in part, this nagging suspicion. A sub-version of this particular what if is the notion that you are what you eat; sumptuary laws, after all, governed not just people's dress but also their diet. Although we may no longer share the following sentiment, we certainly see how in Shakespeare's times people could believe that the "roast beef of Olde England was character-building food, stout-fare for stouthearted men, while it was widely presumed that a vegetable diet made men weak, timorous, and effeminate."[95]

 

Another example: what if one's memory is all that truly differentiates one person from another? Numerous science fiction stories as well as movies such as Total Recall play with this idea.

 

Or, what if one's actions "define" one's identity? Think of Saul A. Kripke's argument in Naming and Necessity (1980) that if "Hitler had never come to power, Hitler would not have had property which . . . we use to fix the reference of his name." For Kripke, Hitler's "most important properties" consist in his "murderous political role," just as "Aristotle's most important properties consist in his philosophical work." Of course Kripke immediately adds that "important properties of an object need not be essential, unless 'importance' is used as a synonym for essence," but the very fact that he feels compelled to make this distinction shows that this kind of importance can indeed be casually used as a stand-in for essence.[96]

 

Or, what if one's social class truly determines what one is? A broad range of Marxist arguments build on this notion.

 

Or, what if one's origins determine it? To make such an argument today one would use, or rather misuse, research on DNA and genes. But even apart from misinterpreting genetics, the argument that one's parentage gets very close to capturing one's essence can be difficult to refute. Again, Kripke acknowledges this difficulty when he asks whether the Queen of England—"this woman herself"—could "have been born of different parents from the parents from whom she actually came? Could she, let's say, have been the daughter instead of Mr. and Mrs. Truman?" His answer is that, whereas we can imagine numerous scenarios in which "various things in [Elizabeth's] life could have changed, . . . what is harder to imagine is her being born of different parents. It seems to [him] that anything coming from different origins would not be this [woman]."[97] Mercury thus must have dealt Sosia quite a blow when he told him that he is the "son of Davos, skilled in shepherd arts" and the "brother of young Harpax, deceased in foreign parts." If Mercury is that father's son and that man's brother, then he must be Sosia.

 

Yet another what if is a combination of several what ifs. One can say, for example, that what makes Sosia Sosia is not just that he looks like Sosia and wears his apparel, that he is the son of Davos and brother of Harpax, and that he is Amphitryon's servant, and that he used his time alone in the tent to steal wine, but that all these qualities come together in one individual. To adapt Kripke's argument, the essence of the person would thus be determined "not by a single description but by some cluster or family." Of course, as Kripke points out, such reliance on the "bundle of qualities" is just as illusory: if a quality that we seek to define—such as the essence of a person—is "an abstract object," then "a bundle of qualities is an object of an even higher degree of abstraction."[98] As such it can hardly capture anything.

 

Particularly if there is nothing to capture. But, because we (cannot help but) assume that some essence is there, our failure to capture it—again and again and again—does not invalidate our implicit belief in it. Instead, this failure fosters the continuous uncertainty about what does or does not truly express the "core" of a person, an uncertainty that takes myriad cultural forms. Let us see now how a staged exploration of this uncertainty—particularly when it takes a comic form—can do more than just titillate the audience's essentialist biases. Let us see how it can further engage these biases by presenting viewers with a visual embodiment of a conceptual conundrum—two people who look exactly the same and hence might share the same essence. (Mightn't they?)

 

 

8. Identical Twins and Theater

 

[. . . ]

           

9. How Is Mr. Darcy Different from Colin Firth?

 

"I am sure I'm not Ada," [Alice] said, "for her hair goes in such long ringlets, and mine doesn't go in ringlets at all; and I'm sure I can't be Mabel, for I know all sorts of things, and she, oh! she knows such a very little! Besides, she's she, and I'm I, and—oh, dear, how puzzling it all is!"

Lewis Carroll, Alice in Wonderland (1865)

 

I have focused so far on two recurrent themes that build on and play with our essentialist proclivities—the twin motif and the motif of comically mislaid identity. In the concluding chapters of this part, I want to indicate the range of other narrative engagements with these proclivities by considering examples from several novels and an autobiography.

 

I want to show, in particular, in what endlessly nuanced way writers exploit both our eagerness to fix an essence of a given character and our readiness to admit that we have failed, once more, to capture that essence. Contradictory as these two mental stances may seem, they are in fact complementary given the nature of the phenomena that they reflect. Because we grasp for an entity that represents a function of our evolved cognitive makeup rather than a real thing that exists in the world, both our hope and our consciousness of failure are inevitable.  Balancing between the two is the stuff our stories are made of.

 

And so to my first example, which comes from Helen Fielding's novel Bridget Jones: The Edge of Reason (1999). Halfway into the story, the irrepressible Bridget flies to Rome to interview the actor Colin Firth, who played Fitzwilliam Darcy in the BBC's 1995 cult production of Austen's Pride and Prejudice. The occasion is a double opportunity for Bridget. If the interview comes off well, this one-time assignment may lead to a "regular interview spot" with the Independent (or so she hopes).[99] More important, Bridget is infatuated with Firth in the role of Darcy. This is her chance to get on Firth's radar, and who knows what that may lead to?

 

Bridget has been instructed by a "bossy man" from the Independent to talk to Firth about his new movie, Fever Pitch (1997).[100] She has been explicitly told not to discuss with Firth his role as Mr. Darcy, for the staff of the Independent is well aware that, given a chance to talk to Firth, any British woman in her thirties would inevitably turn the conversation to Pride and Prejudice. Bridget proceeds accordingly:

 

BJ: What do you see as the main differences and similarities between the character Paul from Fever Pitch and  . . . ?

CF: And?

BJ: (Sheepishly) Mr. Darcy.

. . .

CF: Mr. Darcy is not an Arsenal supporter.

BJ: No.

CF: He is not a schoolteacher.

BJ: No.

CF: He lived nearly two hundred years ago.

BJ: Yes.

CF: Paul in Fever Pitch loves being in a football crowd.

BJ: Yes.

CF: Whereas Mr. Darcy can't even tolerate a country dance. Now. Can we talk about something else that isn't to do with Mr. Darcy?

BJ: Yes.

 

Bridget and Firth then talk about the future of British movies, Firth's Italian girlfriend, Mr. Darcy, Firth's new movie project, Mr. Darcy, Fever Pitch, Firth's Italian girlfriend, and, finally, Mr. Darcy again:

 

BJ: . . . What was it like with your friends when you started being Mr. Darcy?

CF: There were a lot of jokes about it: growling, "Mr. Darcy" over breakfast and so on. There was a brief period when they had to work quite hard to hide their knowledge of who I really was and—

BJ: Hide it from who?

CF: Well, from anyone who suspected that perhaps I was like Mr. Darcy.

BJ: But do you think you're not like Mr. Darcy?

CF: I do think I'm not like Mr. Darcy, yes.

BJ: I think you're exactly like Mr. Darcy.

CF: In what way?

BJ: You talk the same way as him.

CF: Oh, do I?

BJ: You look exactly like him, and I, oh, oh . . .

(Protracted crashing noises followed by sounds of struggle)[101]

 

The interview thus closes with Bridget finally losing the self-control that she has fought hard to maintain for the last forty-five minutes and throwing herself on sexy "Mr. Darcy." The scene is hilarious for many reasons, but note how its jokes and ironies build on our essentialist biases, even though we are not aware of this as we read.

 

To begin with, Bridget seems to commit the same fallacy as Sosia and Yurgan by trying to persuade Firth that since he looks and talks "the same way" as Mr. Darcy, he is "exactly like Mr. Darcy," which, given that Mr. Darcy does not exist, is as close as one can come to saying that Firth "is" Mr. Darcy. We find this logic funny for precisely the same reason that we find Sosia's and Yurgan's logic funny: we know better. We know the difference between the real and the fictive, and we know the difference between "is" and "like." We know that, however long a list of features that Colin Firth shares with Mr. Darcy one may compile (and Bridget's list, incidentally, is about as sophisticated as the one that convinces Yurgan), that list will always fall short.

 

But why is Bridget oblivious of all this? Her brain must be addled by her recent romantic disappointment (involving, of all people, a man called "Mark Darcy"), by too much coffee, nicotine, and alcohol, and by overexposure to pop culture. Such a silly Bridget!—we are delighted with the joke.

 

Not so fast, though. The reason the joke works so well is that it is on us as much as it is on Miss Jones. Or, to put it differently, we are able to recognize it as a joke and appreciate it precisely because it builds on a certain conceptual vulnerability that we all share with Bridget. It just so happens that at this point she is the one who falls victim to this vulnerability (and in such an undignified manner, too).

 

This vulnerability is bound up with our essentialism, of course. To see how it works, let us first reconstruct the literary pedigree of Bridget's comic fall. To do so, we need to return to the point I made earlier when I suggested that had we—you or I—been put in Sosia's position (that is, faced with a pretender to our identity), we would have pegged the evidence of our personal uniqueness on the same conceptual knobs that Sosia does: name, parentage, personal history, memories, and social standing. Some of these knobs would have seemed sturdier than others (say, parentage and personal history[102]), but ultimately none of them would have worked. Nothing would have worked—but what other choices would we have had as we reached out into the physical world to grab what really is our own cognitive bias? Ultimately, we would be left out-argued, and so our last decisive statement about our unique identity would be very similar to the one made by Rousseau about his soul: "I sense my soul. I know it by sentiment and by thought. Without knowing what its essence is, I know that it exists."[103] An obviously irrefutable report of one's emotional intuition, but not what you would call a strong logical or philosophical point.

 

This is to say that it is good that we are never really put in Sosia's position because what his outlandish adventures make clear is that relying on the list of attributes that you think make you you will not win that argument. Ultimately, one has to fall back on one's unarticulated essentialist intuitions: "I know that there is something special that makes me me." Of course, no Mercury would ever let you get away with this. Sadistic deities are all philosophers-in-training.

 

This is what works of fiction do to us, then. They create contexts in which characters are forced to come up with a list of attributes that are supposed to define their essence. These lists are typically ridiculous—from the one in Amphitryon to the one in Alice in Wonderland, quoted in the epigraph to this section—and how can they be anything but ridiculous, given the nature of the entity that they purport to capture and describe?

 

This is, then, the old literary tradition that Fielding works with here. First Colin Firth and then Bridget herself are forced to list features that make people different or the same. Of course, Firth is made to sound ironic when he contrasts his two recent characters: "Darcy is not an Arsenal supporter . . . He is not a schoolteacher . . . He lived nearly two hundred years ago . . . Paul in Fever Pitch loves being in a football crowd . . . Whereas Mr. Darcy can't even tolerate a country dance." Still, ask yourself, what makes this irony possible?[104] Firth is flaunting the superfluousness of his arguments, saying, as it were, "This conversation is silly to begin with because what does it matter that Mr. Darcy is not a schoolteacher like Paul? Even if he were a schoolteacher, he still would have been different from Paul because we know that Mr. Darcy is Mr. Darcy and Paul is Paul—there is something so fundamentally distinct about them that we can list these superficial differences until the cows come home and it would not add any extra information to what we all already know."

 

In other words, the reason that Firth can afford to be ironic is that he is an essentialist talking to another essentialist in a book written by an essentialist for an audience of essentialists. We just know that Paul and Mr. Darcy are different and, in the final count, that's what ought to settle it.

 

So the underlying "cognitive" joke of it all is that Firth may sound ironic and reasonable and Bridget may sound earnest and unreasonable, but both of them appeal to the same essentialist biases. Only Firth relies on these biases to prove-as-it-were-without-proving that there is something fundamentally different between Mr. Darcy and Paul (and then himself and Mr. Darcy) whereas Bridget relies on them to prove that there is something fundamentally the same about Firth and Mr. Darcy. I have already spelled out Firth's unspoken essentialist argument; let me now spell out Bridget's:

 

            Let us say we list all the qualities that we can think of which make Colin Firth different from the personage he plays, Mr. Darcy. So after we are done with every single personal attribute, external and internal, which could be described in any human language, and after we have addressed the questions of "reality" and "fiction," can't it still be said that perhaps deep down (it's always deep, isn't it?—has to be deep, for this is how our essentialism works), there is still something ungraspable, indescribable, elusive, but oh-so-important that these two have in common (that they almost physically share in some mysterious fashion), that if only we could describe that something or bring it out in the open for everybody to see, then everybody would be immediately persuaded and would agree that, yes, this something makes them so profoundly and so uniquely similar that who can say that on some level they are not the same?[105]

 

            You may or may not find this argument persuasive. Note that I am not saying that we actually go around thinking in these terms. I am merely saying that now we can understand why we may find this argument persuasive when it is put before us in this fashion.

 

The suspicion that something is there is what in part sustains the mystique of the movies. Our cognitive predisposition to see living beings in terms of their invisible but enduring essences makes it very difficult—perhaps impossible?—to say that the actor and the character whom he portrays (or the two different characters whom he portrays on different occasions) do not share some essential quality that we have not grasped just yet but may still grasp in the future. (Hope springs eternal for those who look for essences.) And if they do share it, how can we argue that on some very important level they are not the same?

 

This means that we can never settle decisively on either Bridget's or Colin Firth's side of the argument. Because both of them tap our essentialism in their absolute claims—Firth: "I do think I'm not like Mr. Darcy, yes." Bridget: "I think you're exactly like Mr. Darcy"—neither can be proven unambiguously wrong.

 

And then it is up to Fielding's individual readers to decide how well these respective appeals to essentialism serve her characters. I would say that, in protecting his own as well as Paul's and Mr. Darcy's "essential" differences, Firth comes off as a bit too stuffy (which, ironically, realigns him with Mr. Darcy and thus supports Bridget's view of him). Also, Firth may end up feeling annoyed by Bridget's attack, but, in fact, it could have been much, much worse. After all, the old literary tradition to which the character "Firth" belongs is one in which the characters who are forced to define or defend their unique identity end up either badly beaten (Sosia) or cuckolded (Amphitryon) or sent falling down the rabbit hole (Alice).

 

As to Bridget, who seems to be the butt of the joke in this scene, her essentializing may have actually taken her someplace nice. Because for a short while she can't tell (refuses to tell?) the difference between the real man and the fictional character, she briefly gets to inhabit a dream world in which she can talk to, and even have close (though admittedly brief) physical contact with, the man of her fantasy: Colin Firth—Fitzwilliam Darcy—Colin Darcy—Fitzwilliam Firth—movie star—romantic novel protagonist. Not a bad bargain, on the whole. Essentialism has its moments.

 

 



[1] Parts of the following discussion have originally appeared in  my "Essentialism and Comedy."

[2] See, for example, Kripke, Naming and Necessity, 45n13 and 144-46.

[3] All translations of Tuvim are mine.

[4] I know that I could not have been older than seven, for I had not started school yet: in Soviet Russia, children didn't begin formal schooling until they were seven.

[5] Plautus, 25, 26.

[6] Passage and Mantinband, "Preface," n.p.

[7] Quoted in Milhous and Hume, 201.

[8] Quoted in Milhous and Hume, 217-18.

[9] Dryden, Amphitryon, Act II, ll. 305-310, 311-313.

[10] For a useful discussion of such reduction, see Louis Menand's "Dangers Within and Without." As he puts it,

A painting or a novel is a report on experience. There is a huge temptation, which is heavily reinforced by culture, to universalize these reports, to imagine them as uniquely valid accounts of a permanent human nature. This is a position on the road to ornamentalism. A nineteenth-century novel is a report on the nineteenth century; it is not an advice manual for life out there on the twenty-first-century street. But a nineteenth-century novel belongs to the record of human possibility, and in developing tools for understanding the nineteenth-century novel, we are at the same time developing tools for understanding ourselves. These tools are part of the substance of humanistic knowledge. (15)

[11] For a useful overview of the current state of the field of cognitive literary studies, see Alan Richardson, "Studies in Literature and Cognition."

[12] Fuss, xi.

[13] On essences of quasars, see Rips, 43.

[14] Atran, 83. For a related discussion of Aristotelian essentialism, see Atran, Cognitive Foundations of Natural History, 83-122. For a detailed discussion of developmental differences of children's perception of artifacts, see Frank C. Keil's Concepts, Kinds, and Cognitive Development and, more recently, Keil, Marissa L. Greif, and Rebekkah S. Kerner's "A World Apart" and Deborah A. Kelemen and Susan Carey's "The Essence of Artifacts."

[15] Atran, "Strong vs. Weak Adaptationism," 143; Gelman, "Two Insights," 204; 206; 204-5.

[16] Quoted in Atran, Cognitive Foundations, 84.

[17] Rousseau, 276.

[18] For a critique of this experimental setup, see Strevens, 157.

[19] Bloom, 47.

[20] The discussion of Bt and organic farming comes from Taverne, 65-66.

[21] Medin and Ortony, 184.

[22] Note that a log is a potentially ambiguous example. In its "former existence" (as an oak, for instance) it would be classified as an organic object and thus conceptualized in terms of its underlying essence, rather than its function.

[23] Susan A. Gelman, The Essential Child, 246.

[24] Bloom, "Intention," 18.

[25] Bloom, "Intention," 23. See also Bloom's Descartes' Baby, 54-57. Note, too, the joint study of Gelman and Bloom, in which they demonstrate that "intuitions about intent play an important role in how children name artifacts. Even 3-year-olds (the youngest age group tested) take intentionality into account when deciding what to name an object; they are more prone to use an object name when the object is described as purposefully created, and to describe the substance when the object is described as the result of an accident" ("Young Children Are Sensitive to How an Object Was Created," 98-99). See also Gil Diesendruck et al., "Children's Reliance on Creator's Intent in Extending Names for Artifacts." Finally, see Marc D. Hauser and Laurie R. Santos's "The Evolutionary Ancestry of Our Knowledge of Tools" for a useful broader overview of three core groups of views on conceptual acquisition, of which the "Teleo-Functional hypothesis" (as put forth by Keil and extended by Deborah Kelemen) and the "Intentional History perspective" (as championed by Bloom) represent the two subcategories of the larger "domain-specific" view (270-275).

[26] Rips, 44.

[27] Ibid., 46; emphasis in the original. See also Bloom, "Intention," 3-6, for a discussion of experiments carried out by Barbara Malt and E. C. Johnson that similarly problematized the "function-based accounts of artifacts concepts."

[28] Bloom, "Intention," 18. Compare to Barsalou, Sloman, and Chaigneau's argument about the history of artifacts' use: If "a particular hammer might only be used as a paper weight, . . . this non-standard function may dominate the hammer's use history, thereby obscuring its standard role" (134).

[29] Bloom, "Intention," 17 (emphasis in the original); 19.

[30] Bloom, "Intention," 25.

[31] Gelman, The Essential Child, 8.

[32] For a possible, though not uncontroversial, link between the research of Atran, Gelman, Keil, and Bloom and studies in cognitive neuroscience, see Hanna Damasio's recent report that a patient with "a lesion in the left temporal lobe, but located posteriorly in the temporo-occipital junction away from the temporal pole, as well as from Wernicke's and Broca's area, will show a deficit in the retrieval of words denoting manipulable objects, [whereas] the retrieval of words for persons or animals is entirely normal" (11). See also an example from the work of Oliver Sacks, who describes a patient tentatively diagnosed with the neurological disorder known as Posterior Cortical Atrophy, who cannot, in particular, recognize the words that she reads (although her vision as such is perfect). When the patient, Anna H., was introduced to Sacks, she was not able to decipher the words shown to her, such as "cat." She "could, nevertheless, correctly sort them into salient categories, such as 'living' or 'non-living,' even though she had no conscious idea of their meaning" (64).

[33] Gelman, Essential Child, 9, 22. See also Medin and Ortony, 184. For a view on essentialism as a placeholder when applied to artifacts, see Deborah A. Kelemen and Susan Carey's "The Essence of Artifacts." As they put it, "Developmental parallels exist to indicate that just as children have to construct a vitalist understanding of living things (Hatano and Inagaki 1999), along with an understanding of species based on reproductive transmission (Solomon et al. 1996), so too children must construct the design stance—the intentional-historical scheme that makes full sense of artifact kinds in terms of their intended function. In other words, full insight into artifact kinds is not a given. Early in childhood, all essences are placeholder essences, including those for artifacts" (216).

[34] Tumbleson, 61.

[35] Compare to Barbara Tversky's argument in "Form and Function" that we often categorize strangers (but not people we know) in "nearly purely functional" terms: "corporate lawyer, supermarket cashier, department store clerk" (340).

[36] Hirschfeld, "The Conceptual Politics," 86.

[37] Astuti, Solomon, and Carey, 5.

[38] Hirschfeld argues, for example, that we "are simply not likely to rid ourselves of racialist thinking by denying that [such thinking] is deeply grounded in our conceptual endowment" (Race in the Making, xiii), and he shows how, given specific cultural-historical circumstances, our cognitive propensity for essentializing natural kinds can translate into racism. Although "the races as socially defined . . . do not (even loosely) . . . pick up genuine reproductive populations" (4), essentialism can exploit such properties as skin color to construe social identities as "natural." (For further discussion, see also Bloom, Descartes' Baby, 49-51).

Kurzban and his colleagues have conducted a series of experiments which demonstrate that categorizing individuals by race is not inevitable. They support an alternative hypothesis: that encoding by race is instead a reversible byproduct of cognitive machinery that evolved to detect coalitional alliances. The results of their experiments show that subjects encode coalitional affiliations as a normal part of person representation. "More importantly, when cues of coalitional affiliation no longer track or correspond to race, subjects markedly reduce the extent to which they categorize others by race, and indeed may cease doing so entirely. Despite a lifetime's experience of race as a predictor of social alliance, less than 4 min of exposure to an alternate social world was enough to deflate the tendency to categorize by race. These results suggest that racism may be a volatile and eradicable construct that persists only so long as it is actively maintained through being linked to parallel systems of social alliance" (15387).

In other words, as a "volatile, dynamically updated cognitive variable" that can be but does not have to be used for detecting coalitions, race can be "easily overwritten by new circumstances." As Kurzban and his coauthors suggest, if "the same processes govern categorization outside the laboratory, then the prospects for reducing or even eliminating the widespread tendency to categorize persons by race may be very good indeed" (15391).

Cognitive evolutionary psychology thus provides strong empirical support for the arguments advanced by theorists of cultural studies, such as Stuart Hall, who show that race is a culturally constructed category. In his videotaped lecture at Goldsmiths College, Race: The Floating Signifier (1996), Hall points out that race is not a "matter of color, hair, and bone." It is not fixed and secure in its meaning. Instead it is a "floating signifier"—always constructed by specific historical discourses and cultural circumstances. And, as Kurzban shows, just a four-minute manipulation of those cultural circumstances in the social mini-world of the lab deflates the tendency to see race as a meaningful category. People simply stop reading "color, hair, and bone" (to use Hall's apt expression) as indicative of any "essential" qualities of the members of various teams, once they are "cued" to construct coalitional affiliations based on other personal characteristics.

            Evolutionary psychology similarly bolsters another central claim of Race: The Floating Signifier. Hall is aware of the general appeal of essentialist thinking. He calls breaking down people into groups according to their "essentialized characteristics" a "very profound cultural impulse," a "generative" move, which allows you to "predict whole ranges of behavior." A possible weakness of this argument is that it is not clear why this impulse should be so powerful to begin with. But if we consider essentializing a cognitive impulse—grounded in the particularities of our evolutionary history and thus having stuck with our species for better and for worse—we can see it as no more mysterious or powerful than our tendency to have weak knees. The design of our knee joints reflects our evolutionary history, and our knees do carry us through most of the day, but it is clearly not the best design out there. Essentializing serves us well on some occasions (e.g., by allowing us to make new inferences about our environment) and lets us down quite spectacularly on others; knowing how it could have come about is conceptually liberating.

Moreover, here is the most important reason why cultural critics would want to recognize racism as parasitic on broader cognitive tendencies involved in our categorization processes. If our predilection for essentializing living beings testifies to our evolutionary heritage rather than to the existence of any actual essences, then any deduction of "essential" personal qualities from "color, hair, and bone" is a priori meaningless. In other words, with its emphasis on cognitive construction of essences, the cognitive-evolutionary perspective proves the strongest weapon in the arsenal of a cultural critic arguing that race is a "floating signifier."

[39] Gelman's study explores at length other important questions that I will not discuss here, because it is simply impossible to do them justice in the present limited space. Among them are the following: What happens if we complicate the experiments involving appearance-changing animals, for example, by showing that we have "replaced" their innards completely? How do we explain the persistence of the popular and scientific misconception that young children show a "bias toward phenomenism (reporting just appearance, for both appearance and reality)" and on the whole perceive the world "solely in terms of surface appearances" (74)? How and when does childhood essentialism become "integrated with scientific and cultural knowledge" (287)? What is the difference between essentializing and categorizing?

[40] Gelman made this point in her reader's report of my book for the Johns Hopkins University Press. For the full articulation of these positions, I refer you to Atran's Cognitive Foundations of Natural History and to Gelman's The Essential Child: Origins of Essentialism in Everyday Thought (2003). I won't be able to do justice to their detailed arguments here, as I will only touch on the points immediately relevant for our present discussion.

[41] Atran, Cognitive Foundations, 63, 6.

[42] Hilary Putnam notes that "A three-legged tiger is still a tiger. Gold in the gaseous state is still gold" ("Is Semantics Possible?" 140). And as Atran puts it, "a three-legged tiger is still presumed to be a quadruped by nature, but a three-legged or bean-bag chair is not, although most chairs are quadrupedal" ("Strong versus Weak Adaptationism," 146).

[43] Gelman, "Reader's Report."

[44] The Pleistocene period, during which most of our current cognitive adaptations evolved, lasted roughly two million years.

[45] Gelman, The Essential Child, 323; 313; 323. See also Michael Strevens's "The Essentialist Aspect of Naïve Theories."

[46] Gelman, The Essential Child, 15. For a useful discussion of the difference between "proper" and "actual" domains in the context of modularity, see Sperber's Explaining Culture and "Modularity and Relevance."

[47] Atran, 6.

[48] See Bloom, Descartes' Baby, 54-57 and Bering, "The Folk Psychology of Souls."

[49] Compare to the argument advanced by Astuti, Solomon, and Carey, who studied the acquisition of folkbiological and folksociological knowledge among the Vezo, in Madagascar. Astuti and her colleagues found that "in spite of the vastly different cultural contexts of development, most Vezo adults, like their North American counterparts, understand that bodily features are determined by a chain of causal mechanisms associated with birth, whereas beliefs are determined by upbringing" (33). What is interesting for my current argument is that when responding to the questionnaire that asked them to correlate certain features of adopted children either with their birth parents or with their adoptive parents, the Vezo participants felt that they needed to comment on some answers but not on others: The distribution of comments indicates that "participants' decisions as to whether a particular judgment was worthy of explanation is significantly associated to whether the judgment departed from the understanding that birth parentage determines bodily traits and that nurture shapes beliefs. This suggests that whenever participants gave judgments that departed from [this] Differentiated pattern, they were aware of the violation implied by their judgments and therefore made the effort of commenting on it" (38-39). I see these findings as relevant to my parenthetical argument because it seems that we need special terms for—that is, we make "the effort of commenting on"—the situations in which artifacts are viewed in terms of their essences (as in "fetishization") and living beings in terms of their functions (as in "objectification").

[50] Gelman, The Essential Child, 49 (my emphasis). In making this point, Gelman engages with and qualifies the influential earlier work on categorization of Eleanor Rosch. For discussion, see Gelman 2003: 49.

[51] But, one may ask, why is this crucial? Why isn't it enough simply to have empirically observed time and again that tigers prey on humans? Why would the assumption that a particular tiger will prey on me require a commitment to anything ineffable or invisible? All it seems to require is having seen tigers prey in the past. To this I answer that unless we commit (however unconsciously) to the notion that there is something that all tigers have in common, there is no way of explaining why we assume that even though all tigers that I have met in the past preyed on humans, the next tiger that I encounter will also prey on humans. (I am grateful to MJ Devaney for posing these probing questions.)

[52] Foer, 155-156.

[53] Foer, 154.

[54] You can find the full account of the interaction between Draco and Snape in J. K. Rowling's Harry Potter and the Chamber of Secrets (267). I have purposely withheld the details of the situation so as to make it as parallel to the following "doors" example as possible.

[55] In Why We Read Fiction, I discuss possible cognitive underpinnings of our ability to treat a fictional character (such as Snape) as a real-life person. For the purposes of the present argument I do not consider this issue at all, for, after all, the doors that I discuss in the next paragraph are not "real" doors either.

[56] Rowling, 132.

[57] For a discussion of conceptual blending, which represents an important alternative framework for approaching hybrid entities, see Turner's The Literary Mind and Fauconnier and Turner's The Way We Think.

[58] One may speculate that the difficulties we experience in defining the doors' "family" may have something to do with what Keil, Greif, and Kerner see as the difference between the way we can easily hierarchize living kinds but not artifacts. As they put it, "There is an immediate and compelling sense that living kinds are embedded in a unique taxonomy that is not arbitrary [as discussed by Atran]. For most artifacts, however, it seems that many alternative hierarchies are possible for the same kinds. Indeed, some artifacts do not seem to fit easily into any hierarchies at all. For example, a fancy stereo system can be placed in a hierarchy of furniture, of electronic devices, or of toys" (237-38).

[59] Compare this to Gelman's observation that three- and four-year-old children treat the "domain of people as special." She has found "that children treated adjectives for people as more powerful than adjectives for nonpeople" ("Two Insights about Naming in the Preschool Child," 51).

[60] It could be argued that in "other" cultures (always a hopeful location), people are much more ready to take their animism, that is, "the transfer of notions of underlying causality from recognized (folk)biological to recognized nonbiological kinds," literally (Atran, Cognitive Foundations, 217). This, however, is not the case, as was demonstrated by Margaret Mead in 1932, Frank Keil in 1979, and Gelman in 1983. For a discussion of their respective findings, see Atran 1990: 216-217.

[61] Gelman, The Essential Child, 285.

[62] Mayr, Populations, 4.

[63] Mayr, "Darwin and the Evolutionary Theory in Biology," 2.

[64] Mayr, Populations, 5.

[65] Quoted in Mayr, One Long Argument, 41; 49.

[66] Gelman, The Essential Child, 66.

[67] Gelman, The Essential Child, 66.

[68] Gelman, The Essential Child, 295.

[69] Gelman, The Essential Child, 283.

[70] For discussion, see Gelman and Wellman's "Insides and essences," 213 and McIntosh, "Cognition and Power." Also, compare to Kripke's argument about essentializing "proper names" (even if he does not call it "essentializing" in that particular context) and "certain terms for natural phenomena, such as 'heat,' 'light,' 'sound,' 'lightning'" as well as certain "corresponding adjectives—'hot,' 'loud,' 'red'" (Naming and Necessity, 134).

[71] Compare this to Porter Abbott's suggestive argument in "Unnarratable Knowledge."

[72] Grosz, Time Travels, 36-38.

[73] Mayr, quoted in Gelman, The Essential Child, 296.

[74] Gelman, email exchange, February 4, 2007.

[75] Compare to Dan Sperber's argument about novel cognitive competencies. As he puts it, "Over historical time, humans have adapted to very diverse natural and man-made environments and have, for this, developed novel cognitive competencies" ("Modularity and Relevance," 54). Developing novel cognitive competencies is very different, however, from developing novel cognitive adaptations.

[76] See Tooby and Cosmides, "The Psychological Foundations of Culture," 53-55; Cosmides and Tooby, "Origins of Domain Specificity," 87, and "From Evolution to Behavior," 293.

[77] For a discussion of the mismatch between the modern environment and the psychological mechanisms adapted to Pleistocene environments, see Öhman and Mineka, "Fears, Phobias, and Preparedness."

[78] Gelman, email exchange, February 4, 2007.

[79] For a related argument on gaps, see Ellen Spolsky, Gaps in Nature. On failures, see Spolsky, "Purposes Mistook."

[80] Gelman, The Essential Child, 152.

[81] Gelman, e-mail communication, September 17, 2003.

[82] Gelman, The Essential Child, 152.

[83] Gelman, The Essential Child, 152.

[84] Plautus, 23.

[85] Ibid., 24.

[86] Ibid., 25.

[87] Dryden, II, l. 295; 296-7.

[88] Passage, 147.

[89] Dryden, II, ll. 248-254.

[90] Plautus, 26.

[91] Dryden, II, ll. 329-335. David Bywaters, for example, makes the observation that "Dryden's Jupiter is a god appropriate to those who chose William as king" (62).

[92] Dryden, III, l.8.

[93] Ibid., III, ll. 126-138.

[94] Ibid., III, l. 152.

[95] Shapin, 82.

[96] Kripke, Naming and Necessity, 75, 77; emphasis in the original. Of course, I am taking certain liberties with Kripke's argument by treating his notion of a name interchangeably with his notion of an identity. My rationale for doing so can be best expressed by Scott Soames's observation in Beyond Rigidity (2002). As Soames sees it, "Nowhere in Naming and Necessity, or anywhere else, does Kripke tell us what the semantic content of a name is; nor does he tell us precisely what proposition is expressed by a sentence containing a name. The perplexing nature of this gap in his analysis may be brought out by the following speculation: If the semantic content of a name is never the same as that of any description, then it seems reasonable to suppose that names don't have descriptive senses, or descriptive semantic contents, at all. Moreover, if names don't have descriptive semantic contents, then it would seem that their only semantic contents are their referents" (6). See also Jesse J. Prinz, who refers to Kripke to illustrate a "standard view . . . that members of a natural kind all share a common underlying essence" (Gut Reactions, 81).

[97] Kripke, Naming and Necessity, 112-13.

[98] Kripke, Naming and Necessity, 31, 52.

[99] Fielding, Bridget Jones, 135.

[100] Fielding, Bridget Jones, 125.

[101] Fielding, Bridget Jones, 138-9, 143.

[102] See my discussion of Kripke's Naming and Necessity in section 7.

[103] Rousseau, 283.

[104] Note how Jonathan Culler's distinction between irony and sarcasm is useful here. As Culler sees it, "irony always offers the possibility of misunderstanding. No sentence is ironic per se. Sarcasm can contain internal inconsistencies which make its purport quite obvious and prevent it from being read except in one way, but for a sentence to be properly ironic it must be possible to imagine some group of readers taking it quite literally" (Structuralist Poetics, 180). In this particular case, Firth's list of attributes is intended as ironic, but it can also be taken quite seriously, because if somebody were forced to delineate the differences between Paul and Mr. Darcy, the differences listed by Firth could certainly be included; in fact, several of them seem to fall into Kripke's category of "essential" properties.

[105] For a suggestive related discussion of "porous boundaries" between fiction and reality, see Skolnick and Bloom, "The Intuitive Cosmology of Fictional Worlds," 84. Also, as Alan Nadel observes, a belief in identical essences is "to some degree requisite to facilitate Olivia's facile transfer of love from Viola to Sebastian" in Shakespeare's Twelfth Night (e-mail communication, October 19, 2007).