
James Burton: Posthuman or Postandroid: Philip K. Dick’s Androidization


Philip K. Dick is known for his deconstructions of the opposition android/human. In an often-cited speech given in Vancouver in 1972, he suggested that in his earlier writing he had presented robots, simulacra, androids and the like as artificial forms “masquerading as humans”—but that he had since come to realise that this theme was now obsolete. For “the constructs do not mimic humans; they are, in many deep ways, actually human already.” He talks of “a gradual merging of the general nature of human activity and function into the activity and function of what we humans have built and surrounded ourselves with” (Dick 1995: 183). A little further on in the speech, he imagines a future scenario in which a human shoots a robot; as it falls to the ground, the robot shoots the man. The robot lies there bleeding to death, while a wisp of smoke comes out of the mechanism where the man’s heart should have been. Both are surprised by this.

This deceptively simple narrative reveals an unexpected degree of complexity when we consider the series of inversions of perspective it enacts, and the ways it embeds different levels of observation within one another. In so doing, it can be said to function as a condensed form of the more extended process by which Dick undermines the status of the human in many of his novels. The surprise of both actors suggests that each recognises they have committed a special kind of category mistake[1]—one involving and revealed by self-referentiality and self-observation, in that it concerns the categories in which they had taken their own identities or ontological/biological status to be contained. The mutual surprise also serves to underscore the unexpected affinity between the two actors, in that each actor's conception of its own identity is bound up with the way it understands that of the other. If my definition of a living human is based on a biological conception involving characteristics such as a cardiovascular circulatory system, this definition may be challenged when I am confronted by either a human who does not bleed, or a humanoid machine who does; and the same is simultaneously true for a being who identifies itself in part through the absence of biological systems within its constitution. In other words, both actors are subject to the same category mistake and the same surprise that they might belong to the same category—their shared capacity for being surprised itself underscoring and replicating that which they are surprised at. At a further level of remove, our implication within the scenario as second-order or even third-order observers (for one could argue that the actors in the story become second-order observers of themselves), who likely share with the actors a certain interest in our own ontological or biological status, invites us to question our own human/machine categories. Indeed, this micro-story shares a property with so many of Dick's longer fictions, and arguably with his thinking in general (e.g. as manifest in the "Exegesis"): it invites us to go on extrapolating to yet another, and then another, level of context and observation, until we find that one of these larger contexts has placed us back where we started (but freshly so, the experience of having gone through the circuit now having an effect). The very terms in which I (re)describe Dick's robot/human shootout narrative have a systems-theoretical and neocybernetic inflection, echoing the historical and epistemological shifts involving the increasing imbrication and inextricability of human and technical life in the postwar era, shifts which are already, to a large extent, Dick's subject-matter in the talk in which that shootout appears.[2]

Yet it's worth emphasising that Dick's undermining of categories such as "reality" or "the human" is never undertaken as a purely intellectual game, a playing with category and paradox for its own sake. With Dick, the android/human question—much as with Donna Haraway's cyborgs—is always-already an ethical and political question, rather than a matter of seeking to define physically or biologically what constitutes a human and what does not. What matters most to Dick is what he calls the process of androidization. Echoing various philosophical and political registers in which such processes have been elaborated, from Marxian accounts of alienation, through Heidegger's account of modern technology's enframing of life as "standing reserve" [Bestand] (the human reduced to "human resource"), to Agamben's discussions of "bare life," this is the way the living are transformed into "instruments, means rather than ends […] in the sense that although biological life continues, metabolism goes on, the soul—for lack of a better term—is no longer there or at least no longer active" (Dick 1995: 187). Becoming-android, or being androidized, means becoming predictable, obedient. In this sense (to give just one brief contemporary example) one would say that Google's algorithms "androidize" you not by integrating information networks ever more deeply into your daily life, but by rendering your activity, your habits, your free choices, predictable, quantifiable, and ultimately manipulable (cf. Epstein and Robertson 2015)—as well as making them useful for other purposes beyond the intentional/conscious sphere which you took to be their context. In the same speech, Dick gives a similar example from his own era, which illustrates how, from a communications technology perspective, this type of androidization can come to be presented as desirable. He cites the vision of Harold Osborne, former chief engineer of AT&T:

“Whenever a baby is born anywhere in the world, he is given at birth a telephone number for life. As soon as he can talk, he is given a watch-like device with ten little buttons on one side and a screen on the other. When he wishes to talk with anyone in the world, he will pull out the device and punch on the keys the number. Then, turning the device over, he will hear the voice of his friend and see his face on the screen, in color and in three dimensions. If he does not see him and hear him, he will know that his friend is dead.” (cited in Dick 1995)

Examples like this do not simply highlight the historico-social process of androidization (or what, in other registers, might be termed mechanization, alienation, dehumanization, etc.): they also remind us that this process, or the tendency not only to permit it but to foster or enact it, is a common, if not defining, trait of what we have tended to think of as human (i.e. human being, nature, society, etc.). According to Dick's revised definition of androidization, it would necessarily be a quality of the creatures we ("humans") have generally tended to identify as "humans" throughout their existence: it has always been possible, and in general common, for humans to act "inhumanly" (which thus means very humanly) towards one another—to mechanize, to exploit, to subjugate, to treat as instruments, as means rather than ends—in the process themselves becoming machine-like, without empathy, deterministic and ruthless. This is why slavery has seemingly always been—and still is—such a pervasive feature of human society.

This is also why, I think, we can locate androidization in the earliest science fiction—by which I don't mean, just for the moment, Jules Verne and H.G. Wells, or even Frankenstein; I'm thinking, rather, of ancient cosmogonies—the myths human cultures created millennia ago about their own origins. For instance, whereas the Book of Genesis tells us that God created Man (sic) in his (sic) own image, older cosmogonies such as Hesiod's present humans as fundamentally labour-machines, destined to toil throughout their existence—anticipating our modern sci-fi narratives of the creation of androids, robots and replicants as labour-saving devices. This relationship is explicit in the nearly 4,000-year-old Mesopotamian flood narrative Atrahasis, an influence on the much later biblical story of the Flood: here, humans are constructed by the gods precisely in order to carry out work on their behalf. When the humans become too populous and noisy, the tyrannical head of the gods, Enlil, plans the flood to wipe them out; but the kinder god Enki intercedes and has the human Atrahasis construct a huge boat to save as many people as possible.

In such accounts, humans are effectively constructed as "android" from the outset, in the sense that they are artificial constructs conceived as labour-saving devices for a race that regards itself as superior, and in that they approximate, but do not fully correspond to, humans' dominant (self-)definitions of the human: they are, rather, human-like—the literal meaning of "android."[3] It is only by the time such narratives have been religiously reworked that Genesis has God creating the human in his own image as god-like, theoid (cf. Burton 2015: 157-161).

Where Enki and Atrahasis show humans to have some kind of value over and above their instrumental usefulness, Noah's covenant with God and Deucalion's survival of the flood sent by Zeus in response to an assault on the gods (as narrated in Ovid's cosmogony) see humanity rise above the base, warlike, destructive nature that has come to characterize it. In different ways, the stories all tell of humans overcoming—and thus position contemporary humans as having long ago overcome—their "android-ness," their inhuman or less-than-human character—having become, we may say, "post-android."

If so many of these cosmogonies present humans as having overcome their android natures, the modern dream of the artificial humanoid may be considered a kind of return to this ancient set of concerns, rather than something entirely new, belonging to a modern, hyper-technological era. Almost as soon as this dream begins to be fleshed out in narrative and practice, in science fiction as well as robotics, artificial intelligence research, cybernetics, etc., it is recast as a nightmare. For, almost inevitably, the expectation arises that the androids will at some point revolt, rejecting precisely the android—i.e. subhuman—status imposed on them by their creators, who have unjustifiably (given their continued and long-running oppressive actions) attributed to themselves a theoid status. In narratives dealing with such scenarios, from Karel Čapek's 1920 play R.U.R., through both Battlestar Galactica TV series and the Terminator films, to Dan Simmons's "Hyperion" novels and recent films like Transcendence (2014) and Avengers: Age of Ultron (2015), the threatened extinction of the human race becomes a central concern. A common (though certainly not ubiquitous) effect of such narratives is to re-enshrine humanity's self-conception as having a unique, ultimately superior metaphysical and ethical status that is to be defended at all costs. Dick's science-fictional treatments of the android/human theme, however, more than most, point to the illusory character of the human's self-observation or self-description as already post- or non-android. In novels such as Martian Time-Slip (1964), The Three Stigmata of Palmer Eldritch (1965), The Simulacra (1964), Do Androids Dream of Electric Sheep? (1968) and We Can Build You (1972), not to mention countless short stories, Dick uses the figure of the android to explore the process of androidization as an always-already "human" trait which is only very rarely risen above—rather than, as is more often the case, to defer or transfer it onto one or many nonhuman adversaries. And when an entity in his fiction does display post-android qualities that come close to the ideal of the human, it is often an entity that most established biological or material categories would exclude from the realm of the human—such as the robotic poet-theologian Willis of Galactic Pot-Healer (1969), or Lord Running Clam, the Ganymedean slime mold of Clans of the Alphane Moon (1964).

Thus, whereas one way to understand the posthuman is in terms of the entry of the so-called human species into a world of humanoid machines, cyborgs, digitally networked environments, artificial intelligence, as well as hybrid forms of life and other "nonhuman" agencies, from another, more Dickian perspective, one could argue that it is the android that precedes the human, or has historically coincided with it—and that to become posthuman would require first coming closer to the non-androidizing, life-fostering form that the "human" has historically claimed for itself, but seldom attained. In other words, becoming posthuman would mean giving up on the illusion of human superiority, of being "more-than-android" (and indeed more-than-animal)—and in an ethical, rather than purely materialist or ontological sense. Such a (self-)understanding is often identified with a posthumanist perspective. Dick offers us one way (perhaps many ways) of exploring how such a perspective may be conceived not as an entry into a world filled with androids, but as an exit from the one we have been making and remaking for ourselves ever since we began to differentiate ourselves from the rest of the universe.

***

James Burton is a fellow at the Institute for Cultural Inquiry (ICI) in Berlin and the author of The Philosophy of Science Fiction: Henri Bergson and the Fabulations of Philip K. Dick (Bloomsbury, 2015).

Works Cited

Burton, James. The Philosophy of Science Fiction. Henri Bergson and the Fabulations of Philip K. Dick. London: Bloomsbury, 2015.

Clarke, Bruce. Neocybernetics and Narrative. Minneapolis: University of Minnesota Press, 2014.

Dick, Philip K. “The Android and the Human.” 1972. The Shifting Realities of Philip K. Dick. Ed. Lawrence Sutin. New York: Vintage, 1995. 183-210.

Epstein, Robert and Ronald E. Robertson. "The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections." Proceedings of the National Academy of Sciences of the United States of America 112.33 (2015): E4512-E4521. doi: 10.1073/pnas.1419828112.

Ryle, Gilbert. The Concept of Mind. 1949. Chicago: University of Chicago Press, 2000.

Wolfe, Cary. What is Posthumanism? Minneapolis: University of Minnesota Press, 2010.

[1] Or perhaps we should say, the archetypal or original category-mistake, remembering that the purpose of Gilbert Ryle’s introduction of the term in The Concept of Mind (1949) was to reject what he influentially (for both philosophy and, indirectly, science fiction) referred to as the Cartesian dogma of the “Ghost in the Machine”—the notion that the (human) mind has a reality independent of the material body. The term “category-mistake” is thus from the outset geared towards the erasure of one of the fundamental frameworks used to claim an absolute distinction between humans and machines, i.e. the presence or absence of “mind” as something beyond the “intelligent acts” (behaviour) that an actor can be observed performing.

[2] Indeed, for some thinkers, such as Cary Wolfe, it is precisely these characteristics that distinguish "the posthumanist form of meaning, reason and communication" as "a specifically modern form of self-referential recursivity" (Wolfe 2010: xx; original emphasis). Bruce Clarke is implicitly making an equivalent point when he asks whether "our intellectual culture" is ready to embrace the "self-reference that embeds every observer […] within their observations of the other" (Clarke 2014: 87).

[3] In fact "android" would literally denote "man-like" (as opposed to gynoid or androgynoid) – though this is arguably apt given the extent to which such gender bias is historically built into the most dominant definitions of the human.