Geeks Bearing Gifts: An Argument Against Transhumanism

A friend and I were watching a TED talk on the subject of aging. The speaker was Aubrey de Grey, a British biologist who proposed a simple but fascinating new perspective on the age-old subject of getting old: not as a disease, but as a sort of maintenance problem. At one point he compared the human body to a Ford Model T, whose lifespan can be indefinitely prolonged when it is properly lubricated and its parts are replaced at the appropriate mileage benchmarks. Aging, understood as the natural accumulation of damage as a byproduct of living, is the single greatest cause of death, and fighting it is therefore an intrinsic good… and eminently achievable. We could achieve immortality, de Grey implied—for who would even need to say explicitly that such a goal was good?

“I’m not sure that’s a goal worth having,” I said.

“What?!?” came my friend’s incredulous response. “How could this not be a good thing?”

I wasn’t able to give a good reason for my hesitancy at the time, or at least to articulate the line of reasoning behind my skepticism. It is my goal in this essay to do that.

What is Transhumanism?

Transhumanism is far more than the mere goal of immortality. It is, at its core, a philosophy on what it means to be human—specifically, about the value of being human, of sharing a common experience with others and with our past. Human nature and its limitations are not an intrinsic part of the human experience, according to the transhumanist philosophy; they are merely the defaults that we have adapted to over time. According to transhumanists, we can transcend these limitations and these flaws in human nature.

We can transcend human nature, and become posthuman, or transhuman.

In short, transhumanism is a top-down philosophy of progress, centered around progress towards a more ideal state of human nature that can no longer be properly defined as “human.”

Implicit in the philosophy of transhumanism is a slew of moral evaluations about human nature. Why, after all, would we want to transcend that which is already perfect?

The Presumption of Transhumanism

What I will argue here is not that human beings are already perfect. Whether they are is something I don’t claim to have the knowledge to decide one way or the other. But this is precisely the degree of knowledge that the transhumanist—usually a software developer, a sociologist, an economist, or someone else with a relatively limited sphere of specialized knowledge—necessarily claims to possess in advocating for transhumanism.

As an extreme demonstration of the problem at work here, the ancient Egyptians believed the brain to be a useless organ, and would, upon the owner’s death, remove it via hook through the deceased’s nasal passage. We may think this is a silly notion of the past, easily overcome by the passage of time and the advent of technology. But consider how many people you may know, today, who still believe that you only use 10% of your brain (you in fact use 100%; parts that fall out of use, such as those associated with controlling a limb lost to amputation, are quickly cannibalized by other parts of the brain).

Even within the higher orders of the scientific establishment, there is much about the body, the brain, what motivates and is good for humans, and about our external environment that we simply don’t understand. The part of the brain that contains the most neurons—the cerebellum—we barely understand, aside from knowing that it has something to do with coordination and possibly balance.

The transhumanist is taking up the position of Phaedrus while debating with Socrates (the man who knew he didn’t know) on the subject of writing. One could argue that, perhaps excepting some of the first complex tools, writing was the first transhumanist success. While Phaedrus lauded the benefits of the written language, Socrates argued that writing would in fact diminish people’s memory and make them stupid. I think we would be hard-pressed to argue that writing was a net failure for humanity—to the contrary, it has been of great benefit, for the sake of sharing ideas, building upon them, solidifying cultures, etc. But it would be dishonest to say it has been a complete success without cost; our memories certainly do appear to be worse than they were 3,000 years ago, when it wasn’t uncommon for a Greek poet to recite an entire epic from memory, or for every African boy within a tribe to know by heart his ancestral history and heritage in intimate detail, twenty or even thirty generations back. Our reliance upon writing has diminished our ability to remember, because we don’t have to remember. There is reason to believe we might be better speakers, and perhaps even more perceptive thinkers and observers, without reliance on text as well.

The Dangers of Dependence

Technology is incredibly convenient for us, and is often very useful, but in many ways it makes us more vulnerable by centralizing knowledge. If culture had been preserved organically, in living memory, rather than fibrously, on scrolls, the fire at the Library of Alexandria would have been an inconvenience to the city’s inhabitants, not a tragedy for global culture.

Consider what would happen to the raw functional intelligence of the Western World, by comparison, if just Wikipedia (let’s not get ahead of ourselves and say the whole internet) were to disappear overnight.

Notice that this point need not actually involve the whole internet going offline to be a valid criticism of our reliance upon the internet as an extension of our intelligence. It could just mean one person going through a tunnel underground somewhere and losing reception. We never really know when particular demands upon our strength and intelligence will arise. In an increasingly civilized world, situations where we might be called upon to do something without the aid of technology we have become accustomed to having at our disposal seem increasingly unlikely. And perhaps they really are unlikely; phones have internet, and now watches, even glasses, can do things that computer scientists running room-sized machines in the ’50s and ’60s could barely dream of. But outside of the digital world, the “real world” still exists, and we will still run into it. And it is precisely in these moments of interaction that our survival is most dependent on our ability to succeed.

The dangers of this kind of technological dependence become clear when we look at the close call of the 2007 cyberattack on Estonia, where the entire country was essentially shut down for several days by a botnet army based out of Russia. All major commercial banks, media outlets, and big-name servers were down for the count, all in a highly digitized first-world country.

To clarify—because I can already see the thoughts forming in some readers’ heads: “you’re against technology!”—I most certainly am not against technology. I love reading, to return to a previous example, as I hope you do, since you’re perusing this text. The danger does not lie in the use of technology, but in making one’s survival dependent upon it (more or less where we are today), let alone in establishing our very identity in technological advances (the project of transhumanism) that may or may not be with us, or function properly, when we need them most. It is one thing to learn to read, and to use it as a tool; it is quite another to upload one’s consciousness into a book.

The dangers of failure are not unique to technology; the body fails, after all. Bones break, organs give out, and your brain refuses to remind you where exactly you put your car keys. It is the precise purpose of transhumanism, at least in part, to transcend these very kinds of limitations. But we should not forget that technology fails too. And where the use—the telos—of a drill, for example, is extraordinarily simple, because the task is straightforward, using technology to modify something like the human brain is quite a different order. What is the telos of the brain? What is its function? How does it function? The presumed transhumanist values—perfect memory, a non-violent personality, an IQ of 300, and so on—claim an answer, and posit technological solutions for a problem the transhumanist cannot possibly have a complete grasp of.

We are right to feel more secure relying upon a drill than on the psychoactive drugs that meddle with the mind.

The Psychological Dangers

The brain is a product of evolution, which is to say it is the product of a very, very long process of trial and error, of natural selection. As Daniel Dennett observed, evolution is incredible because it is the process by which complexity can arise from simplicity. The complexity does not need to justify itself to simplicity; if it works, it moves on. If it doesn’t, it dies. As a result of this long and arduous journey through the forge of time, the human brain is motivated by some things and not by others. These motives can justifiably be said to form an “ought” purely from their own existence: “I don’t like the feeling of getting burned; therefore, touching the stove is bad.” This is an organic—a bottom-up—kind of ought.

There is a small section in Bill McKibben’s excellent book “Deep Economy” in which he argues that college is often described by former students as the greatest time of their lives, not intrinsically because of the education going on (though that’s great too), but because college is the one, brief period in which we live in approximately the social setting that our brains were designed to live in. The dorm environment is closer, more like an old tribe, than the isolated, familial suburbs of youth, the cold crush of humanity that is a city, or the bizarre cubicle-world of post-college life. Transhumanism, in its top-down, ideological approach to what we “ought” to be like, how we ought to live, and what ought to motivate us, comes inevitably into conflict with the organic “oughts,” the substance of human happiness and the mysterious drives that motivate us and give us purpose and fulfillment.

The easiest example of this conflict of philosophies is the self-esteem movement, in which psychologists and intellectuals attempted to systematize what makes a student happy and artificially recreate that in order to create happier, more successful students. Their conclusion was that students needed to be built up on a steady diet of encouragement and praise with minimal criticism. But it turned out that what the scientists saw as a cause was merely a correlation: it wasn’t the praise and support that created happy, successful students, but the students’ own sense of accomplishment achieved in the real world by doing things. Children are not as stupid and manipulable as teachers believe, and even when the children didn’t simply see right through the fake praise—praise for merely existing, rather than for something they did—they became fragile and neurotic, living always on the edge of losing the foundations of an identity so easily given to them solely by the subjective opinion of others.

Why use this as an example in an argument about transhumanism? Though there is no technological influence here (another relevant example would be psychoactive drug prescriptions), the philosophical attitude applied to self-improvement is the same: a comprehensive, systematized, top-down command. The ideas of free-market libertarianism are perhaps even more justified when applied to the human mind than they are to the economy, where at least every step of the process is understood at some point by someone (though never by one person), and a comprehensive knowledge is at least theoretically possible. We are nowhere near that level of understanding of the brain, and yet the transhumanist movement is attempting a technologically-driven trailblaze towards what every other utopian ideal has striven for: a top-down idealization that cannot possibly account for all relevant factors. The resulting loss of balance inevitably makes most utopias more dystopian, as in the first, failed version of the Matrix.

Achieving a post-scarcity economy, as many transhumanists strive for (and some even presume to already be the case), might very well kill all motivation and interest for humans to go on living. Achieving immortality might dampen the desire to do anything—after all, why not just do it later? The brain works in funny ways, and the dramatic designs of transhumanist dreamers for the new human condition might very well be self-defeating, where the preservation of the species or the general utility of the population is concerned.


But not all transhumanists are even motivated by such good intentions. I have no doubt that the majority, including one of my good friends who inspired this debate, are very much driven by the desire to protect humanity from eradication. Still, some sub-segment seems almost unconcerned with that issue. Latent in the desire to become post-human is an attitude that is anti-human, one which looks down upon humanity as inferior: for its tribalism, for its hierarchical tendencies, for William Golding’s “darkness of man’s heart,” our tendency towards violence and deception (side note: such people are even more common in the “animal-lovers” community, in which dogs, cats, and virtually all other animals are seen as morally superior to humans by way of their innocence; such people would make excellent Catholics). These transhumanists look upon adaptations mankind has acquired for survival not as mysteries to be solved and understood, so as to better resolve contradictory impulses and desires in a beneficial manner, but as rude roadblocks in the way of their ideal, based upon preconceptions of how humanity ought to be, rather than as it is. The intertwined nature of opposites in the human experience—pleasure and suffering, creation and destruction, life and death—completely eludes them.

Given the grand desires of the broader transhumanist movement, and the impediments that such inconvenient human needs as “freedom” pose to these elaborate designs, I suspect that more and more transhumanists will give in to anti-humanist sentiments, at least until their frustration makes them give up.

But until then, of course, they’ll be plugging away towards the Singularity, towards immortality, towards a post-scarcity economy, and to a great many other wonderful-sounding ideals whose full effects upon the human consciousness, or even chance at survival, can barely be guessed at.

A Partial Caveat

All of that said, I am very much in favor of technological innovation. Individuals striving towards, and experimenting with, such things as artificial intelligence and immortality are not doing any harm. Some of their discoveries are even great. To bring history into focus again, I’ve greatly enjoyed the advent of reading myself, and willingly accept the accompanying opportunity cost in the development of my memory.

Bearing this in mind, there is a great variety of transhumanists who anoint themselves with the label merely for advocating these kinds of technological advances and the exploration of new avenues of self-improvement. These “transhumanists” harbor nothing particularly dangerous, though they will inevitably be wrong most of the time (not because they are stupid, but because human knowledge is limited). They are benign, and greatly beneficial, in fact, because their approach is organic, bottom-up. There is no coercion involved, no grand design that necessitates everybody’s participation, in action or in shared ideals. It allows for natural evolution, rather than forced change. As such, I will not call them transhumanists. They seek first to understand the world, to experiment, explore, and accept changes in the human condition as they arrive—that is to say, as change to the human condition has been going on since time immemorial.

Change, after all, is neither bad nor good intrinsically. We are not the same species now as we were 15,000 years ago, let alone 150,000 years ago; I would say we’ve improved, at least in surviving and thriving in the habitat we’ve been given. We will be different again 15,000 years in the future, and as circumstances change, so too will the human creature. Who knows what these differences will be? I don’t know… but neither does the presumptuous transhumanist, who claims that the future of the species, or the demands of some intrinsically good “progress” (towards what?), necessitates that their vision be accepted by all, or else we suffer the consequences.

Jack Donovan’s article on the subject is well worth reading.
