Transhumanism. Eternal life. It’s a fairy tale, something that isn’t remotely possible in the physical world as we know it. Or is it?

Perhaps the key word is remote.

Human bodies are unquestionably finite; they’re born, they exist, and then they wear out and die, some sooner rather than later. But what if the body itself were only a vessel for the intelligence that resides within it? That’s not such a far-fetched prospect, especially today, with artificial “companions” like Siri and Alexa now helping millions of people get through their everyday tasks. But those are distinctly abstract beings; they were never human. So what is this concept called transhumanism, and how does it relate to artificial intelligence?

Transhumanism is the idea that humans should seek to exist beyond their natural lifespans with the aid of technology. Its proponents consider it the next logical step in using technology to serve humankind’s needs—after all, the promise of eternal life has sustained generations of devout churchgoers. In that case, however, believers have to take it on faith that they’ll have an existence beyond this one.

Transhumanism seeks to eliminate that uncertainty, with the aid of everything from neural interfaces and prosthetics that could extend the body’s natural lifespan, to nanobots that would take up residence in the bloodstream and monitor the health of the host.

This is where the artificial intelligence (AI) aspect kicks in: At what point is the consciousness cut off from its humanity, becoming essentially machine instead? As the transhumanism movement is still relatively new, this is a question that has yet to be fully addressed, much less answered. Those who oppose the philosophy fear that if humans rely on AI to such a degree, the technology will reach “the singularity” (the moment at which AI surpasses human intelligence) that much faster, rendering humanity obsolete—or wiping it out altogether, a scenario long beloved of science fiction.

While that may indeed be a concern, there are other matters that should be considered first. The planet isn’t large enough to sustain a population of undying humans, for example, even if they had evolved (or devolved) into another breed of being. Moral issues aside, technology could be put to better use in the meantime than extending our lifespans.