“Wouldn’t you eventually get bored?” Like clockwork, the question arises when I tell someone, quixotically, arrogantly, that I plan on living forever. From the limited perspective of 20 years, even the prospect of living another six or seven decades in full color can be impossible to envisage. Hedging, I answer that in a world where radical life extension is possible, there is no telling how different the human experience will be from what we know—that is to say, a world where 200-year-olds won’t merely be stuck playing very, very slow mah-jongg.
Quite apart from the tremendous gains in average life expectancy made over the last century, radical life extension is no longer the fringe interest of a few high-tech narcissists. In a private competition sponsored by the Methuselah Foundation, genetic researchers engineered a house mouse to perdure for five years, a dramatic improvement over Mus musculus’ average lifespan of one and a half to two years. If aging is, above all, a material process with discrete genetic and biochemical components, then it is possible—in more than theory—to reverse-engineer and manipulate a process that has bounded and shaped every creature’s existence from time immemorial.
In one sense, the chance to extend our lives through pharmacological supplements, hormone therapy, body part replacements, and cryonics could mark Homo sapiens’ most decisive step away from creaturehood. And unsettling though it may be to many readers, the future of the next few generations will be defined by the moral and policy implications of humanity’s technological lurch toward immortality. Anticipating these developments, it’s important to challenge the basic assumptions about mortality—that there is no life without death, that manipulating nature is fundamentally sinful—while acknowledging how difficult they are to shake.
Life extension specialist and futurist Aubrey de Grey, a freewheeling Englishman with an incomparable beard, is convinced that the first person to make it to 150 has already been born, a prospect not so difficult to imagine in light of the progress made in recent decades toward longer, more fulfilling lifespans. Although it was once common to speak of 70 as a hard-earned, desirable terminus, First World 70-year-olds are more active than ever before; by virtue of conventional medical progress alone, Japanese women now live an average of 86 years.
And although the chasm between 150 and forever seems unbridgeable, the non-linear pace of progress in biotechnology and artificial intelligence suggests that by 2162, most if not all of today’s incipient technologies will be in full bloom. Alcor, the leading commercial provider of cryonic suspension services and famously home to Ted Williams’ head, has to date brought specially treated dogs back to life after extended periods of near-freezing cold. If similarly viable treatments are in the medium-term offing for humans, it is likely that many will elect to go on ice until the next stage of life extension technology comes into play.
But needn’t we die? It is hard to comprehend the deaths of loved ones, millennia of historical generations, and organisms of all sizes without some sort of teleology: for the religious, a metaphysical world-to-come; for the irreligious, an evolutionarily generated cycle of life. Setting our time horizons more conservatively, death remains a constant of which we need to make some sort of sense. However, it’s crucial that we appreciate how much longer the list of unpleasant inevitabilities once was: dangerous childbirth, banditry, autocracy, malnutrition, plague. Call it unnatural, but human ingenuity has done away with scourge after scourge, each once understood as a meaningful, natural part of life’s universal rhythm. And while we might not know exactly how, it’s reasonably clear that life can go on without death.
Returning to the original question—in essence: “Why choose to live forever if forever really just means eternal boredom and senescence?”—it’s apparent that living forever would mean something other than continuing as our current selves. Technology futurists are reasonably certain that at some point in the next century, we’ll be enmeshed in networks of artificial intelligence, bodily modified beyond immediate recognition, and confronted with a new set of identity questions, societal challenges, and existential ambitions.
If I’m fortunate enough to make it to 150, I expect to find a world where caring about ethnic politics in the Middle East, wearing university colors, impressing girls, and investigating my ancestral origins won’t be of much, if any, use. In other words, I expect that I’ll need to invent a new self for a radically new world. More than anything I can imagine, it’ll be a tall order: we have good evolutionary reason to love ourselves to death rather than contemplate being completely reconfigured. It’s a daunting prospect, but it’s anything but boring.
Joshua B. Lipson ’14, a Crimson editorial writer, is a Near Eastern languages and civilizations concentrator in Winthrop House.