
At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid – to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst-case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focuses on how we might prevent if not predict any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether it might not be better simply to put a halt to artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of not believing in God – to wit, eternal damnation – rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.
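To make the arithmetic behind this style of reasoning explicit, here is a minimal sketch in Python. The probabilities and loss figures are illustrative assumptions of my own, not estimates drawn from the risk literature; the point is simply that once the stake is made large enough, even a vanishingly small probability dominates the calculation.

```python
# Expected-value arithmetic behind low-probability / high-stakes reasoning.
# All numbers below are illustrative assumptions, not estimates from any source.

def expected_loss(probability: float, loss: float) -> float:
    """Expected loss of an outcome = its probability times the size of the loss."""
    return probability * loss

# A mundane risk: fairly likely, but bounded in its damage.
mundane = expected_loss(probability=0.10, loss=1_000)      # -> 100

# An 'existential' risk: extremely unlikely, but the stake is astronomical.
existential = expected_loss(probability=1e-9, loss=1e15)   # -> 1,000,000

print(f"Mundane risk expected loss:     {mundane:,.0f}")
print(f"Existential risk expected loss: {existential:,.0f}")

# On this logic the rare catastrophe dominates the decision -- the same move
# Pascal makes when an infinite stake (damnation) swamps any finite doubt
# about God's existence.
```

Whether such numbers can be meaningfully assigned at all is, of course, exactly what the next paragraph goes on to question.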

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept that it is cracked up to be. I don’t believe it is.

In fact, we would do better to revisit the signature Cold War way of thinking about these matters, which the RAND Corporation strategist Herman Kahn dubbed ‘thinking the unthinkable’. What he had in mind was the aftermath of a thermonuclear war in which, say, 25–50% of the world’s population is wiped out over a relatively short period of time. How do we rebuild humanity under those circumstances? This is not so different from the ‘worst-case scenarios’ proposed nowadays, even under conditions of severe global warming. Kahn’s point was that we need now to come up with the relevant new technologies that would be necessary the day after Doomsday. Moreover, such a strategy was likely to be politically more tractable than trying actively to prevent Doomsday, say, through unilateral nuclear disarmament.

And indeed, we did largely follow Kahn’s advice. And precisely because Doomsday never happened, we ended up in peacetime with the riches that we have come to associate with Silicon Valley, a major beneficiary of US federal largesse during the Cold War. The internet was developed as a distributed communication network in case the more centralized telephone system were taken down during a nuclear attack. This sort of ‘ahead of the curve’ thinking is characteristic of military-based innovation generally. Warfare focuses minds on what’s dispensable and what’s necessary to preserve – and indeed, how to enhance that which is necessary to preserve. It is truly a context in which we can say that ‘necessity is the mother of invention’. Once again, and most importantly, we win even – and especially – if Doomsday never happens.

An interesting economic precedent for this general line of thought, which I have associated with transhumanism’s ‘proactionary principle’, is what the mid-twentieth-century Harvard economic historian Alexander Gerschenkron called ‘the relative advantage of backwardness’. The basic idea is that each successive nation can industrialise more quickly by learning from its predecessors without having to follow in their footsteps. The ‘learning’ amounts to innovating more efficient means of achieving, and often surpassing, the predecessors’ level of development. A post-catastrophic humanity would be in a similar position to benefit from this sense of ‘backwardness’ on a global scale vis-à-vis its pre-catastrophic predecessor.

Doomsday scenarios invariably invite discussions of our species’ ‘resilience’ and ‘adaptability’, but these terms are far from clear. I prefer to start with a distinction drawn in cognitive archaeology between ‘reliable’ and ‘maintainable’ artefacts. Reliable artefacts tend to be ‘overdesigned’, which is to say, they can handle all the anticipated forms of stress, but most of those stresses never happen. Maintainable artefacts tend to be ‘underdesigned’: they make it easy for the user to make replacements when disasters – which are assumed to be unpredictable – strike.

In a sense, ‘resilience’ and ‘adaptability’ could be identified with either position, but the Cold War’s proactionary approach to Doomsday suggests that the latter would be preferable. In other words, we want a society that is not so dependent on the likely scenarios – including the likely negative ones – that we couldn’t cope if a very unlikely, very negative scenario came to pass. Recalling US Defence Secretary Donald Rumsfeld’s game-theoretic formulation, we need to address the ‘unknown unknowns’, not merely the ‘known unknowns’. Good candidates for the relevant ‘unknown unknowns’ are the interaction effects of relatively independent research and societal trends, which, while benign in themselves, may produce malign consequences – call them ‘emergent’, if you wish.

It is now time for social scientists to present both expert and lay subjects with such emergent scenarios and ask them to pinpoint their ‘negativity’: what, in the various scenarios, would be lost that is vital to sustaining the ‘human condition’, however defined? The answers would provide the basis for future innovation policy – namely, to recover if not strengthen these vital features in a new guise. Even if the resulting innovations prove unnecessary in the sense that the Doomsday scenarios don’t come to pass, they will nevertheless make our normal lives better – as has been the long-term effect of the Cold War.

References

Bleed, P. (1986). ‘The optimal design of hunting weapons: Maintainability or reliability?’ American Antiquity 51: 737–47.

Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.

Fuller, S. and Lipinska, V. (2014). The Proactionary Imperative. London: Palgrave (pp. 35–36).

Gerschenkron, A. (1962). Economic Backwardness in Historical Perspective. Cambridge MA: Harvard University Press.

Kahn, H. (1960). On Thermonuclear War. Princeton: Princeton University Press.


“In a letter written in 1871, the Symbolist poet Arthur Rimbaud uttered a phrase that announces the modern age: “‘Je’ est un autre” (“‘I’ is someone else”). Some 69 years later I entered the world as an identical twin, and Rimbaud’s claim has an uncanny truth for me, since I grew up being one of a pair. Even though our friends and family could easily tell us apart, most people could not, and I began life with a blurrier, more fluid sense of my contours than most other folks.”



What if our universe is something like a computer simulation, or a virtual reality, or a video game? The proposition that the universe is actually a computer simulation was furthered in a big way during the 1970s, when John Conway famously showed that if you take a binary system and subject that system to only a few rules (in the case of Conway’s experiment, four), then that system creates something rather peculiar.
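For readers who want to see what ‘only a few rules’ looks like in practice, here is a minimal sketch of Conway’s Game of Life in Python – the cellular automaton the passage alludes to. The grid size, the seed pattern (a glider) and the number of generations shown are arbitrary choices for illustration.

```python
# Minimal sketch of Conway's Game of Life: a binary grid evolved by simple local rules.
# Grid size, seed pattern and number of generations are arbitrary illustrative choices.
from collections import Counter

def step(live):
    """Advance one generation; 'live' is the set of (row, col) cells that are 'on'."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is 'on' next generation if it has exactly 3 live neighbours,
    # or if it is currently 'on' and has exactly 2 live neighbours.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

def show(live, rows=8, cols=8):
    """Print the grid as a small ASCII picture."""
    for r in range(rows):
        print("".join("#" if (r, c) in live else "." for c in range(cols)))
    print()

# Seed with a 'glider', a five-cell pattern that travels diagonally across the grid.
cells = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    show(cells)
    cells = step(cells)
```

Run for a few generations, the glider propagates itself across the grid – a simple instance of the ‘rather peculiar’ behaviour the passage gestures at: complex, self-sustaining structure emerging from a handful of local rules applied to a binary system.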


This piece is dedicated to Stefan Stern, who picked up on – and ran with – a remark I made at this year’s Brain Bar Budapest, concerning the need for a ‘value-added’ account of being ‘human’ in a world in which there are many drivers towards replacing human labour with ever smarter technologies.

In what follows, I assume that ‘human’ can no longer be taken for granted as something that adds value to being-in-the-world. The value needs to be earned; it can’t simply be inherited. For example, according to animal rights activists, ‘value-added’ claims to brand ‘humanity’ amount to an unjustified privileging of the human life-form, whereas artificial intelligence enthusiasts argue that computers will soon surpass humans at the (‘rational’) tasks that we have historically invoked to create distance from animals. I shall be more concerned with the latter threat, as it comes from a more recognizable form of ‘economistic’ logic.

Economics makes an interesting but subtle distinction between ‘price’ and ‘cost’. Price is what you pay upfront, through mutual agreement, to the person selling you something. In contrast, cost consists in the resources that you forfeit by virtue of possessing the thing. Of course, the cost of something includes its price, but typically much more – and much of it experienced only once you’ve come into possession. Thus, we say ‘hidden cost’ but not ‘hidden price’. The difference between price and cost is perhaps most vivid when considering large life-defining purchases, such as a house or a car. In these cases, any hidden costs are presumably offset by ‘benefits’, the things that you originally wanted – or at least approve of after the fact – that follow from possession.

Now, think about the difference between saying, ‘Humanity comes at a price’ and ‘Humanity comes at a cost’. The first phrase suggests what you need to pay your master to acquire freedom, while the second suggests what you need to suffer as you exercise your freedom. The first position has you standing outside the category of ‘human’ but wishing to get in – say, as a prospective resident of a gated community. The second position already identifies you as ‘human’ but perhaps without having fully realized what you had bargained for. The philosophical movement of Existentialism was launched in the mid-20th century by playing with the irony implied in the idea of ‘human emancipation’ – the ease with which the Hell we wish to leave (and hence pay the price) morphs into the Hell we agree to enter (and hence suffer the cost). Thus, our humanity reduces to the leap out of the frying pan of slavery and into the fire of freedom.

In the 21st century, the difference between the price and cost of humanity is being reinvented in a new key, mainly in response to developments – real and anticipated – in artificial intelligence. Today ‘humanity’ is increasingly a boutique item, a ‘value-added’ to products and services which would otherwise be rendered, if not by actual machines, then by humans trying to match machine-based performance standards. Here optimists see ‘efficiency gains’ and pessimists ‘alienated labour’. In either case, ‘humanity comes at a price’ refers to the relative scarcity of what in the past would have been called ‘craftsmanship’. As for ‘humanity comes at a cost’, this alludes to the difficulty of continuing to maintain the relevant markers of the ‘human’, given both changes to humans themselves and improvements in the mechanical reproduction of those changes.

Two prospects are in the offing for the value-added of being human: either (1) to be human is to be the original with which no copy can ever be confused, or (2) to be human is to be the fugitive who is always already planning their escape as other beings catch up. In a religious vein, we might speak of these two prospects as constituting an ‘apophatic anthropology’, that is, a sense of the ‘human’ whose biggest threat is that it might be nailed down. This image was originally invoked in medieval Abrahamic theology to characterize the unbounded nature of divine being: God as the namer who cannot be named.

But in a more secular vein, we can envisage on the horizon two legal regimes, which would allow for the routine demonstration of the ‘value added’ of being human. In the case of (1), the definition of ‘human’ might come to be reduced to intellectual property-style priority disputes, whereby value accrues simply by virtue of showing that one is the originator of something of already proven value. In the case of (2), the ‘human’ might come to define a competitive field in which people routinely try to do something that exceeds the performance standards of non-human entities – and added value attaches to that achievement.

Either – or some combination – of these legal regimes might work to the satisfaction of those fated to live under them. However, what is long gone is any idea that there is an intrinsic ‘value-added’ to being human. Whatever added value there is, it will need to be fought for tooth and nail.

Aristotle is frequently regarded as one of the greatest thinkers of antiquity. So why didn’t he think much of his brain?

In this brief history of the brain, the GPA explores what the great minds of the past thought about thought. And we discover that questions that seem to have obvious answers today were anything but self-evident for the individuals who first tackled them. And, conversely, that sometimes the facts we simply accept as true can be blinding, preventing us from making deeper discoveries about our world and ourselves.


“It has sold millions of copies, is perhaps the greatest novel in the science-fiction canon and Star Wars wouldn’t have existed without it. Frank Herbert’s Dune should endure as a politically relevant fantasy from the Age of Aquarius.”


One of the biggest existential challenges that transhumanists face is that most people don’t believe a word we’re saying, however entertaining they may find us. They think we’re fantasists when in fact we’re talking about a future just over the horizon. Suppose they’re wrong and we are right. What follows? Admittedly, we won’t know this until we inhabit that space ‘just over the horizon’. Nevertheless, it’s not too early to discuss how these naysayers will be regarded, perhaps as a guide to how they should be dealt with now.

So let’s be clear about who these naysayers are. They hold the following views:

1) They believe that they will live no more than 100 years and quite possibly much less.
2) They believe that this limited longevity is not only natural but also desirable, both for themselves and everyone else.
3) They believe that the bigger the change, the more likely the resulting harms will outweigh the benefits.

Now suppose they’re wrong on all three counts. How are we to think about beings who hold such views? Aren’t they the living dead? Indeed. These are people who live in the space of their largely self-imposed limitations, which function as a self-fulfilling prophecy. They are programmed for destruction – not genetically but intellectually. Someone of a more dramatic turn of mind would say that they are suicide bombers trying to manufacture a climate of terror in humanity’s existential horizons. They roam the Earth as death-waiting-to-happen. This much is clear: if you’re a transhumanist, ordinary people are zombies.

Zombies are normally seen as either externally revived corpses or bodies in a state between life and death – what Catholics call ‘purgatory’. In both cases, they remain on Earth through no will of their own. So how does one deal with zombies, especially when they are the majority of the population? There are three general options:

1) You kill them, once and for all.
2) You avoid them.
3) You enable them to be fully alive.

The decision here is not as straightforward as it might seem, because the prima facie easiest option (2) requires that there be no resource implications. But of course, zombies require living humans (i.e. potential transhumans) in order to exist in the manner they do, which in turn makes the zombies dangerous; hence (1) has always proved such an attractive option for dealing with zombies. After all, it is difficult to dedicate the resources needed to secure the transhumanist goal of indefinite longevity if there are zombies trying to anchor your existential horizons in the present to make their own lives as easy as possible.

This kind of problem normally arises in the context of ecological sustainability as ‘care for future generations’: our greedy habits of mass consumption blind us to the long-term damage such consumption does to the environment. However, the relevant sense of ‘care’ in the transhumanist case relates to sustaining the investment base needed to reach a state of indefinite longevity. It may require diverting public resources from seemingly more pressing needs, such as having a strong national defence – as the US Transhumanist Party presidential candidate Zoltan Istvan thinks. It is certainly true that if people routinely lived indefinitely, then the existential character of ‘the horror of war’ would be considerably reduced, which may in turn decrease both the likelihood and cost of war. Well, maybe…

So what about option (3), which is probably the one that most of us would find most palatable, at least in principle?

Here there is a serious public relations problem, one not so different from that faced by development aid workers trying to persuade ‘underdeveloped’ peoples that their lives would be appreciably improved by allowing their societies to be radically restructured so as to double life expectancy from 40 to 80. While such societies are by no means perfect and may require significant change to deliver what they promise their members, the doubling of life expectancy would mean a radical shift in the rhythm of their individual and collective life cycles – which could prove quite threatening to their sense of identity.

Of course, the existential costs suggested here may be overstated, especially in a world where even poor people have decent access to global trends. Nevertheless, the chequered history of development aid since the formal end of Imperialism suggests that there is little political will – at least on the part of Western nations – to invest the human and financial capital needed to persuade people in developing countries that greater longevity is in their own long-term interest, and not simply a pretext to have them work longer for someone else.

The lesson for us lies in the question: How can we persuade people that extending their lives is qualitatively different from simply extending their zombiehood?