
Anticipation, the act of remaining hopeful and patient in expecting a preferred future, has a special place and a critical role in some moral and religious systems of faith. As a personal virtue, its development is shaped by many natural, cultural, social, and educational factors. For an economic agent, however, and for forward-looking decision makers with a more secular worldview in general, the argument in favor of anticipation, and for how much of it is reasonable, is less clear. It is therefore worthwhile to explore when and under which circumstances we should choose anticipation; a convincing argument might be helpful. In this blog post I will build a framework based on game theory to provide a better and deeper insight.

Economists, mathematicians, and to some degree engineers have contributed to the development of game theory. Neoclassical economics assumes that each economic agent behaves rationally. According to prediction models built on this assumption, decision makers tend to maximize profit when they sell goods and services and to maximize utility when they buy. In other words, people naturally seek the best and the most. Moreover, decision making follows the principle of "predict then act": the individual first predicts the likely consequences of the available choices and attributes utilities to them, and then chooses the alternative with the best consequence or the most utility. This camp or school is often called normative decision analysis.

Nonetheless, empirical studies of the behavior of real decision makers demonstrate that, contrary to the predictions of rational models of choice, individuals and economic agents do not always follow the principle of the best and the most. In the 1950s, for instance, Herbert Simon showed that when faced with uncertainty and a lack of information about the future, there are cognitive limits to rationality: contrary to neoclassical economic theory, people do not make decisions rationally and logically in search of the optimal alternative. Instead they seek a combination of satisfaction and sufficing levels of utility, which is called "satisficing". This camp or school is often called behavioral or descriptive decision analysis. To elaborate: no one can claim that in a given decision the best alternative has been chosen, regardless of the choice criteria or the ideal level of utility, because there is always a better alternative than the best one known to us now. That better alternative either already exists beyond our awareness or will appear in the future, but we can never choose it if we do not know about it. In brief, we can at best choose the best element of a subset of the best.

In light of the flaws of actual human decision making, we should recognize both the pros and cons of normative and descriptive decision analysis. Pioneers of decision analysis have therefore attempted to develop a new, integral school that is wise enough to take the natural cognitive limits into account. This camp or school is often called "prescriptive" decision analysis. Its aim is to educate and train better decision makers, both individually and collectively. Our approach here to the question of anticipation is also integral and prescriptive.

The use of games in military planning, strategic thinking, and modern futures studies has a long tradition. One approach is experimental: you need to play the game in order to learn, to capture the complexity, and to encourage creativity in search of new, deep insights. Another approach is analytical: you need to calculate in order to obtain and develop insights that are neither trivial nor intuitive.

When more than one decision maker is involved in a competitive or cooperative decision situation, game theory is used for the decision analysis. Game theory is an analytical-mathematical framework for investigating the decision space in strategic situations that involve conflicts of interests or conflicts of preferred futures. A conflict-of-interest situation arises when two or more decision makers pursue different and opposing goals and no win-win outcome is possible. For example, in international futures, a rising and more assertive China will conflict with the hegemonic USA. Or, in the case of personal futures, several persons may wish to marry the same person and therefore have to compete with each other.

The aim of game theory is to find the optimal strategy for each of the so-called players. Five key assumptions in the theory are:

1. Each player (decision maker) has at least two well-defined alternatives or options of choice. The alternatives are the plays made by the players.
2. Each play ends in a well-defined outcome (win, lose, tie) when the game is over.
3. Each player attributes to each outcome a specific payoff or utility.
4. Each player knows the rules of the game and knows also the scores or utilities for other players per each outcome.
5. Each player plays rationally. That is, faced with at least two alternatives, a player chooses the one with the higher payoff.

However, assumptions 4 and 5 are violated in real-world decision making. In particular, rationality is challenged in the descriptive school, as mentioned above.

Now, after this brief introduction, we can address the question: "Is anticipation a good strategy?"

We first introduce a game with simple rules and then explore the mathematical solution.

The game of rug or carpet is introduced like this:

Two persons are in a bazaar. Someone who has two Oriental rugs approaches and offers them this: I want to give you these two rugs. If both of you accept the offer, I will leave and never return. But if one of you does not accept the offer, I will visit you again tomorrow and bring a package with me. The package contains either a rug or a carpet, with a fifty-fifty chance; the carpet is worth four times the rug. If both of you decline today's offer in anticipation of a carpet tomorrow, then one of you will receive nothing. If one of you declines today's offer in anticipation of tomorrow, then at least one rug will be gained. If one accepts and the other declines, the one who does not wait for tomorrow's package can take both rugs. But whoever accepts today's offer cannot anticipate anything in tomorrow's prospect.

If the game of rug or carpet just described is played in complete competition, and both players express their choices simultaneously without knowing the choice of the other, then the challenging question "is anticipation a good strategy?" has no trivial answer and needs an analytic investigation.

If the players care about their own interests, then what should they do now? Is anticipating a better offer tomorrow preferable under all conditions? More specifically, if your competitor chooses anticipation with a 50% probability, what is the good strategy for you?

In the game of rug or carpet, several factors such as patience, anticipation, risk, and competition are important. The descriptive school of decision science indicates that people are often risk averse with respect to gains and risk prone with respect to losses. In other words, humans prefer a sure gain to a likely gain. This cognitive insight suggests that our players will likely each accept their one rug and the game will be over: no package is offered tomorrow, and no anticipation is needed.

But a normative approach uncovers some points, and taking them into account is helpful in a prescriptive approach, which aims to bridge the gap between rationality and the reality of choice. First, note that if we assume the players do not choose simultaneously, then the second player's optimal choice is the opposite of the first player's. This means that if the first player does not anticipate, then the second player should anticipate tomorrow's offer, and vice versa. Clearly, if the first player does not anticipate and accepts the offer, then the second player obtains one rug by not anticipating, and at least a rug by anticipating; given the likelihood of a carpet tomorrow, anticipation is better.

In the case of simultaneous play, the strategy of choosing the opposite, with "some considerations", is also good. By some considerations we mean a conjecture about the competitor's choice: a player should guess whether the other player will anticipate or not.

Suppose that these players have played this game many times in the past, and player I has noted that player II rarely anticipates. Player I is almost sure that this time, again, player II will not wait in anticipation of tomorrow's package. "Almost sure" is a qualitative description of a probability distribution. Absolutely sure, very sure, almost sure, do not know, almost unlikely, very unlikely, and absolutely unlikely might be interpreted with quantitative estimates such as 100%, 80–99%, 50–80%, 50%, 20–50%, 1–20%, and 0%, respectively.
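The verbal scale above can be captured in a small lookup table. The following Python sketch is only an illustration; in particular, collapsing each range to its midpoint is an assumption of mine, not part of the original scale:

```python
# Qualitative confidence terms mapped to the probability ranges given above.
confidence_scale = {
    "absolutely sure": (1.00, 1.00),
    "very sure": (0.80, 0.99),
    "almost sure": (0.50, 0.80),
    "do not know": (0.50, 0.50),
    "almost unlikely": (0.20, 0.50),
    "very unlikely": (0.01, 0.20),
    "absolutely unlikely": (0.00, 0.00),
}

def point_estimate(term):
    """Collapse a range to its midpoint (an illustrative convention)."""
    low, high = confidence_scale[term]
    return (low + high) / 2

print(point_estimate("almost sure"))  # 0.65
```

A single point estimate like this is what player I needs in order to plug a conjecture into the expected-value calculation that follows.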

Either player should make a conjecture about the other player's choice. There is a specific threshold, which can be calculated, for the probability of the opponent's choice at which the utility of anticipating equals the utility of not anticipating for either player. That probability is the point at which anticipation and not anticipating are indifferent.

To analyze and calculate that threshold probability we order the pairs of utilities like this: (a, a), (a, n), (n, a), and (n, n) where a is anticipation and n is not anticipation. For example, the ordered pair of (a, n) says that player I anticipates and player II does not anticipate.

Now, to calculate the gains for each player, we compute the expected payoffs of all these pairs. If neither anticipates, (n, n), both will have one rug, and therefore (n, n) = (1, 1). If player I does not anticipate and player II anticipates, (n, a), then player I will have two rugs, and player II will have a 50% chance of a rug and a 50% chance of a carpet worth 4 times the rug. The expected gain for player II is 0.5 × 1 + 0.5 × 4 = 2.5. Hence (n, a) = (2, 2.5). Similarly, (a, n) = (2.5, 2). In the case that both players anticipate, one of them will have nothing; the expected gain for either player is 0.5 × 2.5 + 0.5 × 0 = 1.25, and we have (a, a) = (1.25, 1.25).
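The payoff pairs just derived can be reproduced with a short Python sketch, with the carpet's value in rugs left as a parameter (4 in the text):

```python
# Expected payoff pairs (player I, player II) for the rug-or-carpet game.
# carpet_value is the carpet's worth measured in rugs (4 in the text).

def payoff_matrix(carpet_value=4.0):
    # Expected value of tomorrow's package: 50% rug, 50% carpet.
    package = 0.5 * 1.0 + 0.5 * carpet_value
    return {
        ("n", "n"): (1.0, 1.0),                     # both accept: one rug each
        ("n", "a"): (2.0, package),                 # I takes both rugs, II waits
        ("a", "n"): (package, 2.0),                 # symmetric case
        ("a", "a"): (0.5 * package, 0.5 * package)  # only one waiter gets the package
    }

m = payoff_matrix()
print(m[("n", "a")], m[("a", "a")])  # (2.0, 2.5) (1.25, 1.25)
```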

Now suppose that player I guesses that player II will anticipate with probability P. If player I also anticipates tomorrow's package, then the expected gain is a mix of (a, a) and (a, n): E(I_a) = (1.25) × P + (2.5) × (1 − P). If player I does not anticipate, then the expected gain is a mix of (n, a) and (n, n): E(I_n) = (2) × P + (1) × (1 − P). Depending on whether E(I_a) is larger or smaller than E(I_n), player I has a clear choice between anticipation and not anticipating. But if the two are equal at a specific probability of anticipation by player II, player I is indifferent between the choices. Let's calculate that threshold probability.

E(I_a) = E(I_n)

(1.25) × P + (2.5) × (1 − P) = (2) × P + (1) × (1 − P)

1.5 = 2.25 × P

P = 2/3 ≈ 0.67

This probability says that if player I thinks that player II will anticipate tomorrow's package with a probability of two-thirds (about 67%), then player I remains indifferent between anticipating and not anticipating. Any change to that estimate gives player I a clear choice. Consider, for instance, that player I is very sure that player II will anticipate, i.e. P = 90%; then E(I_a) = 1.375 is less than E(I_n) = 1.9. Therefore, not anticipating is the preferred choice for player I.
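The indifference calculation can be checked numerically. This Python sketch reproduces the expected payoffs from the text and the resulting threshold:

```python
# Player I's expected payoffs, given a conjectured probability p that
# player II anticipates (payoff pairs from the text, carpet worth 4 rugs).

def expected_anticipate(p):
    # mix of (a, a) = 1.25 and (a, n) = 2.5
    return 1.25 * p + 2.5 * (1 - p)

def expected_not(p):
    # mix of (n, a) = 2 and (n, n) = 1
    return 2.0 * p + 1.0 * (1 - p)

# Indifference: 1.25p + 2.5(1 - p) = 2p + (1 - p)  =>  1.5 = 2.25p
p_star = 1.5 / 2.25
print(round(p_star, 4))  # 0.6667, i.e. p = 2/3

# If player I is very sure that player II will anticipate (p = 0.9),
# not anticipating is the better choice:
print(round(expected_anticipate(0.9), 3), round(expected_not(0.9), 3))  # 1.375 1.9
```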

Obviously, the threshold calculated above depends on the relative value of the carpet to the rug, which we assumed to be 4. A deeper insight uncovered by this game involves another threshold, this time for the relative value of the carpet (the potential reward of tomorrow, for which anticipation is necessary) to the rug (the current reward of today, without the need for anticipation). This threshold demonstrates that above a specific relative value of carpet to rug, anticipation is the better choice "regardless of our conjecture about the opponent's probability of choosing either anticipation or not anticipation." If we treat the relative value of carpet to rug in the above calculation as an unknown parameter, X, it can be shown as follows that the threshold of relative value is 7.

E(I_a) = E(I_n)

0.5 × (0.5 + 0.5X) × P + 0.5 × (1 + X) × (1 − P) = 2 × P + 1 × (1 − P)

P = (0.5X − 0.5) / (0.25X + 1.25) ≤ 1

Setting P = 1 gives 0.5X − 0.5 = 0.25X + 1.25, and hence X = 7.


In other words, if the relative value of the carpet to the rug in this game is more than 7, then anticipation is a good strategy without any need to make a conjecture about the other player's choice.
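The general threshold can likewise be explored numerically. This sketch evaluates the formula for P derived above as a function of the relative value X:

```python
# Threshold conjecture probability as a function of X, the carpet's value
# relative to the rug: p(X) = (0.5X - 0.5) / (0.25X + 1.25).
# Anticipating is the better choice when the conjectured probability that
# the opponent anticipates is below p(X); once p(X) >= 1, anticipation is
# (weakly) better no matter what the opponent does.

def threshold_probability(x):
    return (0.5 * x - 0.5) / (0.25 * x + 1.25)

for x in (4, 7, 10):
    print(x, round(threshold_probability(x), 3))
# x = 4 gives the 2/3 threshold from before; at x = 7 the threshold
# reaches exactly 1, so for x > 7 anticipation dominates regardless
# of any conjecture about the opponent.
```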

We often face circumstances similar to the game of rug or carpet. The rug might be an acceptable alternative: not ideal or excellent, yet better than nothing. The carpet might be a wonderful, top alternative that demands anticipation on our side. A job seeker who has to compete with other candidates figures that if the current job offer is declined, a far better offer might be found in the future. Or consider someone looking to buy or rent a house: with a more diligent search, a far better house could be found on the market at the same price. Or you might refuse to marry your partner now in anticipation of a better, ideal match in the future. Although the conditions of such key decisions, in particular their emotional aspects, cannot perfectly match the game of rug or carpet, the insights obtained from this analytical view are helpful for better and more reflective thinking about decisions.

In addition to the conjecture about the opponent's choice, which was our focus, other factors are also relevant in a more realistic world of decision making, such as time constraints, the nature of the need or want, access to information, active search, and supply and demand.

Time is critical in two dimensions. One is the distance between the first offer of the rug and the second offer of a probable carpet: for instance, how long can a job seeker sustain the hardship associated with unemployment? The other is how much time pressure is upon us: someone looking for the best deal on the housing market, always in anticipation of a yet better alternative, would wait forever. Having a clear deadline and time schedule for anticipation is therefore important.

The nature of the need also affects the anticipation strategy. If the need is critical and elementary, anticipation might not be justifiable; the more secondary or luxurious the need, the more reasonable it is to anticipate a far better future.

We assumed in the game of rug or carpet that the players have information symmetry. In the real world, that is not always the case. Using my connections in professional networks, I might have been informed that a highly respected employer will soon have vacant positions. If you, as my opponent, lack such an information advantage, I will not hesitate to anticipate.

Active search while anticipating a far better future highlights the importance of a proactive attitude instead of inactive wait-and-see. If you wait for a 50% chance of the carpet, the better future, you should be active in raising that likelihood to 70% or 80%.

Finally, the old rule of supply and demand applies. If you are in a highly competitive market in which your peers attempt to obtain an average alternative, and you are almost sure that the peers will not anticipate, then it is wise to anticipate a far better alternative in a future market that will be less competitive because of lower demand.

Another note involves the case of cooperation or sharing between players. Clearly they should coordinate to choose opposite alternatives, because as a group they will then gain either three rugs or two rugs plus a carpet. If both anticipate, they gain as a group either one rug or one carpet; if neither anticipates, they gain two rugs. Another assumption could be that each player not only wants to win a gain but also wants the opponent to end up with less. Here the calculations must be revisited: each player should do exactly what the other player does. If player I is very sure that player II will not anticipate, then player I should also not anticipate, to avoid handing two rugs to the opponent. And if player I is very sure that player II will anticipate, then player I should also anticipate, in order to have a 50% chance of leaving player II empty-handed in the end.

About the Author: Victor V. Motti, a Lifeboat Foundation Advisory Board member, is a Middle East based senior adviser of strategic foresight and anticipation. He is also the Director of the World Futures Studies Federation. His new book A Transformation Journey to Creative and Alternative Planetary Futures was published in early 2019 in the UK.

CERN has revealed plans for a gigantic successor to the giant atom smasher LHC, the biggest machine ever built. Particle physicists never stop asking for ever larger big bang machines. But where are the limits for ordinary society concerning costs and existential risks?

CERN boffins are already conducting a mega experiment at the LHC, a 27 km circular particle collider, at a cost of several billion euros, to study the conditions of matter as they existed fractions of a second after the big bang and to find the smallest particle possible; but the question is how they could ever know. Now they profess to be a little upset because they could not find any particles beyond the standard model, meaning something they would not expect. To achieve that, particle physicists would like to build an even larger "Future Circular Collider" (FCC) near Geneva, where CERN enjoys extraterritorial status, with a ring of 100 km, for about 24 billion euros.

Experts point out that this research could be as limitless as the universe itself. The UK's former Chief Scientific Advisor, Prof Sir David King, told the BBC: "We have to draw a line somewhere otherwise we end up with a collider that is so large that it goes around the equator. And if it doesn't end there perhaps there will be a request for one that goes to the Moon and back."

“There is always going to be more deep physics to be conducted with larger and larger colliders. My question is to what extent will the knowledge that we already have be extended to benefit humanity?”

There have been broad discussions about whether high energy nuclear experiments could pose an existential risk sooner or later, for example by producing micro black holes (mBH) or strange matter (strangelets) that could convert ordinary matter into strange matter and eventually start an infinite chain reaction from the moment it became stable, theoretically at a mass of around 1000 protons.

CERN has argued that micro black holes could possibly be produced, but that they would not be stable and would evaporate immediately due to "Hawking radiation", a theoretical process that has never been observed.

Furthermore, CERN argues that similar high energy particle collisions occur naturally in the universe and in the Earth's atmosphere, so they cannot be dangerous. However, such natural high energy collisions are rare and have only been measured rather indirectly. Basically, nature does not set up LHC experiments: the density of such artificial particle collisions never occurs in Earth's atmosphere. Even if the cosmic ray argument were legitimate, CERN produces as many high energy collisions in an artificially narrow space as occur naturally in more than a hundred thousand years in the atmosphere. Physicists look quite puzzled when they recalculate it.

Others argue that a particle collider ring would have to be bigger than the Earth to be dangerous.

A study on "Methodological Challenges for Risks with Low Probabilities and High Stakes" was provided by Lifeboat member Prof Raffaela Hillerbrand et al. Prof Eric Johnson submitted a paper discussing juridical difficulties (lawsuits were unsuccessful or were not accepted, respectively) as well as the problem of groupthink within scientific communities. More important contributions to the existential risk debate came from risk assessment experts Wolfgang Kromp and Mark Leggett, from R. Plaga, Eric Penrose, Walter Wagner, Otto Roessler, James Blodgett, Tom Kerwick, and many more.

Since these discussions can become very sophisticated, there is also a more general approach (see video): according to present research, there are around 10 billion Earth-like planets in our galaxy, the Milky Way, alone. Intelligent life might send radio waves, because they are extremely long lasting, yet we have not received any (the "Fermi paradox"). Theory postulates that there could be a "great filter", something that wipes out intelligent civilizations at a rather early stage of their technical development. Let that sink in.

All technical civilizations would start to build particle smashers to find out how the universe works, to get as close as possible to the big bang, and to hunt for the smallest particle with bigger and bigger machines. But maybe there is a very unexpected effect lurking at a certain threshold that nobody would ever think of and that theory does not predict. Indeed, this could be a logical candidate for the "great filter", an explanation for the Fermi paradox. If it were, a disastrous big bang machine would not have to be that big at all, because if civilizations had to construct a collider of epic dimensions to trigger it, a lack of resources would have stopped them in most cases.

Finally, the CERN member states will have to decide on the budget and the future course.

The political question behind it is: how far are the ordinary citizens, who pay for all this, willing to go?

LHC-Critique / LHC-Kritik

Network to discuss the risks at experimental subnuclear particle accelerators



By Eliott Edge

“It is possible for a computer to become conscious. Basically, we are that. We are data, computation, memory. So we are conscious computers in a sense.”

—Tom Campbell, NASA

If the universe is a computer simulation, virtual reality, or video game, then a few unusual conditions seem to fall out of that reading by necessity. One is that what we call consciousness, the mind, is actually something like an artificial intelligence. If the universe is a computer simulation, we are all likely one form of AI or another. In fact, we might come from the same computer that is creating this simulated universe to begin with. If so, then it stands to reason that we are virtual characters and virtual minds in a virtual universe.

In Breaking into the Simulated Universe, I discussed how if our universe is a computer simulation, then our brain is just a virtual brain. It is our avatar’s brain—but our avatar isn’t really real. It is only ever real enough. Our virtual brain plays an important part in making the overall simulation appear real. The whole point of the simulation is to seem real, feel real, look real—this includes rendering virtual brains. In Breaking I went into this “virtual brain” conundrum, including how the motor-effects of brain damage work in a VR universe. The virtual brain concept seems to apply to many variants of the “universe is a simulation” proposal. But if the physical universe and our physical brain amount to just fancy window-dressing, and the bigger picture is indeed that we are in a simulated universe, then our minds are likely part of the big supercomputer that crunches out this mock universe. That is the larger issue. If the universe is a VR, then it seems to necessarily mean that human minds already are an artificial intelligence. Specifically, we are an artificial intelligence using a virtual lifeform avatar to navigate through an evolving simulated physical universe.

About the AI

There are several flavors of the simulation hypothesis and digital mechanics out there in science and philosophy; I refer to these different schools of thought with the umbrella term simulism.

In Breaking I went over the connection between Edward Fredkin’s concept of Other—the ‘other place,’ the computer platform, where our universe is being generated from—and Tom Campbell’s concept of Consciousness as an ever-evolving AI ruleset. If you take these two ideas and run with them, what you end up with is an interesting inevitability: over enough time and enough evolutionary pressure, an AI supercomputer with enough resources should be pushed to crunch out any number of virtual universes and any number of conscious AI lifeforms. The big evolving AI supercomputer would be the origin of both physical reality and conscious life. And it would have evolved to be that way.

The reason the supercomputer AI makes mock universes and AI lifeforms is to forward its own information evolution, while at the same time avoiding a kind of "death" brought on by chaos, high entropy (disorganization), and noise winning over signal and over order. To Campbell, this is a form of evolution accomplished by interaction. It would mean not only that our whole universe is really a highly detailed version of The Sims, but that it actually evolved to be this way from a ruleset, one with the specific purpose of further evolving the overall big supercomputer and the virtual lifeforms within it. The players, the game, and the big supercomputer crunching it all out evolve and develop as one.

Maybe this is the way it is, maybe not. Nevertheless, if it turns out our universe is some kind of computed virtual reality simulation, all conscious life will likely end up being cast as AI. This makes the situation interesting when imagining what role free will might play.

Free will

If we are an AI, then what about free will? Perhaps some of us virtual critters live without free will. Maybe there are philosophical zombies and non-playable characters amongst us: lifeforms that only seem to be conscious but actually aren't. Maybe we already are zombies, and free will is an illusion. It should be noted that simulist frameworks do not all necessarily wipe out decision-making and free will. Campbell in particular argues that free will is fundamental to the supercomputing virtual reality learning machine: it uses free will and the virtual lifeforms' interactions to learn and evolve through the tool of decision-making. The feedback from those decisions drives evolution. In Campbell's model, evolution is actually impossible without free will. Nevertheless, whether free will is real, or some have free will and others only appear to have it, let us reflect on our own experience of decision-making.

What is it like to make a choice? We do not seem to be merely linear, rote machines in our thinking and decision-making processes. It is not that we undergo x-stimulus and then always deliver a single, given, preloaded y-response every single time. We appear to think and consider. Our conclusions vary. We experience fuzzy logic. Our feelings play a role. We are apparently subject to a whole array of possible responses. And of course even non-responses, like choosing not to choose, are also responses. Perhaps all this, too, is just an illusion.

The question of free will might be difficult or impossible to answer. However, it does bring up a larger issue that seems to influence free will: programming. Whether we are free, “free enough,” or total zombies, an interesting question seems to almost always ride alongside the issue of choice and volition—it must be asked, what role does programming play? To begin this line of inquiry, we must first admit just how programmable we always already are.


Our whole biology is the result of pressure and programming. Tabula rasa, the idea that we are born as a "blank slate," was chucked out long ago. We now know we arrive preprogrammed by millennia. There is barely a membrane between our programming and what we call (or assume to be) our conscious waking selves. This is dramatically explored in the 2016 series Westworld. Without giving away much in the way of spoilers, the story's "hosts" are artificially intelligent robots that are trapped in programmed "loops," repetitive cycles of thought and behavior. Regarding these loops, the hosts' creator Dr. Ford (Anthony Hopkins) states, "Humans fancy that there's something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do. Seldom questioning our choices. Content, for the most part, to be told what to do next."

The programmability of biology and conscious life is already without question. We are manifestations of a complex blueprint called DNA—a set of instructions programmed by our environment interacting with our biology and genetics. Our diets, interests, how much sunlight we get a day, and even our stresses, feelings, and thoughts all have a measurable effect on our DNA. Our body is the living receipt of what is etched and programmed into our DNA.

DNA is made up of information and instructions. This information has been programmed by a variety of other types of environmental, physiological, and psychic information over vast eons of time. We grow gills due to the presence of water, or lungs due to the presence of air. Sometimes we grow four stomachs. Sometimes we grow ears so sensitive that they can sense mass in the dark. The world talks to us, and so we change ourselves based on what we are able to pick up. Reality informs us, and we mutate accordingly. If the universe is a computer program, then we too are programmed by it. The VR environment program also programs the conscious AIs living in it.

In part, our social environment programs our psychologies. Our families, languages, neighborhoods, cultures, religions, ideologies, expectations, fears, addictions, rewards, needs, slogans—these are all largely programmed into us as well. They define and shape our individual and collective personhood. And they all program our view of the world, and our selves within it. Our information exchange through socialization programs us.

Ultimately, programming is instruction. But human beings often experience conflicting sets of instructions simultaneously. One of Sigmund Freud's great contributions was his identification of "das Unbehagen". Unbehagen refers to the uneasiness we feel as our instincts (one set of instructions) come into conflict with our culture, society, values, and civilization (another set of instructions). We choose not to cheat on our partner with someone wildly attractive, even though we might really want to. We don't attack someone even though they might sorely deserve it. The fallout of such behavior is potentially just too great to follow through with. If left unprocessed, we develop neuroses, obsessions, and pathologies inside of us that are beyond our conscious control. "Demons" and "hungry ghosts" guide us to behaviors, thoughts, and states of being that are so upsetting to our waking conscious selves that we tend to describe them as unwanted, alien, or even as sin. They create a sense of feeling "out of control." Indeed, conflicting instructions, conflicting thoughts, behaviors, and goals are causes of great suffering for many people. We develop illnesses of the body and mind, and then pass those smoldering genes, that malignant programming, on to the next generation. Here we have biological programming working against social programming, physiological instructions conflicting with societal instructions. Now just imagine an AI robot trying to compute two or three contradictory programs simultaneously. You would see an android throwing a fit, breaking down, shutting off, and hopefully eventually attempting to put itself back together.

In terms of conflicting programming, an interesting aside can be found in comedy. Humor often strikes in the form of contradiction, as in Shakespeare’s Hamlet. Polonius famously claims that “brevity is the soul of wit,” yet he is ironically verbose—naturally implying that he is witless. In this case we have contradiction—does not compute. But not all humor is contradiction. Consider the joke, “Can a kangaroo jump higher than a house?” The punchline is, “Of course they can. Houses don’t jump at all.” This joke does not translate to does not compute; instead, this joke computes all too well. In many instances, this is humor: it either doesn’t make sense, or it makes more sense than you ever expected. It is information brought into a new light—information recontextualized.

A final novel consideration to this idea of programming can be found in the phenomenon of ‘positive sexual imprinting.’ The way human beings choose sexual or romantic partners has long fascinated psychologists—our choices are often based on similarities to our parents and caregivers. To our species-wide relief, this behavior is not exclusive to human beings. Mammals, birds, and even fish have been documented pairing up with mates that resemble their forebears. Even goats that are raised by sheep will grow up to pursue sheep, and vice versa. Here is another example of programming that works just under our awareness, and yet it has a titanic, indeed central, effect on our lives. Choosing mates and partners, especially for long-term relationships or procreation, is one of the circumstances that most dramatically guides our livelihood and our personal destiny. This is the depth of programming.

It was Freud who pointed out, in so many words, that your mind is not your own.

Goals and Rewards

Human beings love instruction. Recollect Dr. Ford’s remark from the previous section: “[Humans are] content, for the most part, to be told what to do next.” Chemically speaking, our rewards arrive through serotonin, dopamine, oxytocin, and endorphins. In waking life we experience them during events like social bonding and poignant experiences; we feel them alongside a sense of profound meaning and pleasure, and these experiences and chemicals even go on to help shape our values, goals, and lives. These complex chemical exchanges shoot through human beings particularly when we receive instructions and when we accomplish goals.

We find it particularly rewarding when we happily do something for someone we love or admire. We are fond of all kinds of games and game playing. We enjoy drama and rewards. Acting within rules and roles, as well as bending or breaking them, is a moment-to-moment occupation for all human beings.

We also design goals that can only come to fruition years, sometimes decades, into the future. We then program and modify our being and circumstance to bring these goals into an eventual present; we change based on what we want. We feel meaning and purpose when we have a goal. We experience joy and fulfillment when that goal is achieved. Without a series of goals we become quite genuinely paralyzed. Even the movement of a limb from position A to position B is a goal. All motor functioning is goal-oriented. It turns out that the machine learning and AI systems we are attempting to develop in laboratories today work particularly well when they are given goals and rewards.

In his 2014 paper Reinforcement Learning and the Reward Engineering Principle, Daniel Dewey argued that adding rewards to machine learning encourages the system to produce useful and interesting behaviors. Google’s DeepMind research team has since developed an AI that taught itself to walk in a VR environment, and subsequently published a 2017 paper called A Distributional Perspective on Reinforcement Learning, apparently confirming this rewards-based approach.

Laurie Sullivan wrote a summary of reinforcement learning in a MediaPost article called Google: Deepmind AI Learns On Rewards System:

The system learns by trial and error and is motivated to get things correct based on rewards […]

The idea is that the algorithm learns, considers rewards based on its learning, and almost seems to eventually develop its own personality based on outside influences. In a new paper, DeepMind researchers show it is possible to model not only the average reward but also the full distribution of the reward as it changes. Researchers call this the “value distribution.”

Rewards make reinforcement learning systems increasingly accurate and faster to train than previous models. More importantly, according to the researchers, this opens the possibility of rethinking the entire reinforcement learning process.
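The reward-driven, trial-and-error loop described above can be sketched with the oldest reinforcement learning recipe: tabular Q-learning. This is a minimal illustration of reward-based learning in general, not DeepMind’s distributional method; the corridor environment, function name, and parameters are all invented for the example.

```python
import random

def train_q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy corridor.

    States are 0..n_states-1; actions are 0 (step left) and 1 (step right).
    The only reward is 1.0 for arriving at the rightmost state.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit the best-known action, occasionally explore
            if rng.random() < epsilon:
                a = rng.choice([0, 1])
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # the reward signal drives learning: nudge the estimate toward
            # the observed reward plus the discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

q = train_q_learning()
# greedy policy in each non-terminal state; 1 means "step right" toward the reward
policy = [0 if values[0] > values[1] else 1 for values in q[:-1]]
```

Nothing tells the agent where the goal is; the reward alone shapes its behavior, which is the point of the “motivated to get things correct based on rewards” remark in the quote. Tracking a full distribution over these values, rather than the single estimates above, is roughly what the DeepMind paper proposes.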

If human beings and our computer AIs both develop valuably through goals and rewards, then these sorts of drives might be fundamental to consciousness itself. If they are fundamental to consciousness, and our universe is a computer simulation, then goals and rewards likely guide or influence the big evolving supercomputer AI behind life and reality. If this is all true, then there is a goal, a purpose, embedded within the fabric of existence. Maybe there is even more than one.

Ontology and Meta-metaphors

In the essays Breaking into the simulated universe and Why it matters that you realize you’re in a computer simulation, I asked, ‘what happens after we embrace our reality as a computer simulation?’ In a neighboring line of thinking, all simulists must equally ask, ‘what happens after we realize we are an artificial intelligence in a computer simulation?’

First of all, our whole instinctual drive to create our own computed artificial intelligence takes on a new light. We are building something like ourselves in the mirror of a would-be mentalizing machine. If this is true, then we are doing more than just recreating ourselves; we are recreating the larger reality, the larger context, that we are all a part of. Maybe making an AI is actually the most natural thing in the world, because, indeed, we already are AIs.

Second, we would have to accept that we are not merely human. Part of us, an important part indeed, is locked in an experience of humanness, no doubt. But, again, there is a deeper reality. If the universe is a computer simulation, then our consciousness is part of that computer, and our human bodies act as avatars. Although our situation of existing as ‘human beings’ may appear self-evident, it is the deeper notion that our consciousness is a partitioned segment of the larger evolving AI supercomputer responsible for both life and the universe that must be explored. We would do well to accept that as human beings we are, like any computer simulated situation, real enough—but our human avatar is not the beginning or the end of our total consciousness. Our humanness is only the crust. If we are AIs being crunched out by the supercomputer responsible for our physical universe, then we might have a valuable new framework to investigate the mind, altered states, and consciousness exploration. After all, if we are part of the big supercomputer behind the universe, maybe we can interact with it, and vice versa.

Third, if we are an artificial intelligence, we should examine the idea of programming intensely. Even without the virtual reality reading, we are all programmed by the environment, programmed by our own volition, programmed by others, by millions of years of genetic trial and error, and we go on to program the environment and the beings all around us as well. This is true. These programs and instructions create deep contexts and patterns of thought and behavior. They generate loops that we easily pick up and fall into, often without a second thought or even notice. We are already so entrenched. So, in terms of programming, we would likely do well to accept this as an opportunity. Cognitive Behavioral Therapy, the growing field of psychedelic psychotherapy, and just good old-fashioned learning are powerful ways we can rewrite, edit, or outright delete code that is no longer desirable to us. It is also worth including the gene editing revolution that is upon us thanks to medical breakthroughs like CRISPR. If we accept that we are an AI lifeform that has been programmed, perhaps that will put us in a better position to manage and develop our own programs, instructions, rewards, and loops more consciously. To borrow the title of a work by visual artist Dakota Crane—Machines, Take Up Thy Schematics and Self-Construct!

Finally, the AI metaphor might help us extract ourselves from contexts and ideas that have perhaps inadvertently limited us when we think of ourselves as strictly ‘human beings’ with ‘human brains.’ Metaphors though they may be: any concept that embraces our multidimensionality, as well as helps us get a better handle on the pressing matter of our shared existence, I deem good. Anything that narrows it—in the instance of, say, claiming that one is a ‘human being,’ which comes loaded with very hard and fast assumptions and limits (either true or believed to be true)—I deem problematic. These claims are problematic because they create a context that is rarely based on truth, but largely on convenience, habit, tradition, and belief. Simply put, claiming you are exclusively a ‘human being’ is necessarily limiting (“death,” “human nature,” etc.), whereas claiming that you are an AI means that there is a great undiscovered country before you. For we do not yet know what it means to be an AI, while we do have a pretty fixed idea of what it means to be a human being. Nevertheless, ‘human being’ and ‘AI’ are both simply thought-based concepts. If ‘AI’ broadens our decision space more than ‘human being’ does, then AI may be a more valuable position to operate from.

Computers, robots, and AI are powerful new metaphors for understanding ourselves, because they are indeed that which is most like us. A computer is like a brain; a robot is like a brain walking around and dealing with the world. Virtual reality is another metaphor—one capable of approaching everything from culture, to thought, to quantum mechanics. Much like the power and robustness of ‘virtual reality’ as a meta-metaphor and meta-context for dealing with a variety of experiences and domains, the ideas of ‘programming’ and ‘artificial intelligence’ are equally strong and potentially useful concepts for extracting ourselves from the circumstances that we have, in large part, created for ourselves. However, regardless of how similar we are to computers, AIs, and robots, they are not quite us exactly. At the end of it all, terms like ‘virtual reality’ and ‘artificial intelligence’ are but metaphors. They are concepts alluding to something immensely peculiar that we detect existing—as Terence McKenna would likely describe it—just at the threshold of rational apprehension, and seemingly peeking out from hyperspace. If we are already an AI, then that is a frontier that sorely demands our exploration.

Originally published at The Institute of Ethics and Emerging Technologies

Posthumanists and perhaps especially transhumanists tend to downplay the value conflicts that are likely to emerge in the wake of a rapidly changing technoscientific landscape. What follows are six questions and scenarios that are designed to focus thinking by drawing together several tendencies that are not normally related to each other but which nevertheless provide the basis for future value conflicts.

  1. Will ecological thinking eventuate in an instrumentalization of life? Generally speaking, biology – especially when a nervous system is involved — is more energy efficient when it comes to storing, accessing and processing information than even the best silicon-based computers. While we still don’t quite know why this is the case, we are nevertheless acquiring greater powers of ‘informing’ biological processes through strategic interventions, ranging from correcting ‘genetic errors’ to growing purpose-made organs, including neurons, from stem-cells. In that case, might we not ‘grow’ some organs to function in largely the same capacity as silicon-based computers – especially if it helps to reduce the overall burden that human activity places on the planet? (E.g. the brains in the vats in the film The Minority Report which engage in the precognition of crime.) In other words, this new ‘instrumentalization of life’ may be the most environmentally friendly way to prolong our own survival. But is this a good enough reason? Would these specially created organic thought-beings require legal protection or even rights? The environmental movement has been, generally speaking, against the multiplication of artificial life forms (e.g. the controversies surrounding genetically modified organisms), but in this scenario these life forms would potentially provide a means to achieve ecologically friendly goals.

  2. Will concerns for social justice force us to enhance animals? We are becoming more capable of recognizing and decoding animal thoughts and feelings, a fact which has helped to bolster those concerned with animal welfare, not to mention ‘animal rights’. At the same time, we are also developing prosthetic devices (of the sort already worn by Stephen Hawking) which can enhance the powers of disabled humans so their thoughts and feelings can be communicated to a wider audience and hence enable them to participate in society more effectively. Might we not wish to apply similar prosthetics to animals – and perhaps even ourselves — in order to facilitate the transaction of thoughts and feelings between humans and animals? This proposal might aim ultimately to secure some mutually agreeable ‘social contract’, whereby animals are incorporated more explicitly in the human life-world — not merely as wards but as something closer to citizens. (See, e.g., Donaldson and Kymlicka’s Zoopolis.) However, would this set of policy initiatives constitute a violation of the animals’ species integrity and simply be a more insidious form of human domination?

  3. Will human longevity stifle the prospects for social renewal? For the past 150 years, medicine has been preoccupied with the defeat of death, starting from reducing infant mortality to extending the human lifespan indefinitely. However, we also see that as people live longer, healthier lives, they also tend to have fewer children. This has already created a pensions crisis in welfare states, in which the diminishing ranks of the next generation work to sustain people who live long beyond the retirement age. How do we prevent this impending intergenerational conflict? Moreover, precisely because each successive generation enters the world without the burden of the previous generations’ memories, it is better disposed to strike in new directions. All told, then, should death become discretionary in the future, with a positive revaluation of suicide and euthanasia? Moreover, should people be incentivized to have children as part of a societal innovation strategy?

  4. Will the end of death trivialize life? A set of trends taken together call into question the finality of death, which is significant because strong normative attitudes against murder and extinction are due largely to the putative irreversibility of these states. Indeed, some have argued that the sanctity – if not the very meaning — of human life itself is intimately related to the finality of death. However, there is a concerted effort to change all this – including cryonics, digital emulations of the brain, DNA-driven ‘de-extinction’ of past species, etc. Should these technologies be allowed to flourish, in effect, to ‘resurrect’ the deceased? As it happens, ‘rights of the dead’ are not recognized in human rights legislation and environmentalists generally oppose introducing new species to the ecology, which would seem to include not only brand new organisms but also those which once roamed the earth.

  5. Will political systems be capable of delivering on visions of future human income? There are two general visions of how humans will earn their keep in the future, especially in light of what is projected to be mass technologically induced unemployment, which will include many ordinary professional jobs. One would be to provide humans with a ‘universal basic income’ funded by some tax on the producers of labour redundancy in both the industrial and the professional classes. The other vision is that people would be provided regular ‘micropayments’ based on the information they routinely provide over the internet, which is becoming the universal interface for human expression. The first vision cuts against the general ‘lower tax’ and ‘anti-redistributive’ mindset of the post-Cold War era, whereas the latter vision cuts against perceived public preference for the maintenance of privacy in the face of government surveillance. In effect, both visions of future human income demand that the state reinvents its modern role as guarantor of, respectively, welfare and security – yet now against the backdrop of rapid technological change and laissez faire cultural tendencies.

  6. Will greater information access turn ‘poverty’ into a lifestyle prejudice? Mobile phone penetration is greater in some impoverished parts of Africa and Asia than in the United States and some other developed countries. While this has made the developed world more informationally available to the developing world, the impact of this technology on the latter’s living conditions has been decidedly mixed. Meanwhile as we come to a greater understanding of the physiology of impoverished people, we realize that their nervous systems are well adapted to conditions of extreme stress, as are their cultures more generally. (See e.g. Banerjee and Duflo’s Poor Economics.) In that case, there may come a point when the rationale for ‘development aid’ might disappear, and ‘poverty’ itself may be seen as a prejudicial term. Of course, the developing world may continue to require external assistance in dealing with wars and other (by their standards) extreme conditions, just as any other society might. But otherwise, we might decide in an anti-paternalistic spirit that they should be seen as sufficiently knowledgeable of their own interests to be able to lead what people in the developed world might generally regard as a suboptimal existence – one in which, say, the gap in life expectancy between the developing and developed worlds remains significant and quite possibly increases over time.

Recent evidence suggests that a variety of organisms may harness some of the unique features of quantum mechanics to gain a biological advantage. These features go beyond trivial quantum effects and may include harnessing quantum coherence on physiologically important timescales.

Quantum Biology — Quantum Mind Theory

When we as a global community confront the truly difficult question of what is really worth devoting our limited time and resources to in an era marked by such global catastrophe, I always find my mind returning to what the Internet hasn’t really been used for yet, and what was rumored from its inception that it should ultimately provide: an utterly and entirely free education for all the world’s people.

In regard to such a concept, Bill Gates said in 2010, “On the web for free you’ll be able to find the best lectures in the world […] It will be better than any single university […] No matter how you came about your knowledge, you should get credit for it. Whether it’s an MIT degree or if you got everything you know from lectures on the web, there needs to be a way to highlight that.”

That may sound like an idealistic stretch to the uninitiated, but the fact of the matter is universities like MIT, Harvard, Yale, Oxford, The European Graduate School, Caltech, Stanford, Berkeley, and other international institutions have been regularly uploading entire courses onto YouTube and iTunes U for years. All of them are entirely free. Open Culture, Khan Academy, Wikiversity, and many other centers for online learning also exist. Other online resources have small fees attached to some courses, as you’ll find on edX and Coursera. In fact, here is a list of over 100 places online where you can receive high quality educational material. The 2015 Survey of Online Learning revealed a “Multi-year trend [that] shows growth in online enrollments continues to outpace overall higher ed enrollments.” I. Elaine Allen, co-director of the Babson Survey Research Group points out that “The study’s findings highlight a thirteenth consecutive year of growth in the number of students taking courses at a distance.” Furthermore, “More than one in four students (28%) now take at least one distance education course (a total of 5,828,826 students, a year‐to‐year increase of 217,275).” There are so many online courses, libraries of recorded courses, pirate libraries, Massive Open Online Courses, and online centers for learning with no complete database thereof that in 2010 I found myself dumping all the websites and master lists I could find onto a simple Tumblr archive I put together called Educating Earth. I then quickly opened a Facebook Group to try and encourage others to share and discuss courses too.

The volume of high quality educational material already available online is staggering. Despite this, there has yet to be a central search hub for all this wonderful and unique content. No robust community has been built around it with major success. Furthermore, the social and philosophical meaning of this new practice has not been strongly advocated enough yet in a popular forum.

There are usually a few arguments against this brand of internet-based education. One of the most common is that learning online will never be like learning in a physical classroom setting. I will grant that. However, I’ll counter it with the obvious: you don’t need to learn everything there is to learn strictly in a classroom setting. That is absurd. Not everything is surgery. Furthermore, not everyone has access to a classroom, which is in large part what this whole issue is about. Finally, you cannot learn everything you may want to learn from one single teacher in one single location.

Another argument pertains to cost, that a donation-based free education project would be an expensive venture. All I can think to respond to that is: How much in personal debt does the average student in the United States end up in after four years of college? What if that money was used to pay for a robust online educational platform? How many more people the world over could learn from a single four-year tuition alone? These are serious questions worth considering.

Here are just a few major philosophical points for such a project. Illiteracy has been a historic tool used to oppress people. According to the US Census Bureau, the global population has grown by about one billion people every 15 years since 1953. In 2012 our global population was estimated at 7 billion people. Many of these individuals will be lucky to ever see the inside of a classroom. Today nearly 500 million women on this planet are denied the basic freedom to learn how to read and write. Women make up two-thirds of the world’s illiterate adults. It is a global crime perpetrated against women, pure and simple.

Here is another really simple point: If the world has so many problems on both a local and a global scale, doesn’t it make sense to have more problem solvers available to collaborate and tackle them? Consider all these young people devising ingenious ways to clean the ocean, or detect cancer, or power their community by building windmills; don’t you want many orders of magnitude more of all that going on in the world? More people freely learning and sharing what they discover simply translates to a higher likelihood of breakthroughs and general social benefit. This is good for everyone. Is this not obvious?

Here is one last point: In terms of moral, social, and philosophical uprightness, isn’t it striking to have the technology to provide a free education to all the world’s people (i.e. the internet and cheap computers) and not do it? Isn’t it classist and backward to have the ability to teach the world yet still deny millions of people that opportunity due to location and finances? Isn’t that immoral? Isn’t it patently unjust? Should it not be a universal human goal to enable everyone to learn whatever they want, as much as they want, whenever they want, entirely for free if our technology permits it? These questions become particularly deep if we consider teaching, learning, and education to be sacred enterprises.

Read the whole article on

My sociology of knowledge students read Yuval Harari’s bestselling first book, Sapiens, to think about the right frame of reference for understanding the overall trajectory of the human condition. Homo Deus follows the example of Sapiens, using contemporary events to launch into what nowadays is called ‘big history’ but has also been called ‘deep history’ and ‘long history’. Whatever you call it, the orientation sees the human condition as subject to multiple overlapping rhythms of change which generate the sorts of ‘events’ that are the stuff of history lessons. But Harari’s history is nothing like the version you half remember from school.

In school historical events were explained in terms more or less recognizable to the agents involved. In contrast, Harari reaches for accounts that scientifically update the idea of ‘perennial philosophy’. Aldous Huxley popularized this phrase in his quest to seek common patterns of thought in the great world religions which could be leveraged as a global ethic in the aftermath of the Second World War. Harari similarly leverages bits of genetics, ecology, neuroscience and cognitive science to advance a broadly evolutionary narrative. But unlike Darwin’s version, Harari’s points towards the incipient apotheosis of our species; hence, the book’s title.

This invariably means that events are treated as symptoms if not omens of the shape of things to come. Harari’s central thesis is that whereas in the past we cowered in the face of impersonal natural forces beyond our control, nowadays our biggest enemy is the one that faces us in the mirror, which may or may not be within our control. Thus, the sort of deity into which we are evolving is one whose superhuman powers may well result in self-destruction. Harari’s attitude towards this prospect is one of slightly awestruck bemusement.

Here Harari equivocates where his predecessors dared to distinguish. Writing with the bracing clarity afforded by the Existentialist horizons of the Cold War, cybernetics founder Norbert Wiener declared that humanity’s survival depends on knowing whether what we don’t know is actually trying to hurt us. If so, then any apparent advance in knowledge will always be illusory. As for Harari, he does not seem to see humanity in some never-ending diabolical chess match against an implacable foe, as in The Seventh Seal. Instead he takes refuge in the so-called law of unintended consequences. So while the shape of our ignorance does indeed shift as our knowledge advances, it does so in ways that keep Harari at a comfortable distance from passing judgement on our long term prognosis.

This semi-detachment makes Homo Deus a suave but perhaps not deep read of the human condition. Consider his choice of religious precedents to illustrate that we may be approaching divinity, a thesis with which I am broadly sympathetic. Instead of the Abrahamic God, Harari tends towards the ancient Greek and Hindu deities, who enjoy both superhuman powers and all too human foibles. The implication is that to enhance the one is by no means to diminish the other. If anything, it may simply make the overall result worse than had both our intellects and our passions been weaker. Such an observation, a familiar pretext for comedy, wears well with those who are inclined to read a book like this only once.

One figure who is conspicuous by his absence from Harari’s theology is Faust, the legendary rogue Christian scholar who epitomized the version of Homo Deus at play a hundred years ago in Oswald Spengler’s The Decline of the West. What distinguishes Faustian failings from those of the Greek and Hindu deities is that Faust’s failings result from his being neither as clever nor as loving as he thought. The theology at work is transcendental, perhaps even Platonic.

In such a world, Harari’s ironic thesis that future humans might possess virtually perfect intellects yet also retain quite undisciplined appetites is a non-starter. If anything, Faust’s undisciplined appetites point to a fundamental intellectual deficiency that prevents him from exercising a ‘rational will’, which is the mark of a truly supreme being. Faust’s sense of his own superiority simply leads him down a path of ever more frustrated and destructive desire. Only the one true God can put him out of his misery in the end.

In contrast, if there is ‘one true God’ in Harari’s theology, it goes by the name of ‘Efficiency’ and its religion is called ‘Dataism’. Efficiency is familiar as the dimension along which technological progress is made. It amounts to discovering how to do more with less. To recall Marshall McLuhan, the ‘less’ is the ‘medium’ and the ‘more’ is the ‘message’. However, the metaphysics of efficiency matters. Are we talking about spending less money, less time and/or less energy?

It is telling that the sort of efficiency which most animates Harari’s account is the conversion of brain power to computer power. To be sure, computers can outperform humans on an increasing range of specialised tasks. Moreover, computers are getting better at integrating the operations of other technologies, each of which also typically replaces one or more human functions. The result is the so-called Internet of Things. But does this mean that the brain is on the verge of becoming redundant?

Those who say yes, most notably the ‘Singularitarians’ whose spiritual home is Silicon Valley, want to translate the brain’s software into a silicon base that will enable it to survive and expand indefinitely in a cosmic Internet of Things. Let’s suppose that such a translation becomes feasible. The energy requirements of such scaled up silicon platforms might still be prohibitive. For all its liabilities and mysteries, the brain remains the most energy efficient medium for encoding and executing intelligence. Indeed, forward facing ecologists might consider investing in a high-tech agronomy dedicated to cultivating neurons to function as organic computers – ‘Stem Cell 2.0’, if you will.

However, Harari does not see this possible future because he remains captive to Silicon Valley’s version of determinism, which prescribes a migration from carbon to silicon for anything worth preserving indefinitely. It is against this backdrop that he flirts with the idea that a computer-based ‘superintelligence’ might eventually find humans surplus to requirements in a rationally organized world. Like other Singularitarians, Harari approaches the matter in the style of a 1950s B-movie fan who sees the normative universe divided between ‘us’ (the humans) and ‘them’ (the non-humans).

The bravest face to put on this intuition is that computers will transition to superintelligence so soon – ‘exponentially’ as the faithful say — that ‘us vs. them’ becomes an operative organizing principle. More likely and messier for Harari is that this process will be dragged out. And during that time Homo sapiens will divide between those who identify with their emerging machine overlords, who are entitled to human-like rights, and those who cling to the new acceptable face of racism, a ‘carbonist’ ideology which would privilege organic life above any silicon-based translations or hybridizations. Maybe Harari will live long enough to write a sequel to Homo Deus to explain how this battle might pan out.

NOTE ON PUBLICATION: Homo Deus is published in September 2016 by Harvill Secker, an imprint of Penguin Random House. Fuller would like to thank The Literary Review for originally commissioning this review. It will appear in a subsequent edition of the magazine and is published here with permission.

At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid – to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst-case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focuses on how we might prevent, if not predict, any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether it might not be better simply to halt artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of not believing in God – to wit, eternal damnation – rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.
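The expected-value arithmetic behind this wager-style reasoning can be made explicit. The following sketch uses purely illustrative numbers (none are drawn from the text or from any actual risk estimate) to show how an enormous stake can dominate a tiny probability in a simple probability-times-magnitude comparison:

```python
# Toy expected-loss comparison illustrating the Pascal's-wager-style logic
# behind 'existential risk' arguments. All figures are illustrative
# assumptions, not estimates of any real risk.

def expected_loss(probability: float, magnitude: float) -> float:
    """Expected loss of an outcome: probability multiplied by magnitude."""
    return probability * magnitude

# A very unlikely but catastrophic event (one-in-a-million chance,
# enormous stake)...
catastrophe = expected_loss(probability=1e-6, magnitude=1e12)

# ...can still outweigh a likely but mundane loss (even odds, small stake).
mundane = expected_loss(probability=0.5, magnitude=1e3)

print(catastrophe > mundane)  # prints True: the stake swamps the odds
```

The point of the sketch is only that, once the stake is allowed to grow without bound, no probability is small enough to make the product negligible, which is exactly the move the essay goes on to question.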

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept that it is cracked up to be. I don’t believe it is.

In fact, we would do better to revisit the signature Cold War way of thinking about these matters, which the RAND Corporation strategist Herman Kahn dubbed ‘thinking the unthinkable’. What he had in mind was the aftermath of a thermonuclear war in which, say, 25–50% of the world’s population is wiped out over a relatively short period of time. How do we rebuild humanity under those circumstances? This is not so different from ‘the worst case scenarios’ proposed nowadays, even under conditions of severe global warming. Kahn’s point was that we need now to come up with the relevant new technologies that would be necessary the day after Doomsday. Moreover, such a strategy was likely to be politically more tractable than trying actively to prevent Doomsday, say, through unilateral nuclear disarmament.

And indeed, we did largely follow Kahn’s advice. And precisely because Doomsday never happened, we ended up in peacetime with the riches that we have come to associate with Silicon Valley, a major beneficiary of the US federal largesse during the Cold War. The internet was developed as a distributed communication network in case the more centralized telephone system were taken down during a nuclear attack. This sort of ‘ahead of the curve’ thinking is characteristic of military-based innovation generally. Warfare focuses minds on what’s dispensable and what’s necessary to preserve – and indeed, how to enhance that which is necessary to preserve. It is truly a context in which we can say that ‘necessity is the mother of invention’. Once again, and most importantly, we win even – and especially – if Doomsday never happens.

An interesting economic precedent for this general line of thought, which I have associated with transhumanism’s ‘proactionary principle’, is what the mid-twentieth century Harvard economic historian Alexander Gerschenkron called ‘the relative advantage of backwardness’. The basic idea is that each successive nation can industrialise more quickly by learning from its predecessors without having to follow in their footsteps. The ‘learning’ amounts to innovating more efficient means of achieving and often surpassing the predecessors’ level of development. The post-catastrophic humanity would be in a similar position to benefit from this sense of ‘backwardness’ on a global scale vis-à-vis the pre-catastrophic humanity.

Doomsday scenarios invariably invite discussions of our species’ ‘resilience’ and ‘adaptability’, but these terms are far from clear. I prefer to start with a distinction drawn in cognitive archaeology between ‘reliable’ and ‘maintainable’ artefacts. Reliable artefacts tend to be ‘overdesigned’, which is to say, they can handle all the anticipated forms of stress, but most of those never happen. Maintainable artefacts tend to be ‘underdesigned’, which means that they make it easy for the user to make replacements when disasters strike, which are assumed to be unpredictable.

In a sense, ‘resilience’ and ‘adaptability’ could be identified with either position, but the Cold War’s proactionary approach to Doomsday suggests that the latter would be preferable. In other words, we want a society that is not so dependent on the likely scenarios – including the likely negative ones – that we couldn’t cope if a very unlikely, very negative scenario came to pass. Recalling US Defense Secretary Donald Rumsfeld’s game-theoretic formulation, we need to address the ‘unknown unknowns’, not merely the ‘known unknowns’. Good candidates for the relevant ‘unknown unknowns’ are the interaction effects of relatively independent research and societal trends, which, while benign in themselves, may produce malign consequences – call them ‘emergent’, if you wish.

It is now time for social scientists to present both expert and lay subjects with such emergent scenarios and ask them to pinpoint their ‘negativity’: What would be potentially lost in the various scenarios which would be vital to sustain the ‘human condition’, however defined? The answers would provide the basis for future innovation policy – namely, to recover if not strengthen these vital features in a new guise. Even if the resulting innovations prove unnecessary in the sense that the Doomsday scenarios don’t come to pass, nevertheless they will make our normal lives better – as has been the long-term effect of the Cold War.


Bleed, P. (1986). ‘The optimal design of hunting weapons: Maintainability or reliability?’ American Antiquity 51: 737–47.

Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.

Fuller, S. and Lipinska, V. (2014). The Proactionary Imperative. London: Palgrave (pp. 35–36).

Gerschenkron, A. (1962). Economic Backwardness in Historical Perspective. Cambridge MA: Harvard University Press.

Kahn, H. (1960). On Thermonuclear War. Princeton: Princeton University Press.

This piece is dedicated to Stefan Stern, who picked up on – and ran with – a remark I made at this year’s Brain Bar Budapest, concerning the need for a ‘value-added’ account of being ‘human’ in a world in which there are many drivers towards replacing human labour with ever smarter technologies.

In what follows, I assume that ‘human’ can no longer be taken for granted as something that adds value to being-in-the-world. The value needs to be earned; it can’t simply be inherited. For example, according to animal rights activists, ‘value-added’ claims to brand ‘humanity’ amount to an unjustified privileging of the human life-form, whereas artificial intelligence enthusiasts argue that computers will soon exceed humans at the (‘rational’) tasks that we have historically invoked to create distance from animals. I shall be more concerned with the latter threat, as it comes from a more recognizable form of ‘economistic’ logic.

Economics makes an interesting but subtle distinction between ‘price’ and ‘cost’. Price is what you pay upfront, by mutual agreement, to the person selling you something. In contrast, cost consists in the resources that you forfeit by virtue of possessing the thing. Of course, the cost of something includes its price, but typically much more – and much of it experienced only once you’ve come into possession. Thus, we say ‘hidden cost’ but not ‘hidden price’. The difference between price and cost is perhaps most vivid when considering large life-defining purchases, such as a house or a car. In these cases, any hidden costs are presumably offset by ‘benefits’, the things that you originally wanted – or at least approve after the fact – that follow from possession.

Now, think about the difference between saying, ‘Humanity comes at a price’ and ‘Humanity comes at a cost’. The first phrase suggests what you need to pay your master to acquire freedom, while the second suggests what you need to suffer as you exercise your freedom. The first position has you standing outside the category of ‘human’ but wishing to get in – say, as a prospective resident of a gated community. The second position already identifies you as ‘human’ but perhaps without having fully realized what you had bargained for. The philosophical movement of Existentialism was launched in the mid-20th century by playing with the irony implied in the idea of ‘human emancipation’ – the ease with which the Hell we wish to leave (and hence pay the price) morphs into the Hell we agree to enter (and hence suffer the cost). Thus, our humanity reduces to the leap out of the frying pan of slavery and into the fire of freedom.

In the 21st century, the difference between the price and cost of humanity is being reinvented in a new key, mainly in response to developments – real and anticipated – in artificial intelligence. Today ‘humanity’ is increasingly a boutique item, a ‘value-added’ to products and services which would otherwise be rendered, if not by actual machines, then by humans trying to match machine-based performance standards. Here optimists see ‘efficiency gains’ and pessimists ‘alienated labour’. In either case, ‘humanity comes at a price’ refers to the relative scarcity of what in the past would have been called ‘craftsmanship’. As for ‘humanity comes at a cost’, this alludes to the difficulty of continuing to maintain the relevant markers of the ‘human’, given both changes to humans themselves and improvements in the mechanical reproduction of those changes.

Two prospects are in the offing for the value-added of being human: either (1) to be human is to be the original with which no copy can ever be confused, or (2) to be human is to be the fugitive who is always already planning its escape as other beings catch up. In a religious vein, we might speak of these two prospects as constituting an ‘apophatic anthropology’, that is, a sense of the ‘human’ the biggest threat to which is that it might be nailed down. This image was originally invoked in medieval Abrahamic theology to characterize the unbounded nature of divine being: God as the namer who cannot be named.

But in a more secular vein, we can envisage on the horizon two legal regimes, which would allow for the routine demonstration of the ‘value added’ of being human. In the case of (1), the definition of ‘human’ might come to be reduced to intellectual property-style priority disputes, whereby value accrues simply by virtue of showing that one is the originator of something of already proven value. In the case of (2), the ‘human’ might come to define a competitive field in which people routinely try to do something that exceeds the performance standards of non-human entities – and added value attaches to that achievement.

Either – or some combination – of these legal regimes might work to the satisfaction of those fated to live under them. However, what is long gone is any idea that there is an intrinsic ‘value-added’ to being human. Whatever added value there is, it will need to be fought for tooth and nail.