
— The Atlantic

Facebook has always “manipulated” the results shown in its users’ News Feeds by filtering and personalizing for relevance. But this weekend, the social giant seemed to cross a line when it announced that it had engineered emotional responses two years ago in an “emotional contagion” experiment, published in the Proceedings of the National Academy of Sciences (PNAS).

Since then, critics have examined many facets of the experiment, including its design, methodology, approval process, and ethics. Each of these tacks tacitly accepts something important, though: the validity of Facebook’s science and scholarship. There is a more fundamental question in all this: What does it mean when we call proprietary data research data science?

As a society, we haven’t fully established how we ought to think about data science in practice. It’s time to start hashing that out.


By Clément Vidal — Vrije Universiteit Brussel, Belgium.

I am happy to inform you that I just published a book which deals at length with our cosmological future. I made a short book trailer introducing it, and the book has been mentioned in the Huffington Post and H+ Magazine.

About the book:
In this fascinating journey to the edge of science, Vidal takes on big philosophical questions: Does our universe have a beginning and an end, or is it cyclic? Are we alone in the universe? What is the role of intelligent life, if any, in cosmic evolution? Grounded in science and committed to philosophical rigor, this book presents an evolutionary worldview where the rise of intelligent life is not an accident, but may well be the key to unlocking the universe’s deepest mysteries. Vidal shows how the fine-tuning controversy can be advanced with computer simulations. He also explores whether natural or artificial selection could hold on a cosmic scale. In perhaps his boldest hypothesis, he argues that signs of advanced extraterrestrial civilizations are already present in our astrophysical data. His conclusions invite us to see the meaning of life, evolution, and intelligence from a novel cosmological framework that should stir debate for years to come.
About the author:
Dr. Clément Vidal is a philosopher with a background in logic and cognitive sciences. He is co-director of the ‘Evo Devo Universe’ community and founder of the ‘High Energy Astrobiology’ prize. To satisfy his intellectual curiosity when facing the big questions, he brings together many areas of knowledge such as cosmology, physics, astrobiology, complexity science, evolutionary theory and philosophy of science.
http://clement.vidal.philosophons.com

You can get 20% off with the discount code ‘Vidal2014’ (valid until 31st July)!

Computers will soon be able to simulate the functioning of a human brain. In the near future, artificial superintelligence could become vastly more intellectually capable and versatile than humans. But could machines ever truly experience the whole range of human feelings and emotions, or are there technical limitations?

In a few decades, intelligent and sentient humanoid robots will wander the streets alongside humans, work with humans, socialize with humans, and perhaps one day will be considered individuals in their own right. Research in artificial intelligence (AI) suggests that intelligent machines will eventually be able to see, hear, smell, sense, move, think, create and speak at least as well as humans. They will feel emotions of their own and probably one day also become self-aware.

There may not be any reason per se to want sentient robots to experience exactly all the emotions and feelings of a human being, but it may be interesting to explore the fundamental differences in the way humans and robots can sense, perceive and behave. Tiny genetic variations between people can result in major discrepancies in the way each of us thinks, feels and experiences the world. If we appear so diverse despite the fact that all humans are on average 99.5% identical genetically, even across racial groups, how could we possibly expect sentient robots to feel the exact same way as biological humans? There could be striking similarities between us and robots, but also drastic divergences on some levels. This is what we will investigate below.

MERE COMPUTER OR MULTI-SENSORY ROBOT?

Computers are undergoing a profound mutation at the moment. Neuromorphic chips have been designed based on the way the human brain works, modelling its massively parallel neurological processes using artificial neural networks. This will enable computers to process sensory information like vision and audition much more like animals do. Considerable research is currently devoted to creating a functional computer simulation of the whole human brain. The Human Brain Project is aiming to achieve this by 2016. Does that mean that computers will finally experience feelings and emotions like us? Surely if an AI can simulate a whole human brain, then it becomes a sort of virtual human, doesn’t it? Not quite. Here is why.
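The building block of such artificial neural networks can be illustrated with a minimal sketch (all weights, inputs and the threshold here are invented purely for illustration): a unit that sums weighted sensory inputs and “fires” when the total crosses a threshold, loosely analogous to a biological neuron.

```python
def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a simple threshold,
    loosely analogous to a biological neuron firing or staying silent."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # fire / don't fire

# Toy "sensory" input: three brightness readings from a camera patch.
pixels = [0.9, 0.2, 0.7]
weights = [0.5, -0.3, 0.8]
print(artificial_neuron(pixels, weights, bias=-0.5))  # prints 1 (fires)
```

Real neuromorphic hardware wires millions of such units in parallel rather than evaluating them one by one, but the principle is the same.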

There is an important distinction to be made from the outset between an AI residing solely inside a computer with no sensor at all, and an AI that is equipped with a robotic body and sensors. A computer alone would have a far more limited range of emotions, as it wouldn’t be able to physically interact with its environment. The more sensory feedback a machine can receive, the wider the range of feelings and emotions it will be able to experience. But, as we will see, there will always be fundamental differences between the type of sensory feedback that a biological body and a machine can receive.

Here is an illustration of how limited an AI is emotionally without a sensory body of its own. In animals, fear, anxiety or phobias are evolutionary defense mechanisms aimed at raising our vigilance in the face of danger. That is because our bodies work with biochemical signals involving hormones and neurotransmitters sent by the brain to prompt a physical action when our senses perceive danger. Computers don’t work that way. Without sensors feeding them information about their environment, computers wouldn’t be able to react emotionally.

Even if a computer could remotely control machines like robots (e.g. through the Internet) that are endowed with sensory perception, the computer itself wouldn’t necessarily care if the robot (a discrete entity) is harmed or destroyed, since that would have no physical consequence for the AI itself. An AI could fear for its own well-being and existence, but how is it supposed to know that it is in danger of being damaged or destroyed? It would be the same as a person who is blind, deaf and whose somatosensory cortex has been destroyed. Without feeling anything about the outside world, how could it perceive danger? That problem disappears once the AI is given at least one sense, like a camera to see what is happening around itself. Now if someone comes toward the computer with a big hammer, it will be able to fear for its existence!
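The point can be made concrete with a toy sketch (the function name and thresholds are invented for illustration, not from any real robotics API): without a sensor reading there is nothing to appraise, while even a single proximity reading is enough to ground a crude fear response.

```python
def fear_level(distance_m):
    """Map a single proximity reading (metres to an approaching object)
    to a crude 'fear' response between 0 and 1.
    With no sensor reading at all, there is nothing to fear."""
    if distance_m is None:  # no sensors: no perception, hence no fear
        return 0.0
    # The closer the object, the stronger the response, clamped to [0, 1].
    return max(0.0, min(1.0, 1.0 - distance_m / 10.0))

print(fear_level(None))  # 0.0 — the blind AI cannot perceive the hammer
print(fear_level(0.5))   # 0.95 — hammer half a metre away
```

The sketch also mirrors the remote-robot case: if the reading comes from a body whose damage has no consequence for the AI, nothing obliges the AI to route it through such a function at all.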

WHAT CAN MACHINES FEEL?

In theory, any neural process can be reproduced digitally in a computer, even though the brain is mostly analog. This is hardly a concern, as Ray Kurzweil explained in his book How to Create a Mind. However, it does not always make sense to try to replicate everything a human being feels in a machine.

While sensory feelings like heat, cold or pain could easily be felt from the environment if the machine is equipped with the appropriate sensors, this is not the case for other physiological feelings like thirst, hunger, and sleepiness. These feelings alert us to the state of our body and are normally triggered by hormones such as vasopressin, ghrelin, or melatonin. Since machines have neither a digestive system nor hormones, it would be downright nonsensical to try to emulate such feelings.

Emotions do not arise for no reason. They are either a reaction to an external stimulus, or a spontaneous expression of an internal thought process. For example, we can be happy or joyful because we received a present, got a promotion or won the lottery. These are external causes that trigger the emotions inside our brain. The same emotion can be achieved as the result of an internal thought process. If I manage to find a solution to a complicated mathematical problem, that could make me happy too, even if nobody asked me to solve it and it does not have any concrete application in my life. It is a purely intellectual problem with no external cause, but solving it confers satisfaction. The emotion could be said to have arisen spontaneously from an internalized thought process in the neocortex. In other words, solving the problem in the neocortex causes the emotion in another part of the brain.

An intelligent computer could also prompt some emotions based on its own thought processes, just like the joy or satisfaction experienced by solving a mathematical problem. In fact, as long as it is allowed to communicate with the outside world, there is no major obstacle to a computer feeling true emotions of its own like joy, sadness, surprise, disappointment, fear, anger, or resentment, among others. These are all emotions that can be produced by interactions through language (e.g. reading, online chatting) with no need for physiological feedback.
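This appraisal view of emotion can be sketched as a toy lookup from events, external or internal, to emotional responses (the event names and mappings below are invented purely for illustration):

```python
# Toy appraisal table: events (external or purely internal) mapped to
# the emotion they trigger. A real appraisal model would weigh goals,
# expectations and context rather than use a flat lookup.
APPRAISALS = {
    "received_present": "joy",            # external cause
    "lost_game": "disappointment",        # external cause
    "solved_problem": "satisfaction",     # internal thought process
    "insulted_in_chat": "resentment",     # language interaction, no body needed
}

def appraise(event):
    """Return the emotion triggered by an event, or 'neutral' if none."""
    return APPRAISALS.get(event, "neutral")

print(appraise("solved_problem"))  # satisfaction
print(appraise("unknown_event"))   # neutral
```

Note that nothing in this table requires a body: every trigger can arrive through language or internal computation, which is exactly the class of emotions the paragraph above claims is open to a disembodied AI.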

Now let’s think about how and why humans experience a sense of well-being and peace of mind, two emotions far more complex than joy or anger. Both occur when our physiological needs are met, when we are well fed, rested, feel safe, don’t feel sick, and are on the right track to pass on our genes and keep our offspring secure. These are compound emotions that require other basic emotions as well as physiological factors. A machine without physiological needs, which cannot get sick and does not need to worry about passing on its genes to posterity, would have no reason to feel that complex emotion of ‘well-being’ the way humans do. For a machine, well-being may exist, but in a much more simplified form.

Just like machines cannot reasonably feel hunger because they do not eat, replicating emotions on machines with no biological body, no hormones, and no physiological needs can be tricky. This is the case with social emotions like attachment, sexual emotions like love, and emotions originating from evolutionary mechanisms set in the (epi)genome. This is what we will explore in more detail below.

FEELINGS ROOTED IN THE SENSES AND THE VAGUS NERVE

What really distinguishes intelligent machines from humans and animals is that the former do not have a biological body. This is essentially why they could not experience the same range of feelings and emotions as we do, since many of those feelings inform us about the state of our biological body.

An intelligent robot with sensors could easily see, hear, detect smells, feel an object’s texture, shape and consistency, feel pleasure and pain, heat and cold, and the like. But what about the sense of taste? Or the effects of alcohol on the mind? Since machines do not eat, drink or digest, they wouldn’t be able to experience these things. A robot designed to socialize with humans would be unable to understand and share the feelings of gastronomical pleasure or inebriety with humans. It could have a theoretical knowledge of them, but not first-hand knowledge from an actually felt experience.

But the biggest obstacle to simulating physical feelings in a machine comes from the vagus nerve, which controls such varied things as digestion, ‘gut feelings’, heart rate and sweating. When we are scared or disgusted, we feel it in our guts. When we are in love we feel butterflies in our stomach. That’s because of the way our nervous system is designed. Quite a few emotions are felt through the vagus nerve connecting the brain to the heart and digestive system, so that our body can prepare to court a mate, fight an enemy or escape in the face of danger, by shutting down digestion, raising adrenaline and increasing heart rate. Feeling disgusted can help us vomit something that we have swallowed and shouldn’t have.

Strong emotions can affect our microbiome, the trillions of gut bacteria that help us digest food and that secrete 90% of the serotonin and 50% of the dopamine used by our brain. The thousands of species of bacteria living in our intestines can vary quickly based on our diet, but it has been demonstrated that even emotions like stress, anxiety, depression and love can strongly affect the composition of our microbiome. This is very important because of the essential role that gut bacteria play in maintaining our brain functions. The relationship between gut and brain works both ways. The presence or absence of some gut bacteria has been linked to autism, obsessive-compulsive disorder and several other psychological conditions. What we eat actually influences the way we think too, by changing our gut flora, and therefore also the production of neurotransmitters. Even our intuition is linked to the vagus nerve, hence the expression ‘gut feeling’.

Without a digestive system, a vagus nerve and a microbiome, robots would miss a big part of our emotional and psychological experience. Our nutrition and microbiome influence our brain far more than most people suspect. They are one of the reasons why our emotions and behaviour are so variable over time (in addition to maturity; see below).

SICKNESS, FATIGUE, SLEEP AND DREAMS

Another key difference between machines and humans (or animals) is that our emotions and thoughts can be severely affected by our health, physical condition and fatigue. Irritability is often an expression of mental or physical exhaustion caused by a lack of sleep or nutrients, or by a situation that puts excessive stress on mental faculties and increases our need for sleep and nutrients. We could argue that computers may overheat if used too intensively, and may also need to rest. That is not entirely true if the hardware is properly designed with a super-efficient cooling system and a steady power supply. New types of nanochips may not produce enough heat to have any overheating problem at all.

Most importantly, machines don’t feel sick. I don’t mean just being weakened by a disease or feeling pain, but actually feeling sick, such as indigestion, nausea (motion sickness, sea sickness), or feeling under the weather before tangible symptoms appear. These aren’t enviable feelings of course, but the point is that machines cannot experience them without a biological body and an immune system.

When tired or sick, not only do we need to rest to recover our mental faculties and stabilize our emotions, we also need to dream. Dreams are used to clear our short-term memory cache (in the hippocampus), to replenish neurotransmitters, to consolidate memories (by myelinating synapses during REM sleep), and to let go of the day’s emotions by letting our neurons fire freely. Dreams also allow a different kind of thinking, free of cultural or professional taboos, which increases our creativity. This is why we often come up with great ideas or solutions to our problems during our sleep, and notably during the lucid dreaming phase.

Computers cannot dream and wouldn’t need to, because they aren’t biological brains with neurotransmitters, stressed-out neurons and synapses that need to get myelinated. Without dreams, however, an AI would lack an essential component of feeling like a biological human.

EMOTIONS ROOTED IN SEXUALITY

Being in love is an emotion that brings a male and a female individual (save for some exceptions) of the same species together in order to reproduce and raise one’s offspring until they grow up. Sexual love is caused by hormones, but is not merely the product of hormonal changes in our brain. It involves changes in the biochemistry of our whole body and can even lead to important physiological effects (e.g. on morphology) and long-term behavioural changes. Clearly sexual love is not ‘just an emotion’ and is not purely a neurological process either. Replicating the neurological expression of love in an AI would not simulate the whole emotion of love, but only one of its facets.

Apart from the issue of reproducing the physiological expression of love in a machine, there is also the question of causation. There is a huge difference between an artificially implanted/simulated emotion and one that is capable of arising by itself from environmental causes. People can fall in love for a number of reasons, such as physical attraction and mental attraction (shared interests, values, tastes, etc.), but one of the most important in the animal world is genetic compatibility with the prospective mate. Individuals who possess very different immune systems (HLA genes), for instance, tend to be more strongly attracted to each other and feel more ‘chemistry’. We could imagine that a robot with a sense of beauty and values could appreciate the looks and morals of another robot or a human being and even feel attracted (platonically). Yet a machine couldn’t experience the ‘chemistry’ of sexual love because it lacks the hormones, genes and other biochemical markers required for sexual reproduction. In other words, robots could have friends but not lovers, and that makes sense.

A substantial part of the range of human emotions and behaviours is anchored in sexuality. Jealousy is another good example. Jealousy is intricately linked to love. It is the fear of losing one’s loved one to a sexual rival. It is an innate emotion whose only purpose is to maximize our chances of passing on our genes through sexual reproduction by warding off competitors. Why would a machine, which does not need to reproduce sexually, need to feel that?

One could wonder what difference it makes whether a robot can feel love or not. They don’t need to reproduce sexually, so who cares? If we need intelligent robots to work with humans in society, for example by helping to take care of the young, the sick and the elderly, they could still function as social individuals without feeling sexual love, couldn’t they? In fact, you may not want a humanoid robot to become a sexual predator, especially if it works with kids! Not so fast. Without a basic human emotion like love, an AI simply cannot think, plan, prioritize and behave the same way as humans do. Its way of thinking, planning and prioritizing would rely on completely different motivations. For example, young human adults spend considerable time and energy searching for a suitable mate in order to reproduce.

A robot endowed with an AI of human-level or greater intelligence, but lacking the need for sexual reproduction, would behave, plan and prioritize its existence very differently than humans. That is not necessarily a bad thing, for a lot of conflicts in human society are caused by sex. But it also means that it could become harder for humans to predict the behaviour and motivation of autonomous robots, which could be a problem once they become more intelligent than us in a few decades. The bottom line is that by lacking just one essential human emotion (let alone many), intelligent robots could have very divergent behaviours, priorities and morals from humans. It could be different in a good way, but we can’t know that for sure at present since they haven’t been built yet.

TEMPERAMENT AND SOCIABILITY

Humans are social animals. They typically, though not always (e.g. some types of autism), seek to belong to a group, make friends, share feelings and experiences with others, gossip, seek approval or respect from others, and so on. Interestingly, a person’s sociability depends on a variety of factors not found in machines, including gender, age, level of confidence, health, well-being, genetic predispositions, and hormonal variations.

We could program an AI to mimic a certain type of human sociability, but it wouldn’t naturally evolve over time with experience and environmental factors (food, heat, diseases, endocrine disruptors, microbiome). Knowledge can be learned, but spontaneous reactions to environmental factors cannot.

Humans tend to be more sociable when the weather is hot and sunny, when they drink alcohol and when they are in good health. A machine has no need to react like that, unless once again we intentionally program it to resemble humans. But even then it couldn’t feel everything we feel as it doesn’t eat, doesn’t have gut bacteria, doesn’t get sick, and doesn’t have sex.

MATERNAL WARMTH AND FEELING OF SAFETY IN MAMMALS

Humans, like all mammals, have an innate need for maternal warmth in childhood. In a classic experiment, newborn monkeys were taken away from their biological mother and placed in a cage with two dummy mothers. One of them was warm, fluffy and cosy, but did not have milk. The other one was hard, cold and uncosy, but provided milk. The baby monkeys consistently chose the cosy one, demonstrating that the need for comfort and safety trumps nutrition in infant mammals. Likewise, humans deprived of maternal (or paternal) warmth and care as babies almost always experience psychological problems growing up.

In addition to childhood care, humans also need the feeling of safety and cosiness provided by the shelter of one’s home throughout life. Not all animals are like that. Even as hunter-gatherers or pastoralist nomads, all Homo sapiens need a shelter, be it a tent, a hut or a cave.

How could we expect that kind of reaction and behaviour in a machine that does not need to grow from babyhood to adulthood, cannot know what it is to have parents or siblings, does not need to feel reassured by maternal warmth, and has no biological compulsion to seek shelter? Without those feelings, it is extremely doubtful that a machine could ever truly understand and empathize completely with humans.

These limitations mean that it may be useless to try to create intelligent, sentient and self-aware robots that truly think, feel and behave like humans. Reproducing our intellect, language, and senses (except taste) is the easy part. Then comes consciousness, which is harder but still feasible. But since our emotions and feelings are so deeply rooted in our biological body and its interaction with its environment, the only way to reproduce them would be to reproduce a biological body for the AI. In other words, we are not talking about creating a machine anymore, but about genetically engineering a new living being, or using neural implants for existing humans.

MACHINES DON’T MATURE

The way humans experience emotions evolves dramatically from birth to adulthood. Children are typically hyperactive and excitable and are prone to making rash decisions on impulse. They cry easily and have difficulties containing and controlling their emotions and feelings. As we mature, we learn more or less successfully to master our emotions. Actually, controlling one’s emotions gets easier over time because with age the number of neurons in the brain decreases and emotions get blunter and vital impulses weaker.

The expression of one’s emotions is heavily regulated by culture and taboos. That’s why speakers of Romance languages will generally express their feelings and affection more freely than, say, Japanese or Finnish people. Would intelligent robots also follow one specific human culture, or create a culture on their own ?

Sex hormones also influence the way we feel and express emotions. Male testosterone makes people less prone to emotional display, more rational and cold, but also more aggressive. Female estrogens increase empathy, affection and maternal instincts of protection and care. A good example of the role of biology on emotions is the way women’s hormonal cycles (and the resulting menstruations) affect their emotions. One of the reasons that children process emotions differently than adults is that they have lower levels of sex hormones. As people age, hormonal levels decrease (not just sex hormones), making us more mellow.

Machines don’t mature emotionally, do not go through puberty, do not have hormonal cycles, and do not undergo hormonal changes based on their age, diet and environment. Artificial intelligence could learn from experience and mature intellectually, but not mature emotionally like a child becoming an adult. This is a vital difference that shouldn’t be underestimated. Program an AI to have the emotional maturity of a 5-year-old and it will never grow up. Children (especially boys) cannot really understand the reason for their parents’ anxiety toward them until they grow up and have children of their own, because they lack the maturity and sex hormones associated with parenthood.

We could always run software emulating changes in AI maturity over time, but those changes would not be the result of experiences and interactions with the environment. It may not be useful to create robots that mature like us, but the question debated here is whether machines could ever feel exactly like us or not. This question is not purely rhetorical. Some transhumanists wish to be able one day to upload their minds onto a computer and transfer their consciousness (which may not be possible for a number of reasons). Assuming that it becomes possible, what if a child or teenager decides to upload his or her mind and lead a new robotic existence? One obvious problem is that this person would never fulfill his or her potential for emotional maturity.

The loss of our biological body would also deprive us of our capacity to experience feelings and emotions bound to our physiology. We may be able to keep those already stored in our memory, but we may never dream, enjoy food, or fall in love again.

SUMMARY & CONCLUSION

What emotions could machines experience ?

Even though many human emotions are beyond the range of machines due to their non-biological nature, some emotions could very well be felt by an artificial intelligence. These include, among others:

  • Joy, satisfaction, contentment
  • Disappointment, sadness
  • Surprise
  • Fear, anger, resentment
  • Friendship
  • Appreciation for beauty, art, values, morals, etc.

What emotions and feelings would machines not be able to experience ?

The following emotions and feelings could not be wholly or faithfully experienced by an AI, even with a sensing robotic body, beyond mere implanted simulation.

  • Hunger, thirst, drunkenness, gastronomical enjoyment
  • Various feelings of sickness, such as nausea, indigestion, motion sickness, sea sickness, etc.
  • Sexual love, attachment, jealousy
  • Maternal/paternal instincts towards one’s own offspring
  • Fatigue, sleepiness, irritability
  • Dreams and associated creativity

In addition, machine emotions would run up against the following issues, which would prevent them from feeling and experiencing the world truly like humans.

  • Machines wouldn’t mature emotionally with age.
  • Machines don’t grow up and don’t go through puberty to pass from a relatively asexual childhood stage to a sexual adult stage.
  • Machines cannot fall in love (with the associated emotions, behaviours and motivations) as they aren’t sexual beings.
  • Being asexual, machines are genderless and therefore lack associated behaviour and emotions caused by male and female hormones.
  • Machines wouldn’t experience gut feelings (fear, love, intuition).
  • Machine emotions, intellect, psychology and sociability couldn’t vary with nutrition and microbiome, hormonal changes, or environmental factors like the weather.

It is not completely impossible to bypass these obstacles, but that would require creating a humanoid machine that not only possesses human-like intellectual faculties, but also has an artificial body that can eat and digest, with a digestive system connected to the central microprocessor in the same way that our vagus nerve is connected to our brain. That robot would also need a gender and the capacity to have sex and feel attracted to other humanoid robots or humans, based on predefined programming that serves as an alternative to a biological genome to create a sense of ‘sexual chemistry’ when matched with an individual with a compatible “genome”. It would necessitate artificial hormones to regulate its hunger, thirst, sexual appetite, homeostasis, and so on.

Although we lack the technology and in-depth knowledge of the human body to consider such an ambitious project any time soon, it could eventually become possible one day. One could wonder whether such a magnificent machine could still be called a machine, or simply an artificially made living being. I personally don’t think it should be called a machine at that point.

———

This article was originally published on Life 2.0.

#Exclusive: @HJBentham @ClubOfINFO responds to @Hetero_Sapien @IEET
After the reprint at the ClubOfINFO webzine of Franco Cortese’s excellent IEET (Institute for Ethics and Emerging Technologies) article about how advanced technology clashes with the Second Amendment of the US Constitution, I was interested enough that I decided to put together this response. Changes in technology do eventually force changes in the law, and some laws ultimately have to be scrapped. However, there is an argument to be made that the Second Amendment’s deterrent against tyranny should not be dismissed too easily.
Franco points out that the Second Amendment’s “most prominent justification” is that citizens require a form of self-defense against a potentially corrupt government. In such a case, they may need to take back the state by force through a “citizen militia”.

Technology and “stateness”

The argument given by Franco against the idea of citizens engaging their government in battle leads to a conclusion that “technological growth has made the Second Amendment redundant”. Arms in the Eighteenth Century were “roughly equal” for the citizenry and the military. According to Franco’s article, “in 1791, the only thing that distinguished the defensive or offensive capability of military from citizenry was quantity. Now it’s quality.”
I believe the above point about the state monopoly on force going from being based on quantity to quality can be disputed. The analysis from Franco seems to be that the norms of warfare and the internal effectiveness of state power are set by the level of technology available to the state. Although there is of course a strong technological element involved in these manifestations of state power, it is more accurate to say “stateness” – which military power is only the international reflection of – is due to a combination of having more legitimacy, resources and organization. The effectiveness of this kind of “stateness”, including the ability of the most powerful states to overcome challenges of internecine warfare, has not changed very decisively since the Nineteenth Century.
In fact, stateness is said by many analysts to have declined worldwide since the fall of the Berlin Wall. Since that event and the subsequent dissolution of the USSR, the number of states facing internal crisis seems to have only risen, which suggests stateness is being weakened globally due to many complex pressures. Advanced technology is itself even credited with eroding stateness, as transport and the Internet only give citizens ever more abilities to get around, provoke, rebel and ultimately erode the strength and legitimacy of the state. In most arenas of social change, states face unprecedented challenges from their own citizens because of the unexpected changes in advanced technology that have taken place over the last few decades. Concerning the future of this trend, Franco aptly anticipates in his article that “post-scarcity” technologies would make things even more uncomfortable for the state, pushing it to rely on secrecy and suppression of knowledge to avoid proliferation of devastating weapons.
Much of this commentary on the loss of stateness may seem irrelevant to the right to bear arms in the United States, but it is relevant for reasons that will become clear in this article. We cannot say that the US government has a true monopoly on force due to its technology, and that the potential of a citizen uprising is gone. We have seen too many other “modern” states such as Yugoslavia, Somalia, Lebanon, Libya, Syria and Mali quickly deteriorate into full scale civil war just because groups of determined citizens took up light weapons (many of those rebels have far less skill and technology at their disposal than the average US gun owner).

Internecine warfare in the United States

From what we have seen of civil wars in other countries, we cannot conclude that simple rifles and handguns are a useless path of resistance against a modern state tyranny just because the tyrants will have more lethal options such as cluster bombs and nerve gas. Even the most crudely armed insurrectionists are capable of overthrowing their governments, if they are determined and numerous enough. Having a lightly armed population from the outset, like the US population, only makes it more likely that such a war against tyranny would be widespread and would succeed swiftly.
If we do take the unlikely position of supposing that the United States will degenerate into a true tyranny in the Aristotelian sense, then US citizens certainly need their right to bear arms. More than that, a path of armed resistance using those light weapons could still realistically win. If their cause were just, citizens would be battling in self-defense against a tyrannical regime with plummeting legitimacy, or buying time for contingents of the military to break off and join the rebellion. In such a situation, the sheer number of citizens taking up arms would do more than just demoralize government troops and lead to indecision among them.
The fact of a generally well-armed population would, if they took up arms against their regime, guarantee the existence of a widespread insurgency to such an extent that the rulers would face many years of internecine resistance and live under the constant specter of assassination. Add the internal economic devastation caused by citizens committing acts of sabotage and civil disobedience, foreign sanctions by other states, and even international aid to the insurgents by external actors, and the tyrants could be ousted even by the most lightly armed militia units.
Explaining the imbalance that has prevailed between the military might of states and the internal ability of citizens to resist their ruling regimes with arms, Franco notes that the “overwhelming majority of new technological advances are able to be leveraged by the military before they trickle down to the average citizen through industry.” This is certainly true. However, the summation that resistance is futile would not take into account the treacherous opportunities that exist in every internecine war.
When the state projects force internally, it prefers to call that “law enforcement” for as long as it remains in control of the situation. Even if the violence gets more widespread and becomes civil war, the state denies such a fact until the very last moment. Even then, it prefers to minimize the damage on its own territory, because the damage would ultimately have to be repaired and paid for by the state itself. Even in a civil war situation, the technology brought to bear against citizens by the government would never be as heavy or destructive as the kind of equipment brought to bear against foreign states or non-state actors. This is for the simple reason that the state, in a civil war, has to try to avoid obliterating its own constituents and infrastructure for political reasons. If it is caught committing such a desperate and disproportionate act, it will only undermine itself and give a propaganda coup to its lightly equipped opponents by committing a heavy-handed atrocity.
The imbalance of the superior technology of the United States government in contrast to the basic handguns and rifles of its citizenry is real, but it would have zero significance if a real internecine war took place in the United States. The deadliest weapons in the arsenal of the United States, such as nuclear or biological weapons, would never be used to confront internecine threats, so they are not relevant enough to enter the debate on the Second Amendment.
The concept of taking back government via a citizen militia is not about defeating a whole nation in the conventional sense through raw military strength, but rather about a multifaceted political struggle in which the nation is able to confront and defeat the ruling regime via some form of internecine combat. The US would tend to prefer handling militant and “terrorist” adversaries on its own territory with the bare minimum of heavy equipment and ordnance at all times. Given this, the real technological contest would only be between opposing marksmen and their rifles (any advanced firearms would soon be seized by guerrillas and used back against the state). No ridiculously unbalanced battle with tanks, nukes and generals on one side and “simple folks” with shotguns on the other side would take place. In most civil wars, the use of tanks and warplanes (never mind nukes) only tends to make matters worse for the ruling government by hitting bystanders and further alienating the people on the ground. The US military leadership should know this better than anyone else, having condemned regime after regime for making that same mistake of heavy-handed escalation.
Anti-tyranny insurgency using only light (and easily hidden) armaments is as viable in 2014 as it was in the Eighteenth Century: any sufficiently unpopular regime can be delegitimized and ultimately removed from power by the armed resistance of lightly equipped militia forces.
Franco’s conclusion that the US should neither extend the Second Amendment to cover giving everyone access to ridiculously devastating weapons, nor scrap the Second Amendment altogether, is wise and relevant to helping US society make some difficult decisions. Law (and by extension stateness) is “uncertain in the face of technologies’ upward growth.” States that want to remain popular should try to be as adaptive as possible to new (and old) technologies and ideas, and not be swayed by any single narrow-minded idea or program for society. If the American people distrust their system of government enough to keep their right to bear arms, for fear of tyranny, then the Second Amendment ought to remain.

By Harry J. Bentham. More articles by Harry J. Bentham

This article originally appeared at the techno-politics magazine, ClubOfINFO

The technological singularity requires the creation of an artificial superintelligence (ASI). But does that ASI need to be modelled on the human brain? Is it even necessary to be able to fully replicate the human brain and consciousness digitally in order to design an ASI?

Animal brains and computers don't work the same way. Brains are massively parallel three-dimensional networks, while computers still process information in a very linear fashion, although millions of times faster than brains. Microprocessors can perform amazing calculations, far exceeding the speed and efficiency of the human brain, using completely different patterns to process information. The drawback is that traditional chips are not good at processing massively parallel data, solving complex problems, or recognizing patterns.

Newly developed neuromorphic chips model the massively parallel way the brain processes information using, among other techniques, neural networks. Neuromorphic computers should ideally use optical technology, which can potentially process trillions of simultaneous calculations, making it possible to simulate a whole human brain.

The Blue Brain Project and the Human Brain Project, funded by the European Union, the Swiss government and IBM, are two such attempts to build a full computer model of a functioning human brain using a biologically realistic model of neurons. The Human Brain Project aims to achieve a functional simulation of the human brain by 2016.

Neuromorphic chips make it possible for computers to process sensory data, detect and predict patterns, and learn from experience. This is a huge advance in artificial intelligence, a step closer to creating an artificial general intelligence (AGI), i.e. an AI that could successfully perform any intellectual task that a human being can.
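The "learn from experience" capability described above can be sketched, in greatly simplified form, with a classic perceptron: a toy model that adjusts its weights from labelled examples until it recognizes a pattern. This is only an illustration of the error-driven learning principle, not of how neuromorphic hardware actually works, and the function names are invented for the example:

```python
# Toy perceptron: learns a pattern from labelled examples by
# nudging its weights whenever it makes a mistake.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # error drives the update
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn the logical-AND pattern from four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

After a few passes over the examples, the weights settle on values that classify the pattern correctly; neuromorphic systems apply this same error-driven idea across vastly many parallel units rather than in a sequential loop.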

Think of an AGI inside a humanoid robot: a machine that looks and behaves like us, but with customizable skills, and that can perform practically any task better than a real human. These robots could be self-aware and/or sentient, depending on how we choose to build them. Manufacturing robots wouldn't need to be, but what about social robots living with us, taking care of the young, the sick or the elderly? Surely it would be nicer if they could converse with us as if they were conscious, sentient beings like us, a bit like the AI in Spike Jonze's 2013 movie Her.

In a not too distant future, perhaps less than two decades, such robots could replace humans for practically any job, creating a society of abundance where humans can spend their time however they like. In this model, highly capable robots would run the economy for us. Food, energy and most consumer products would be free or very cheap, and people would receive a fixed monthly allowance from the government.

This all sounds very nice. But what about an AI that would greatly surpass the brightest human minds? An artificial superintelligence (ASI), or strong AI (SAI), with the ability to learn and improve on itself, could potentially become millions or billions of times more intelligent and capable than humans. The creation of such an entity would theoretically lead to the mythical technological singularity.

Futurist and inventor Ray Kurzweil believes that the singularity will happen some time around 2045. Among Kurzweil’s critics is Microsoft cofounder Paul Allen, who believes that the singularity is still a long way off. Allen argues that for a real singularity-level computer intelligence to be built, the scientific understanding of how the human brain works will need to accelerate exponentially (like digital technologies), and that the process of original scientific discovery just doesn’t behave that way. He calls this issue the complexity brake.

Without interfering in the argument between Paul Allen and Ray Kurzweil (who replied convincingly here), the question I want to discuss is whether it is absolutely necessary to fully understand and replicate the way the human brain works to create an ASI.

GREAT INTELLIGENCE DOESN’T HAVE TO BE MODELLED ON THE HUMAN BRAIN

It is natural for us to think that humans are the culmination of intelligence, simply because that is the case in the biological world on Earth. But that doesn't mean that our brain is perfect, or that other forms of higher intelligence cannot exist if they aren't based on the same model.

If extraterrestrial beings with a greater intelligence than ours exist, it is virtually unthinkable that their brains are shaped and function like ours. The process of evolution is so random and complex that even if life were created again on a planet identical to Earth, it wouldn't unfold the same way it did for us, and consequently the species wouldn't be the same. What if the Permian-Triassic extinction, or any other mass extinction event, hadn't occurred? We wouldn't be here. But that doesn't mean that other intelligent animals wouldn't have evolved instead of us. Perhaps there would have been octopus-like creatures more intelligent than humans, with a completely different brain structure.

It’s pure human vanity and short-sightedness to think that everything good and intelligent has to be modelled on us. That is the kind of thinking that led to the development of religions with anthropomorphized gods. Humble or unpretentious religions like animism or Buddhism either have no human-like deity or no god at all. More arrogant or self-righteous religions, be they polytheistic or monotheistic, have typically imagined gods as superhumans. We don’t want to make the same mistake with artificial superintelligence. Greater-than-human intelligence does not have to be an inflated version of human intelligence, nor should it be based on our biological brains.

The human brain is the fortuitous result of four billion years of evolution. Or rather, it is one tiny branch in the grand tree of evolution. Birds have much smaller brains than mammals and are generally considered stupid compared to most mammals. Yet crows have reasoning skills that can exceed those of a preschooler. They display conscious, purposeful behaviour, combined with a sense of initiative, elaborate problem-solving abilities of their own, and they can even use tools. All this with a brain the size of a fava bean. A 2004 study from the departments of animal behavior and experimental psychology at the University of Cambridge claimed that crows were as clever as the great apes.

Clearly there is no need to replicate the intricacies of a human cortex to achieve consciousness and initiative. Intelligence depends not only on brain size, the number of neurons, or cortex complexity, but also on the brain-to-body mass ratio. That is why cattle, which have brains as big as chimpanzees’, are stupider than ravens or mice.
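As a back-of-the-envelope illustration of the brain-to-body mass ratio (brain mass divided by body mass), here is a short sketch; the mass figures below are rough approximations chosen only to show the scale of the differences:

```python
# Brain-to-body mass ratio = brain mass / body mass.
# Mass figures are rough, illustrative approximations.
animals = {
    "crow":  {"brain_kg": 0.010, "body_kg": 0.5},
    "cow":   {"brain_kg": 0.45,  "body_kg": 600.0},
    "human": {"brain_kg": 1.35,  "body_kg": 65.0},
}

for name, m in animals.items():
    ratio = m["brain_kg"] / m["body_kg"]
    print(f"{name}: brain-to-body ratio ~ {ratio:.5f}")
```

On these figures the crow and the human both come out near 0.02, while the cow, despite its far larger brain, scores far lower, which is the point the paragraph makes.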

But what about computers? Computers are pure “brains”. They don’t have bodies. And indeed, as computers get faster and more efficient, their size tends to decrease, not increase. This is yet another example of why we shouldn’t compare biological brains and computers.

As Ray Kurzweil explains in his reply to Paul Allen, learning about how the human brain works only serves to provide “biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. […] The way that these massively redundant structures in the brain differentiate is through learning and experience. The current state of the art in AI does, however, enable systems to also learn from their own experience.” He then adds that IBM’s Watson learned most of its knowledge by reading on its own.

In conclusion, there is no rational reason to believe that an artificial superintelligence couldn’t come into being without being entirely modelled on the human brain, or any animal brain. A computer chip will never be the same as a biochemical neural network, and a machine will never feel emotions the same way as us (although they may feel emotions that are out of the range of human perception). But notwithstanding these differences, some computers can already acquire knowledge on their own, and will become increasingly good at it, even if they don’t learn exactly the same way as humans. Once given the chance to improve on themselves, intelligent machines could set in motion a non-biological evolution leading to greater than human intelligence, and eventually to the singularity.

————–

This article was originally published on Life 2.0.

- @ClubOfINFO — A massive leap forward in synthetic life, recently published in Nature, is the expansion of the alphabet of DNA from four letters to six by synthetic biologists – the technicians to whom we entrust the great task of reprogramming life itself.

Breakthroughs such as the above are quite certain to alert more and more people to synthetic biology and its possible consequences. For as long as such breathtaking discoveries continue to be made in this area of research, it is inevitable that latent fears among society will come closer to the surface.
There is likely to be a profound distrust, whether inculcated by religion or by science fiction horror movies and literature, towards the concept of tampering with nature and especially the very building blocks that brought us into existence. While the people with this profoundly negative reaction are not sure what they are warning against, they are motivated by a vitalistic need to believe that the perversion of life is going to provoke hidden – almost divine – repercussions.
Is it really true that no-one should be meddling with something so fundamental to life, or is synthetic biology the science of our century, our civilization’s key to unlimited energy? Whatever the answer may be, the science enabling it already exists and is growing rapidly, and history seems to show that any technology once invented is impossible to contain.
The fact that synthetic base pairs now exist should confirm, for many, the beginning of humanity’s re-engineering of the structures of life itself. As it is unprecedented in our evolution, we are presented with an ethical question and all points of view should be considered, no matter how radical or conservative they are.
It is hard to find a strong display of enthusiasm for the use of synthetic biology as a solution to the world’s greatest problems, even among the transhumanists and techno-progressives. Most of the popular enthusiasm for technological change, particularly the radical improvement of life and the environment through technology, focuses on artificial intelligence, nanotechnology, and things like solar cells as the solution to energy crises. There is not much of a popular case being made for synthetic biology as one of the keys to civilization’s salvation and humanity’s long-term survival, but there should be. The first obstacles to such a case are most likely fear and prejudice.
Even among those theorists who offer the most compelling arguments about self-sustaining technologies and their potential to democratize and change the means of production, enthusiasm for synthetic biology is purposely withheld. Yannick Rumpala’s paper Additive manufacturing as global remanufacturing of politics has a title that speaks for itself. It sees in 3D printing the potential to exorcize some of the most oppressive structural inevitabilities of the current division of labor, transforming economics and politics to be more network-based and egalitarian. When I suggested to Yannick that synthetic organisms – the most obvious choice of technology able to self-replicate and become universally available at every stratum of global society – deserve the same consideration, he was reserved. This was half due to not having reflected on biotechnology’s democratic possibilities, and half due to a principled rejection of “artificial environments”.
Should synthetic biology make people nervous rather than excited, and should it be rejected as controversial and potentially dangerous rather than embraced as a potentially world-changing and highly democratic technology? The second tendency that results in a rejection of synthetic biology by those who normally go about endorsing technology as the catalyst for social change is the tendency to point to a very specific threat – a humanity-threatening virus.
This second rejection of synthetic biology is easier to respond to than the first, because it is very specific. In fact, the threat is discussed in sufficient depth by synthetic biology’s leading scientist, J. Craig Venter, in his 2013 book Life at the Speed of Light. In anticipation of a viral threat, “bio-terror” is considered the top danger by the US government, but “bio-error” is seen by Venter as an even bigger danger. There is a possibility of individual accidents using synthetic biology, analogous to medical accidents from overdoses. It could involve a virus introduced as a treatment for cancer becoming dangerous (like in the movie, I Am Legend). This is especially possible if the technology becomes ubiquitous and “DIY”, with individuals customizing their own treatments by synthesizing viruses. However, many household materials and technologies already present the same level of threat to lone individuals, so there is no reason to focus on the popular use of synthetic biology as an extraordinary threat.
A larger scale disaster is far easier to prevent than the death or illness of a lone individual from his own synthetic biology accident. A bio-terror attack, Venter writes, would be extremely difficult using synthetic biology. Synthetic biology is going to give medical professionals the ability to quickly sequence genomes and transmit them over the airwaves to synthesize new vaccines. This would only make it easier to fight against bioterror or a potentially apocalyptic virus, as the threat could be found and sequenced by computers, with the cure being synthesized and introduced almost immediately. Despite the fact that synthetic biology provides the best defense against its own possible threats, it is still important to be balanced in our recognition of the benefits and threats of this technology.
More dangerous than a virus breaking loose from the lab, Venter recognizes, is the potential for the abuse of synthetic biology by hostile governments. Of most concern, custom viruses could be used as assassins against individuals, whether by governments or conspirators. A cold virus could be created to have no effect on most people, but be deadly to the President of the United States. All you would need to do is get access to a sample of the President’s genetic material, sequence it, and develop a corresponding virus that exploits a unique weakness in his or her DNA. This danger seems more worthy of concern than an apocalyptic virus or a devastating bioterrorist attack striking the whole of humanity.
The ethical burden on those who work with synthetic life, as Venter takes from a US government bioethics study, requires “a balance between the pessimistic view of these efforts as yet another example of hubris and the optimistic view of their being tantamount to ‘human progress’”. Synthetic biologists must be “good stewards”, and must “move genomic research forward with caution, armed with insights from value traditions with respect to the proper purposes and uses of knowledge.”
However, there is also an undeniable reason to embrace synthetic biology as a solution to many of the world’s most urgent problems. J. Craig Venter’s own words confirm that synthetic life deserves to be included in Yannick Rumpala’s analysis, as a democratic technology that can transform global politics and economics and counter disparity in the world:

“Creating life at the speed of light is part of a new industrial revolution that will see manufacturing shift away from the centralized factories of the past to a distributed, domestic manufacturing future, thanks to 3-d printers.”

There may be a terrible threat from synthetic biology, but it will not necessarily be bio-error or bio-terror. The abuse could come from none other than a very familiar leviathan that has already violated the trust of its citizens before: the supposedly incorruptible United States government. Already, there is an interest in sequencing everyone’s genomes and placing them on a massive database, ostensibly for medical purposes. One cannot help but connect this with the US government’s fascination with tracking and monitoring its own citizens. If it is indeed possible to customize a virus to target an individual, the killer state will almost certainly keep the military option of synthetic biology on the table – a possible way of carrying out “targeted killings” around the world in a more sophisticated and secretive manner than ever before.
The threats of synthetic biology are elusive and verge on being conspiracy theories or overused movie plots, but the magnificent potential of synthetic biology to eliminate inequality and suffering in the world is clear and present. In fact, the greatest bio-disaster in the history of the world may be humanity’s reluctance to remanufacture life in order to make more efficient use of the world’s declining natural resources. At the same time, the belief that ubiquitous synthetic biology will threaten life is secondary and distracting, as the true responsibility for unjustly threatening life is likely to always be with the state.

By Harry J. Bentham. More articles by Harry J. Bentham

Originally published on 13 May 2014 at the Institute for Ethics and Emerging Technologies (IEET)

Today’s emerging technologies will be tomorrow’s liberators. Subscribe for similar articles.

— Popular Science

It happens quickly—more quickly than you, being human, can fully process.

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.

Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.

Read more

Although I have already mentioned a recent technical note on the application of Astronomical Observation to LHC/Collider Safety in comments to other posts here and there, I have not posted specifically about it until now. So finally, a short mention:

The technical note follows on from a modest paper I wrote in 2012 (Discussions on the Hypothesis that Cosmic Ray Exposure on Sirius B Negates Terrestrial MBH Concerns from Colliders), which concerned micro-black hole (MBH) production and the white dwarf safety assurance. There I demonstrated not only that most white dwarf stars are not suitable as a safety assurance, but that those hand-picked for the 2008 safety report had magnetic field strengths measured to just 99% confidence within the range required for safety assurance. That is not to say that the LHC safety argument was only 99% reliable – just that one of its cornerstone assurances was. The affirmation of these measurements was needed for a safety assurance for LHC p-p collisions based on astronomical observation – an assurance grounded in verifiable measurement rather than in Hawking Radiation theory. The technical note captures the official LSAG (CERN) response on the matter after internal review at CERN in late 2012, conclusions which had remained archived in email discussions until recently, when they were formalised into this technical note:

Link to the technical note: http://environmental-safety.webs.com/TechnicalNote-EnvSA01.pdf

mostly harmless

That conclusion was, fortunately and as expected, one of safety: significant progress has been made in the accuracy of B-field measurement technology since the original 2008 safety report, and a survey of the latest literature finds extensive examples of white dwarfs with fields measured with uncertainty ranges within the 1–100 kG range required for assurance. However, despite an eventual conclusion of safety on this one matter (MBH concerns from p-p collisions), I would like to reiterate a point I made back in 2008: there is an obligation on industry to keep safety debate open and honest. We are not likely to see credible argument on any of the other concerns about LHC operations (strangelet production, magnetic monopoles, de Sitter space transitions, vacuum bubbles, and so on), but these discussions do illustrate that re-visitations can be necessary.

Whilst onwards we strive to find new understandings to the universe, and to engineer new ways of being, we need to stand back and take a look at where we are, lest we get lost.

White Swan’s Pandora Versus Cassandra Predictions! By Mr. Andres Agostini at https://lifeboat.com/blog/2014/04/white-swan

WHITE

Cassandra: What is going to happen in the World as per the Euro-Asian superpower?

Pandora: First, Cold War II and the preconditions of a global war of trade and commerce are in place. Second: let us hope that the switches that set M.A.D. in motion are never turned on.

Cassandra: What is going to happen in Southern Europe’s Public Health-care and Retirement Systems?

Pandora: Those safety nets will be somewhere between insolvent and meager, if not downed entirely. Citizens will either become inventors and find their own solutions, or bestow upon themselves self-inflicted death sentences.

Cassandra: What is going to happen by 2013?

Pandora: Bots will have human-level intelligence of their own. And they will be competing for jobs and professional contract services against un-enhanced humans.

Cassandra: What are Ministers of Defense and Intelligence Agencies going to do?

Pandora: They will secure increased budgets for scientists to bring about extreme bots that make the human soldier a thing of the past.

Cassandra: What is going to happen to major democracies soon?

Pandora: Well, most of them are Plutocracies already. But in pursuing stricter control of the citizenry they will become Stratocracies, also ruled by Aristocracies and Technocracies.

Cassandra: What is going to happen to the Superrich 1%?

Pandora: The 1% is going to get infinitely more in the zillions. And the 99% is going to become more indignant and chaotic.

Cassandra: What is going to be brought about by techno-snooping?

Pandora: Police states, all over the place.

Cassandra: Who is going to counter economic, political and military dominance in the Pacific Ocean?

Pandora: China and Russia.

Cassandra: Where can I read the full predictions?

Pandora: Go and read the White Swan at https://lifeboat.com/blog/2014/04/white-swan

Mr. Andres Agostini
Chief Polymath Officer (CPO)
The Worldwide Ambassador at the Lifeboat Foundation at https://lifeboat.com/ex/bios.andres.agostini
POINT OF CONTACT AND QUERY: www.linkedin.com/in/andresagostini
PROFESSIONAL SERVICE: http://ThisSuccess.wordpress.com

AS A CONSULTANT, MANAGER, STRATEGIST AND RESEARCHER, ANDRES WORKS AND HAS WORKED WITH INSTITUTIONS — AND THE RESPECTIVE EXECUTIVES OF SAID ORGANIZATIONS — INCLUDING THOSE ONES SUCH AS:

► Toyota,
► Mitsubishi,
► World Bank,
► Shell,
► Statoil,
► Total,
► Exxon,
► Mobil,
► PDVSA, Citgo,
► GE,
► GMAC,
► TNT Express,
► AT&T
► GTE,
► Amoco,
► BP,
► Abbott Laboratories,
► World Health Organization,
► Ernst & Young Consulting,
► SAIC (Science Applications International Corporation),
► Pak Mail,
► Wilpro Energy Services,
► Phillips Petroleum Company,
► Dupont,
► Conoco,
► ENI (Italy’s petroleum state-owned firm),
► Chevron,
► LDG Management (HCC Benefits).
► Liberty Mutual (via its own Seguros Caracas)
► MAPFRE (via its own Seguros La Seguridad)
► AES Corporation (via its own Electricidad de Caracas)
► Lafarge
► The University of Arkansas at Little Rock’s Most Honorable and Respected Professor Dr. Daniel Berleant, PhD.


Book Review: The Human Race to the Future by Daniel Berleant (2013) (A Lifeboat Foundation publication)


From CLUBOF.INFO

The Human Race to the Future (2014 edition) is a publication of the Lifeboat Foundation, a scientific think tank, first made available in 2013. It covers a number of dilemmas fundamental to the human future that should be of great interest to all readers. Daniel Berleant’s approach to popularizing science is more entertaining than that of many other science writers, and the book contains many surprises and much useful knowledge.

Some of the science covered in The Human Race to the Future, such as future ice ages and predictions of where natural evolution will take us next, is not immediately relevant to our lives and politics, but it still makes fascinating reading. The rest of the science in the book is closely linked to society’s immediate future and deserves serious consideration by commentators, activists and policymakers, because it is only going to become more important as the world moves forward.

The book makes many warnings and calls for caution, but it also offers an optimistic forecast of how society might look in the future. For example, it is “economically possible” to have a society where all the basics are free and all work is essentially optional, a way for people to turn their hobbies into a means of earning more possessions (p. 6–7).

A transhumanist possibility of interest in The Human Race to the Future is the change in how people communicate, including closing the gap between thought and action to create instruments (perhaps even mechanical bodies) that respond to thought alone. The world is projected to move away from keyboards and touchscreens towards mind-reading interfaces (p. 13–18). Such interfaces would be invaluable for people with physical disabilities, and for soldiers in the arms race to improve response times in lethal situations.

To critique the above point, drone operators and power-armor wearers in future armies would likely be keen to link their brains directly to their hardware, and emerging mind-reading technology would make that possible. However, there is reason to doubt that effective teamwork is possible while relying on such interfaces. Verbal and visual interfaces are better attuned to humans as social animals, letting us hear or see our colleagues’ intentions and review their actions as they happen, which allows for better teamwork. A soldier, for example, may be happy with his own improved reaction times when controlling equipment directly with his brain, but his fellow soldiers and officers may only be irritated by the lack of an intermediate phase in which to see his intent and countermand his actions before he completes them. Some helicopter and vehicle accidents are averted only by one crewman seeing another’s error and correcting him in time. If vehicles were controlled by mind-reading, such errors would increasingly prove fatal.

Reading and research could also develop in a radically new direction, unlike anything before in the history of communication. The Human Race to the Future speculates that beyond articles as they exist now (e.g. Wikipedia articles), there could be custom-generated articles specific to the user’s research goal or browsing. One’s own query could shape the layout and content of each article as it is generated. This way, users would not need to wade through reams of irrelevant information to answer a very specific query (p. 19–24).

Echoing a view I have expressed in my own writing, the book sees industrial civilization as burdened above all by too much centralization, e.g. oil refineries. This endangers civilization and threatens collapse if something should go wrong (p. 32–33). For example, an electromagnetic pulse (EMP) resulting from a solar storm could cause serious damage because of the centralization of electrical infrastructure. Digital sabotage could also threaten such infrastructure (p. 34–35).

The solution to this problem is decentralization, as “where centralization creates vulnerability, decentralization alleviates it” (p. 37). Solar cells are one example of decentralized power production (p. 37–40), but there is also much promise in home fuel production using ethanol and biogas (p. 40–42). Beyond fuel, much benefit could come from decentralized, highly localized food production, even “labor-free” and “using robots” (p. 42–45). These possibilities deserve maximum attention for the sake of world welfare, given increasing UN concerns about supplying adequate food and energy to a growing global population. There need not be a food vs. fuel debate, as the only acceptable solution is to engineer solutions to both problems.

An additional option for increasing food production is artificial meat, which should aim to replace reliance on livestock. Reliance on livestock has an “intrinsic wastefulness” that artificial meat does not, so it makes sense for artificial meat to become the cheapest option in the long run (p. 62–65). Perhaps stranger and more profound is the option of genetically enhancing humans to make better use of food and other resources (p. 271–274).

On a related topic, sequencing our own genomes could have “major impacts, from medicine to self-knowledge” (p. 46–51). However, the book does not mention synthetic biology or the potential impacts of J. Craig Venter’s work, as explained in works such as Life at the Speed of Light. This would certainly be worth adding if future editions of the book aim to include additional detail.

Related to synthetic biology is the book’s discussion of genetically engineering plants to produce healthier or more abundant food. Alternatively, plants could be genetically programmed to extract metal compounds from the soil (p. 213–215). However, we must be aware that this could also lead to threats, such as “superweeds that overrun the world,” similar to the flora in John Wyndham’s The Day of the Triffids (p. 197–219). Synthetic biology products could also accidentally expose civilization to microorganisms with unknown consequences, perhaps even as dangerous as the alien contagions depicted in fiction. On the other hand, they could lead to potentially unlimited resources, with vats of bacteria capable of manufacturing oil from simple chemical feedstocks. Indeed, “genetic engineering could be used to create organic prairies that are useful to humans” (p. 265), literally redesigning and upgrading our own environment to give us more resources.

The book advocates that politics should focus on long-term thinking, e.g. to deal with global warming, and should involve “synergistic cooperation” rather than “narrow national self-interest” (p. 66–75). This is a very important point, and it may coincide with the prediction that nation-states in their present form are flawed and too slow-moving. Nation-states may be increasingly incapable of meeting the challenges of an interconnected world, in which national narratives produce less and less legitimate security thinking and transnational identities become more important.

Close to issues of security, The Human Race to the Future considers nuclear proliferation, and argues that the reasons for proliferation need to be investigated in more depth in order to reduce the incentives behind it. To avoid further research, on the assumption that it has already been sufficiently completed, is “downright dangerous” (p. 89–94). Such a call is certainly necessary at a time when there is still hostility against developing countries with nuclear programs, hostility that is simply inflammatory and makes the world more dangerous. To a large extent, nuclear proliferation is inevitable in a world where countries are permitted to bomb one another over little more than suspicions and fears.

Another area covered in the book is the AI singularity, described here as the point at which a computer is sophisticated enough to design a more powerful computer than itself. While this could mean unlimited engineering and innovation without the need for human imagination, there are also great risks: for example, a “corporbot” or “robosoldier,” determined to promote the interests of an organization or to defeat enemies respectively, could, as science fiction has repeatedly warned, become a runaway entity that no longer listens to human orders (p. 83–88, 122–127).

A more distant possibility explored in Berleant’s book is the colonization of other planets in the solar system (p. 97–121, 169–174). There is the well-taken point that technological pioneers should already be trying to settle remote and inhospitable locations on Earth, such as Antarctica, to perfect the technology and society of self-sustaining settlements (p. 106). Disaster scenarios considered in the book that may necessitate moving off-world in the long term include a hydrogen sulfide poisoning apocalypse (p. 142–146) and a giant asteroid impact (p. 231–236).

The Human Race to the Future is a realistic and practical guide to the dilemmas fundamental to the human future. Of particular interest to general readers, policymakers and activists should be the issues that concern the near future, such as genetic engineering aimed at conservation of resources and the achievement of abundance.

By Harry J. Bentham

Originally published on April 22 in h+ Magazine
