
Would you have your brain preserved? Do you believe your brain is the essence of you?

To noted American neuroscientist and futurist Ken Hayworth, the answer is an emphatic, “Yes.” He is currently developing machines and techniques to map brain tissue at the nanometer scale — the key to encoding our individual identities.

A self-described transhumanist and President of the Brain Preservation Foundation, Hayworth aims to perfect existing preservation techniques, such as cryonics, and to explore emerging alternatives that could change the status quo. Currently, no brain preservation option offers systematic, scientific evidence of how much human brain tissue is actually preserved by today’s experimental methods. These methods include vitrification, the procedure used in cryonics to prevent human organs from being destroyed by ice formation when tissue is cooled for cryopreservation.

Hayworth believes we can achieve his vision of preserving an entire human brain at an accepted and proven standard within the next decade. If Hayworth is right, is there a countdown to immortality?

To find out more, please take a look at the Galactic Public Archives’ newest video. We’d love to hear your thoughts.

Cheers!

Question: A Counterpoint to the Technological Singularity?


Douglas Hofstadter, a professor of cognitive science at Indiana University, said of the book The Singularity Is Near (ISBN: 978-0143037880):

“ … A very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad …”

And, for instance:

“… Technology is the savior for everything. That’s the point of this course. Technology is accelerating, everything is going to be good, technology is your friend … I think that’s a load of crap …” By Dr. Jonathan White

Back to the White Swan hardcore:

That discourse can be entertained at some forthcoming Renaissance, not now. Opposing this idea would be outrageously counterproductive to ensuring the non-annihilation of Earth’s inhabitants.

People who destroy outrageous Black Swans eternally beforehand, engaging in super-natural and preter-natural preparations for Known and Unknown Outliers, and thus observing — in all practicality — the successful and prevailing White Swan and Transformative and Integrative Risk Management interdisciplinary problem-solving methodology, include:

(1.-) Sir Martin Rees PhD (cosmologist and astrophysicist), Astronomer Royal, Cambridge University Professor and former Royal Society President.

(2.-) Dr. Stephen William Hawking CH CBE FRS FRSA is an English theoretical physicist, cosmologist, author and Director of Research at the Centre for Theoretical Cosmology within the University of Cambridge. Formerly: Lucasian Professor of Mathematics at the University of Cambridge.

(3.-) Prof. Nick Bostrom Ph.D. is a Swedish philosopher at St. Cross College, University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, the reversal test, and consequentialism. He holds a PhD from the London School of Economics (2000). He is the founding director of both The Future of Humanity Institute and the Oxford Martin Programme on the Impacts of Future Technology as part of the Oxford Martin School at Oxford University.

(4.-) The US National Intelligence Council (NIC) […] The National Intelligence Council supports the Director of National Intelligence in his role as head of the Intelligence Community (IC) and is the IC’s center for long-term strategic analysis […] Since its establishment in 1979, the NIC has served as a bridge between the intelligence and policy communities, a source of deep substantive expertise on intelligence issues, and a facilitator of Intelligence Community collaboration and outreach […] The NIC’s National Intelligence Officers — drawn from government, academia, and the private sector — are the Intelligence Community’s senior experts on a range of regional and functional issues.

(5.-) U.S. Homeland Security’s FEMA (Federal Emergency Management Agency).

(6.-) The CIA or any other U.S. Government agencies.

(7.-) Stanford Research Institute (now SRI International).

(8.-) GBN (Global Business Network).

(9.-) Royal Dutch Shell.

(10.-) British Doomsday Preppers.

(11.-) Canadian Doomsday Preppers.

(12.-) Australian Doomsday Preppers.

(13.-) American Doomsday Preppers.

(14.-) Disruptional Singularity Book (ASIN: B00KQOEYLG).

(15.-) Scientific Prophets of Doom at https://www.youtube.com/watch?v=9bUe2-7jjtY

White Swans are always prepared for Known and Unknown Outliers, and MOST FLUIDLY change the theater of operations by permanently updating and upgrading the designated preparations.

Authored by Mr. Andres Agostini
White Swan Book Author
www.linkedin.com/in/andresagostini
www.amazon.com/author/Agostini

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn — the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Here it is worth recalling that the Cold War succeeded on its own terms: None of the worst case scenarios were ever realized, even though many people were mentally prepared to make the most of the projected adversities. This is one way to think about how the internet itself arose, courtesy of the US Defense Department’s interest in maintaining scientific communications in the face of attack. In other words, rather than trying to prevent every possible catastrophe, the way to deal with ‘unknown unknowns’ is to imagine that some of them have already come to pass and redesign the world accordingly so that you can carry on regardless. Thus, Herman Kahn’s projection of a thermonuclear future provided grounds in the 1960s for the promotion of, say, racially mixed marriages, disability-friendly environments, and the ‘do more with less’ mentality that came to characterize the ecology movement.

Kahn was a true proactionary thinker. For him, the threat of global nuclear war raised Joseph Schumpeter’s idea of ‘creative destruction’ to a higher plane, inspiring social innovations that would be otherwise difficult to achieve by conventional politics. Historians have long noted that modern warfare has promoted spikes in innovation that in times of peace are then subject to diffusion, as the relevant industries redeploy for civilian purposes. We might think of this tendency, in mechanical terms, as system ‘overdesign’ (i.e. preparing for the worst but benefitting even if the worst doesn’t happen) or, more organically, as a vaccine that converts a potential liability into an actual benefit.

In either case, existential risk is regarded in broadly positive terms, specifically as an unprecedented opportunity to extend the range of human capability, even under radically changed circumstances. This sense of ‘antifragility’, as the great ‘black swan’ detector Nassim Nicholas Taleb would put it, is the hallmark of our ‘risk intelligence’, the phrase that the British philosopher Dylan Evans has coined for the demonstrated capacity people have to make step-change improvements in their lives in the face of radical uncertainty. From this standpoint, Bostrom’s superintelligence concept severely underestimates the adaptive capacity of human intelligence.

Perhaps the best way to see just how much Bostrom shortchanges humanity is to note that his crucial thought experiment requires a strong ontological distinction between humans and superintelligent artefacts. Where are the cyborgs in this doomsday scenario? Reading Bostrom reminds me that science fiction did indeed make progress in the twentieth century, from the world of Karel Čapek’s Rossum’s Universal Robots in 1920 to the much subtler blending of human and computer futures in the works of William Gibson and others in more recent times.

Bostrom’s superintelligence scenario began to be handled in more sophisticated fashion after the end of the First World War, popularly under the guise of ‘runaway technology’, a topic that received its canonical formulation in Langdon Winner’s 1977 Autonomous Technology: Technics out of Control, a classic in the field of science and technology studies. Back then the main problem with superintelligent machines was that they would ‘dehumanize’ us, less because they might dominate us but more because we might become like them – perhaps because we feel that we have invested our best qualities in them, very much like Ludwig Feuerbach’s aetiology of the Judaeo-Christian God. Marxists gave the term ‘alienation’ a popular spin to capture this sentiment in the 1960s.

Nowadays, of course, matters have been complicated by the prospect of human and machine identities merging together. This goes beyond simply implanting silicon chips in one’s brain. Rather, it involves the complex migration and enhancement of human selves in cyberspace. (Sherry Turkle has been the premier ethnographer of this process in children.) That such developments are even possible points to a prospect that Bostrom refuses to consider, namely, that to be ‘human’ is to be only contingently located in the body of Homo sapiens. The name of our species – Homo sapiens – already gives away the game, because our distinguishing feature (so claimed Linnaeus) had nothing to do with our physical morphology but with the character of our minds. And might not such a ‘sapient’ mind better exist somewhere other than in the upright ape from which we have descended?

The prospects for transhumanism hang on the answer to this question. Aubrey de Grey’s indefinite life extension project is about Homo sapiens in its normal biological form. In contrast, Ray Kurzweil’s ‘singularity’ talk of uploading our consciousness into indefinitely powerful computers suggests a complete abandonment of the ordinary human body. The lesson taught by Langdon Winner’s historical account is that our primary existential risk does not come from alien annihilation but from what social psychologists call ‘adaptive preference formation’. In other words, we come to want the sort of world that we think is most likely, simply because that offers us the greatest sense of security. Thus, the history of technology is full of cases in which humans have radically changed their lives to adjust to an innovation whose benefits they reckon outweigh the costs, even when both remain fundamentally incalculable. Success in the face of such ‘existential risk’ is then largely a matter of whether people – perhaps of the following generation – have made the value shifts necessary to see the changes as positive overall. But of course, it does not follow that those who fail to survive the transition or have acquired their values before this transition would draw a similar conclusion.

If the controversy over genetically modified organisms (GMOs) tells us something indisputable, it is this: GMO food products from corporations like Monsanto are suspected of endangering health. On the other hand, an individual’s right to genetically modify and even synthesize entire organisms as part of his dietary or medical regimen could someday be a human right.
The suspicion that agri-giant companies do harm by designing crops is legitimate, even if evidence of harmful GMOs is scant or absent. Based on their own priorities and actions, we should have no doubt that self-interested corporations disregard the rights and wellbeing of local producers and consumers. This makes agri-giants producing GMOs harmful and untrustworthy, regardless of whether individual GMO products are actually harmful.
Corporate interference in government of the sort opposed by the Occupy Movement is also connected with the GMO controversy, as the US government is accused of going to great lengths to protect “stakeholders” like Monsanto via the law. This makes the GMO controversy more of a business and political issue rather than a scientific one, as I argued in an essay published at the Institute for Ethics and Emerging Technologies (IEET). Attacks on science and scientists themselves over the GMO controversy are not justified, as the problem lies solely with a tiny handful of businessmen and corrupt politicians.
An emerging area that threatens to become as controversial as GMOs, if the American corporate stranglehold on innovation is allowed to shape its future, is synthetic biology. In his 2014 book, Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life, top synthetic biologist J. Craig Venter offers powerful words supporting a future shaped by ubiquitous synthetic biology in our lives:

“I can imagine designing simple animal forms that provide novel sources of nutrients and pharmaceuticals, customizing human stem cells to regenerate a damaged, old, or sick body. There will also be new ways to enhance the human body as well, such as boosting intelligence, adapting it to new environments such as radiation levels encountered in space, rejuvenating worn-out muscles, and so on”

In his own words, Venter’s vision is no less than “a new phase of evolution” for humanity. It offers what Venter calls the “real prize”: a family of designer bacteria “tailored to deal with pollution or to absorb excess carbon dioxide or even meet future fuel needs”. Greater than this, the existing tools of synthetic biology are transhumanist in nature because they create limitless means for humans to enhance themselves to deal with harsher environments and extend their lifespans.
While there should be little public harm in the eventual ubiquity of the technologies and information required to construct synthetic life, the problems of corporate oligopoly and political lobbying are threatening synthetic biology’s future as much as they threaten other facets of human progress. The best chance for an outcome that will be maximally beneficial for the world relies on synthetic biology taking a radically different direction to GM. That alternative direction, of course, is an open source future for synthetic biology, as called for by Canadian futurist Andrew Hessel and others.
Calling himself a “catalyst for open-source synthetic biology”, Hessel is one of the growing number of experts who reject biotechnology’s excessive use of patents. Nature notes that his Pink Army Cooperative venture relies instead on “freely available software and biological parts that could be combined in innovative ways to create individualized cancer treatments — without the need for massive upfront investments or a thicket of protective patents”.
While offering some support to the necessity of patents, J. Craig Venter more importantly praises the annual International Genetically Engineered Machine (iGEM) competition in his book as a means of encouraging innovation. He specifically names the Registry of Standard Biological Parts, an open source library from which to obtain BioBricks, and describes this as instrumental for synthetic biology innovation. Likened to bricks of Lego that can be snapped together with ease by the builder, BioBricks are prepared standard pieces of genetic code, with which living cells can be newly equipped and operated as microscopic chemical factories. This has enabled students and small companies to reprogram life itself, taking part in new discoveries and innovations that would have otherwise been impossible without the direct supervision of the world’s best-trained teams of biologists.
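The Lego-like composability of BioBricks described above can be sketched as a toy data model. The part names and DNA snippets below are invented for illustration only; real parts follow the Registry’s assembly standards and are far longer:

```python
# Toy model of snapping together standardized genetic parts, Lego-style.
# Part names and sequences are invented for illustration; real BioBricks
# come from the Registry of Standard Biological Parts.

class Part:
    def __init__(self, name, sequence):
        self.name = name
        self.sequence = sequence

def assemble(parts):
    """Concatenate standardized parts into a single construct."""
    name = "+".join(p.name for p in parts)
    sequence = "".join(p.sequence for p in parts)
    return Part(name, sequence)

# A hypothetical promoter -> ribosome binding site -> coding sequence chain.
promoter = Part("promoter", "TTGACA")
rbs = Part("rbs", "AGGAGG")
gene = Part("gfp", "ATGGTG")

construct = assemble([promoter, rbs, gene])
print(construct.name)       # promoter+rbs+gfp
print(construct.sequence)   # TTGACAAGGAGGATGGTG
```

The point of the standard is exactly this interchangeability: because every part exposes the same "interface", a student can recombine parts without knowing how each was originally built.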
There is a similar movement towards popular synthetic biology by the name of biohacking, promoted by such experts as Ellen Jorgensen. This compellingly matches the calls for greater autonomy for individuals and small companies in medicine and human enhancement. Unfortunately, despite their potential to greatly empower consumers and farmers, such developments have not yet found resonance with anti-GMO campaigners, whose outright rejection of biotechnology has been described as anti-science and “bio-luddite” by techno-progressives. It is for this reason that emphasizing the excellent potential of biotechnology for feeding and fuelling a world plagued by dwindling resources is important, and a focus on the ills of big business rather than imagined spectres emerging from science itself is vital.
The concerns of anti-GMO activists would be addressed better by offering support to an alternative in the form of “do-it-yourself” biotechnology, rather than rejecting sciences and industries that are already destined to be a fundamental part of humanity’s future. What needs to be made is a case for popular technology, in hope that we can reject the portrayal of all advanced technology as an ally of powerful states and corporations and instead unlock its future as a means of liberation from global exploitation and scarcity.
While there are strong arguments that current leading biotechnology companies feel more secure and perform better when they retain rigidly enforced intellectual property rights, Andrew Hessel rightly points out that the open source future is less about economic facts and figures than about culture. The truth is that there is a massive cultural transition taking place. We can see a growing hostility to patents, and an increasing popular enthusiasm for open source innovation, most promisingly among today’s internet-borne youth.
In describing a cultural transition, Hessel is acknowledging the importance of the emerging body of transnational youth whose only ideology is the claim that information wants to be free, and we find the same culture reflected in the values of organizations like WikiLeaks. Affecting every facet of science and technology, the elite of today’s youth are crying out for a more open, democratic, transparent and consumer-led future at every level.

By Harry J. Bentham - More articles by Harry J. Bentham

Originally published at h+ Magazine on 21 August 2014

Written by Singularity Hub
In his latest video, host of National Geographic’s Brain Games and techno-poet, Jason Silva, explores the universe’s tendency to self-organize. Biology, he says, seems to have agency and directionality toward greater complexity, and humans are the peak.

“It’s like human beings seem to be the cutting edge,” Silva says. “The evolutionary pinnacle of self-awareness becoming aware of its becoming.”

Read more

Dylan Love — Business Insider

“Today there’s no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you’re going to see that the top species will no longer be humans, but machines.”

These are the words of Louis Del Monte, physicist, entrepreneur, and author of “The Artificial Intelligence Revolution.” Del Monte spoke to us over the phone about his thoughts surrounding artificial intelligence and the singularity, an indeterminate point in the future when machine intelligence will outmatch not only your own intelligence, but the world’s combined human intelligence too.

Read more

By Clément Vidal — Vrije Universiteit Brussel, Belgium.

I am happy to inform you that I just published a book which deals at length with our cosmological future. I made a short book trailer introducing it, and the book has been mentioned in the Huffington Post and H+ Magazine.

About the book:
In this fascinating journey to the edge of science, Vidal takes on big philosophical questions: Does our universe have a beginning and an end, or is it cyclic? Are we alone in the universe? What is the role of intelligent life, if any, in cosmic evolution? Grounded in science and committed to philosophical rigor, this book presents an evolutionary worldview where the rise of intelligent life is not an accident, but may well be the key to unlocking the universe’s deepest mysteries. Vidal shows how the fine-tuning controversy can be advanced with computer simulations. He also explores whether natural or artificial selection could hold on a cosmic scale. In perhaps his boldest hypothesis, he argues that signs of advanced extraterrestrial civilizations are already present in our astrophysical data. His conclusions invite us to see the meaning of life, evolution, and intelligence from a novel cosmological framework that should stir debate for years to come.
About the author:
Dr. Clément Vidal is a philosopher with a background in logic and cognitive sciences. He is co-director of the ‘Evo Devo Universe’ community and founder of the ‘High Energy Astrobiology’ prize. To satisfy his intellectual curiosity when facing the big questions, he brings together many areas of knowledge such as cosmology, physics, astrobiology, complexity science, evolutionary theory and philosophy of science.
http://clement.vidal.philosophons.com

You can get 20% off with the discount code ‘Vidal2014’ (valid until 31st July)!

Uploading the content of one’s mind, including one’s personality, memories and emotions, into a computer may one day be possible, but it won’t transfer our biological consciousness and won’t make us immortal.

Uploading one’s mind into a computer, a concept popularized by the 2014 movie Transcendence starring Johnny Depp, is likely to become at least partially possible, but won’t lead to immortality. Major objections have been raised regarding the feasibility of mind uploading. Even if we could surpass every technical obstacle and successfully copy the totality of one’s mind, emotions, memories, personality and intellect into a machine, that would be just that: a copy, which itself can be copied again and again on various computers.

THE DILEMMA OF SPLIT CONSCIOUSNESS

Neuroscientists have not yet been able to explain what consciousness is, or how it works at a neurological level. Once they do, it might be possible to reproduce consciousness in artificial intelligence. If that proves feasible, then it should in theory be possible to replicate our consciousness on computers too. Or is that jumping to conclusions?

Once all the connections in the brain are mapped and we are able to reproduce all neural connections electronically, we will also be able to run a faithful simulation of our brain on a computer. However, even if that simulation happens to have a consciousness of its own, it will never be quite like our own biological consciousness. For example, without hormones we couldn’t feel emotions like love, jealousy or attachment. (See: Could a machine or an AI ever feel human-like emotions?)

Some people think that mind uploading necessarily requires leaving one’s biological body, but there is no consensus about that. Uploading means copying. When a file is uploaded to the Internet, it doesn’t get deleted at the source. It’s just a copy.
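This copy-not-move point can be made concrete with a deliberately simple sketch: duplicating a data structure leaves the original intact and independent. The “mind” dictionary here is, of course, just a stand-in for the argument about copy semantics, not a claim about how minds are actually encoded:

```python
import copy

# A stand-in "mind": the point is copy semantics, not neuroscience.
original = {"memories": ["first day of school"], "personality": "curious"}

uploaded = copy.deepcopy(original)   # uploading == copying

# The source is not deleted, and the two now diverge independently.
uploaded["memories"].append("waking up inside a computer")

print(original["memories"])   # ['first day of school']
print(uploaded is original)   # False
```

The copy starts out identical, but it is a distinct object with its own subsequent history, which is exactly the article’s point about a copied mind.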

The best analogy to understand that is cloning. Identical twins are an example of human clones that already live among us. Identical twins share the same DNA, yet nobody would argue that they also share a single consciousness.

It will be easy to test that hypothesis once the technology becomes available. Unlike Johnny Depp’s character in Transcendence, we don’t have to die to upload our minds to one or several computers. Doing so won’t deprive us of our biological consciousness, and it won’t affect who we are. It will just be like having a mental clone of ourselves; we will never feel as if we are inside the computer.

If the conscious self doesn’t leave the biological body (i.e. “die”) when mind and consciousness are transferred, it would basically mean that that individual would feel present in two places at the same time: in the biological body and in the computer. That is problematic. It’s hard to conceive how that could be possible, since the very essence of consciousness is a feeling of indivisible unity.

If we want to avoid this problem of dividing the sense of self, we must indeed find a way to transfer the consciousness from the body to the computer. But this would assume that consciousness is merely data that can be transferred. We don’t know that yet. It could be tied to our neurons, or to very specific atoms within some neurons. If that were the case, destroying the neurons would destroy the consciousness.

Even assuming that we found a way to transfer consciousness from the brain to a computer, how could we prevent that consciousness from being copied to other computers, recreating the philosophical problem of splitting the self? That would actually be much worse, since a computerized consciousness could be copied endlessly. How would you then feel a sense of unified consciousness?

Since mind uploading won’t preserve our self-awareness, the feeling that we are ourselves and not someone else, it won’t lead to immortality. We’ll still be bound to our bodies, though life expectancy for transhumanists and cybernetic humans will be considerably extended.

IMMORTALITY ISN’T THE SAME AS EXTENDED LONGEVITY

Immortality is a confusing term since it implies living forever, which is impossible since nothing is eternal in our universe, not even atoms or quarks. Living for billions of years, while highly improbable in itself, wouldn’t even be close to immortality. It may seem like a very large number compared to our short existence, but compared to eternity (infinite time), it isn’t much longer than 100 years.

Even machines aren’t much longer-lived than we are. Modern computers actually tend to have much shorter life spans than humans: a 10-year-old computer is very old indeed, as well as slower and more prone to technical problems than a new one. So why would we think that transferring our minds to computers would grant us greatly extended longevity?

Even if we could transfer all our mind’s data and consciousness an unlimited number of times onto new machines, that won’t prevent the machine currently hosting us from being destroyed by viruses, bugs, mechanical failures, or the outright physical destruction of its hardware, whether intentional, accidental or due to natural catastrophes.

In the meantime, science will slow down, stop and even reverse the aging process, enabling us to live healthily for a very long time by today’s standards. This is known as negligible senescence. Nevertheless, cybernetic humans with robotic limbs and respirocytes will still die in accidents or wars. At best we could hope to live for several hundred or thousand years, assuming that nothing kills us first.

As a result, there won’t be that much difference between living inside a biological body and inside a machine. The risks will be comparable. Human longevity will in all likelihood increase dramatically, but there simply is no such thing as immortality.

CONCLUSION

Artificial intelligence could easily replicate most of the processes, thoughts, emotions, sensations and memories of the human brain — with some reservations about feelings and emotions that reside outside the brain, in the biological body. An AI might also have a consciousness of its own. Backing up the content of one’s mind will most probably be possible one day. However, there is no evidence that consciousness or self-awareness is merely information that can be transferred, since consciousness cannot be divided into two or more parts.

Consciousness is most likely tied to neurons in a certain part of the brain (which may well include the thalamus). These neurons are maintained throughout life, from birth to death, without being regenerated like other cells in the body, which explains the experienced feeling of continuity.

There is not the slightest scientific evidence of a duality between body and consciousness, or in other words that consciousness could be equated with an immaterial soul. In the absence of such duality, a person’s original consciousness would cease to exist with the destruction of the neurons in his/her brain responsible for consciousness. Unless one believes in an immaterial, immortal soul, the death of one’s brain automatically results in the extinction of consciousness. While a new consciousness could be imitated to perfection inside a machine, it would merely be a clone of the person’s consciousness, not an actual transfer, meaning that that feeling of self would not be preserved.

———

This article was originally published on Life 2.0.

Computers will soon be able to simulate the functioning of a human brain. In the near future, artificial superintelligence could become vastly more intellectually capable and versatile than humans. But could machines ever truly experience the whole range of human feelings and emotions, or are there technical limitations?

In a few decades, intelligent and sentient humanoid robots will wander the streets alongside humans, work with humans, socialize with humans, and perhaps one day will be considered individuals in their own right. Research in artificial intelligence (AI) suggests that intelligent machines will eventually be able to see, hear, smell, sense, move, think, create and speak at least as well as humans. They will feel emotions of their own and probably one day also become self-aware.

There may not be any reason per se to want sentient robots to experience exactly all the emotions and feelings of a human being, but it may be interesting to explore the fundamental differences in the way humans and robots can sense, perceive and behave. Tiny genetic variations between people can result in major discrepancies in the way each of us thinks, feels and experiences the world. If we appear so diverse despite the fact that all humans are on average 99.5% identical genetically, even across racial groups, how could we possibly expect sentient robots to feel exactly the same way as biological humans? There could be striking similarities between us and robots, but also drastic divergences on some levels. This is what we will investigate below.

MERE COMPUTER OR MULTI-SENSORY ROBOT?

Computers are undergoing a profound mutation at the moment. Neuromorphic chips have been designed around the way the human brain works, modelling its massively parallel neurological processes with artificial neural networks. This will enable computers to process sensory information like vision and audition much more like animals do. Considerable research is currently devoted to creating a functional computer simulation of the whole human brain. The Human Brain Project is aiming to achieve this by 2016. Does that mean that computers will finally experience feelings and emotions like us? Surely if an AI can simulate a whole human brain, then it becomes a sort of virtual human, doesn't it? Not quite. Here is why.
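As a loose illustration of what these neuromorphic designs model, here is a minimal artificial neuron in Python. This is a toy sketch; the weights and inputs are invented for illustration and bear no relation to any real chip.

```python
import math

def sigmoid(x):
    # Squashes any input into the (0, 1) range, loosely analogous
    # to a biological neuron's firing rate saturating.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # The basic unit of an artificial neural network: a weighted
    # sum of inputs plus a bias, passed through an activation function.
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(activation)

# Hypothetical example: two sensory inputs feeding a single neuron.
output = neuron([0.5, 0.8], [1.2, -0.4], 0.1)
print(round(output, 3))  # a firing strength between 0 and 1
```

Real neuromorphic hardware wires vast numbers of such simple units in parallel; the brain-like behaviour comes from the massive parallelism, not from any complexity in the individual unit.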

There is an important distinction to be made from the outset between an AI residing solely inside a computer with no sensors at all, and an AI that is equipped with a robotic body and sensors. A computer alone would have a far more limited range of emotions, as it wouldn't be able to physically interact with its environment. The more sensory feedback a machine can receive, the wider the range of feelings and emotions it will be able to experience. But, as we will see, there will always be fundamental differences between the type of sensory feedback that a biological body and a machine can receive.

Here is an illustration of how limited an AI is emotionally without a sensory body of its own. In animals, fear, anxiety and phobias are evolutionary defense mechanisms aimed at raising our vigilance in the face of danger. That is because our bodies work with biochemical signals, involving hormones and neurotransmitters sent by the brain, to prompt a physical action when our senses perceive danger. Computers don't work that way. Without sensors feeding them information about their environment, computers wouldn't be able to react emotionally.

Even if a computer could remotely control machines like robots (e.g. through the Internet) that are endowed with sensory perception, the computer itself wouldn't necessarily care if the robot (a discrete entity) were harmed or destroyed, since that would have no physical consequence for the AI itself. An AI could fear for its own well-being and existence, but how is it supposed to know that it is in danger of being damaged or destroyed? It would be like a person who is blind, deaf and whose somatosensory cortex has been destroyed. Without feeling anything about the outside world, how could it perceive danger? That problem disappears once the AI is given at least one sense, such as a camera to see what is happening around it. Now if someone comes toward the computer with a big hammer, it will be able to fear for its existence!
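The appraisal loop just described (no senses, no fear; at least one sense, and danger can register) can be sketched in a few lines. This is purely a hypothetical toy model, not a claim about how any real robot works; all the percept strings are invented.

```python
def appraise(percepts):
    """Toy emotional appraisal: a list of perceived events goes in,
    a crude emotional state comes out."""
    if not percepts:
        # No sensors at all: the AI cannot know it is in danger,
        # so no fear response is even possible.
        return "no emotional reaction possible"
    dangers = {"hammer approaching", "fire", "power cut imminent"}
    if any(p in dangers for p in percepts):
        return "fear"
    return "calm"

print(appraise([]))                      # blind and deaf computer
print(appraise(["hammer approaching"]))  # camera spots the threat
```

The point of the sketch is only that emotion here is downstream of perception: remove the sensor input and the whole emotional branch becomes unreachable.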

WHAT CAN MACHINES FEEL?

In theory, any neural process can be reproduced digitally in a computer, even though the brain is mostly analog. This is hardly a concern, as Ray Kurzweil explained in his book How to Create a Mind. However, it does not always make sense to try to replicate everything a human being feels in a machine.

While sensory feelings like heat, cold or pain could easily be felt from the environment if the machine is equipped with the appropriate sensors, this is not the case for other physiological feelings like thirst, hunger and sleepiness. These feelings alert us to the state of our body and are normally triggered by hormones such as vasopressin, ghrelin or melatonin. Since machines have neither a digestive system nor hormones, it would be downright nonsensical to try to emulate such feelings.

Emotions do not arise for no reason. They are either a reaction to an external stimulus, or a spontaneous expression of an internal thought process. For example, we can be happy or joyful because we received a present, got a promotion or won the lottery. These are external causes that trigger the emotions inside our brain. The same emotion can be achieved as the result of an internal thought process. If I manage to find a solution to a complicated mathematical problem, that could make me happy too, even if nobody asked me to solve it and it does not have any concrete application in my life. It is a purely intellectual problem with no external cause, but solving it confers satisfaction. The emotion could be said to have arisen spontaneously from an internalized thought process in the neocortex. In other words, solving the problem in the neocortex causes the emotion in another part of the brain.

An intelligent computer could also prompt some emotions based on its own thought processes, just like the joy or satisfaction experienced by solving a mathematical problem. In fact, as long as it is allowed to communicate with the outside world, there is no major obstacle to a computer feeling true emotions of its own, such as joy, sadness, surprise, disappointment, fear, anger or resentment. These are all emotions that can be produced by interactions through language (e.g. reading, online chatting) with no need for physiological feedback.

Now let's think about how and why humans experience a sense of well-being and peace of mind, two emotions far more complex than joy or anger. Both occur when our physiological needs are met: when we are well fed, rested, feel safe, don't feel sick, and are on the right track to pass on our genes and keep our offspring secure. These are compound emotions that require other basic emotions as well as physiological factors. A machine that has no physiological needs, cannot get sick, and does not need to worry about passing on its genes to posterity will have no reason to feel the complex emotion of 'well-being' the way humans do. For a machine, well-being may exist, but in a much more simplified form.

Just like machines cannot reasonably feel hunger because they do not eat, replicating emotions on machines with no biological body, no hormones, and no physiological needs can be tricky. This is the case with social emotions like attachment, sexual emotions like love, and emotions originating from evolutionary mechanisms set in the (epi)genome. This is what we will explore in more detail below.

FEELINGS ROOTED IN THE SENSES AND THE VAGUS NERVE

What really distinguishes intelligent machines from humans and animals is that the former do not have a biological body. This is essentially why they could not experience the same range of feelings and emotions as we do, since many of them inform us about the state of our biological body.

An intelligent robot with sensors could easily see, hear, detect smells, feel an object's texture, shape and consistency, feel pleasure and pain, heat and cold, and the like. But what about the sense of taste? Or the effects of alcohol on the mind? Since machines do not eat, drink or digest, they wouldn't be able to experience these things. A robot designed to socialize with humans would be unable to understand and share the feelings of gastronomic pleasure or inebriation with humans. It could have a theoretical knowledge of them, but not first-hand knowledge from an actually felt experience.

But the biggest obstacle to simulating physical feelings in a machine comes from the vagus nerve, which controls such varied things as digestion, ‘gut feelings’, heart rate and sweating. When we are scared or disgusted, we feel it in our guts. When we are in love we feel butterflies in our stomach. That’s because of the way our nervous system is designed. Quite a few emotions are felt through the vagus nerve connecting the brain to the heart and digestive system, so that our body can prepare to court a mate, fight an enemy or escape in the face of danger, by shutting down digestion, raising adrenaline and increasing heart rate. Feeling disgusted can help us vomit something that we have swallowed and shouldn’t have.

Strong emotions can affect our microbiome, the trillions of gut bacteria that help us digest food and that secrete 90% of the serotonin and 50% of the dopamine used by our brain. The thousands of species of bacteria living in our intestines can vary quickly based on our diet, but it has been demonstrated that even emotions like stress, anxiety, depression and love can strongly affect the composition of our microbiome. This is very important because of the essential role that gut bacteria play in maintaining our brain functions. The relationship between gut and brain works both ways. The presence or absence of some gut bacteria has been linked to autism, obsessive-compulsive disorder and several other psychological conditions. What we eat actually influences the way we think too, by changing our gut flora, and therefore also the production of neurotransmitters. Even our intuition is linked to the vagus nerve, hence the expression 'gut feeling'.

Without a digestive system, a vagus nerve and a microbiome, robots would miss a big part of our emotional and psychological experience. Our nutrition and microbiome influence our brain far more than most people suspect. They are one of the reasons why our emotions and behaviour are so variable over time (in addition to maturity; see below).

SICKNESS, FATIGUE, SLEEP AND DREAMS

Another key difference between machines and humans (or animals) is that our emotions and thoughts can be severely affected by our health, physical condition and fatigue. Irritability is often an expression of mental or physical exhaustion caused by a lack of sleep or nutrients, or by a situation that puts excessive stress on our mental faculties and increases our need for sleep and nutrients. We could argue that computers may overheat if used too intensively, and may also need to rest. That is not entirely true if the hardware is properly designed, with a super-efficient cooling system and a steady power supply. New types of nanochips may not produce enough heat to have any overheating problem at all.

Most importantly, machines don't feel sick. I don't mean just being weakened by a disease or feeling pain, but actually feeling sick: indigestion, nausea (motion sickness, sea sickness), or feeling under the weather before tangible symptoms appear. These aren't enviable feelings, of course, but the point is that machines cannot experience them without a biological body and an immune system.

When tired or sick, not only do we need to rest to recover our mental faculties and stabilize our emotions, we also need to dream. Dreams are used to clear our short-term memory cache (in the hippocampus), to replenish neurotransmitters, to consolidate memories (by myelinating synapses during REM sleep), and to let go of the day's emotions by letting our neurons fire freely. Dreams also allow a different kind of thinking, free of cultural or professional taboos, which increases our creativity. This is why we often come up with great ideas or solutions to our problems during sleep, and notably during the lucid dreaming phase.

Computers cannot dream and wouldn't need to, because they aren't biological brains with neurotransmitters, stressed-out neurons and synapses that need to be myelinated. Without dreams, however, an AI would lack an essential component of feeling like a biological human.

EMOTIONS ROOTED IN SEXUALITY

Being in love is an emotion that brings a male and a female individual (save for some exceptions) of the same species together in order to reproduce and raise one's offspring until they grow up. Sexual love is caused by hormones, but is not merely the product of hormonal changes in our brain. It involves changes in the biochemistry of our whole body and can even lead to important physiological effects (e.g. on morphology) and long-term behavioural changes. Clearly sexual love is not 'just an emotion', and it is not a purely neurological process either. Replicating the neurological expression of love in an AI would not simulate the whole emotion of love, but only one of its facets.

Apart from the issue of reproducing the physiological expression of love in a machine, there is also the question of causation. There is a huge difference between an artificially implanted/simulated emotion and one that is capable of arising by itself from environmental causes. People can fall in love for a number of reasons, such as physical attraction and mental attraction (shared interests, values, tastes, etc.), but one of the most important in the animal world is genetic compatibility with the prospective mate. Individuals who possess very different immune systems (HLA genes), for instance, tend to be more strongly attracted to each other and feel more 'chemistry'. We could imagine that a robot with a sense of beauty and values could appreciate the looks and morals of another robot or a human being, and even feel attracted (platonically). Yet a machine couldn't experience the 'chemistry' of sexual love because it lacks the hormones, genes and other biochemical markers required for sexual reproduction. In other words, robots could have friends but not lovers, and that makes sense.

A substantial part of the range of human emotions and behaviours is anchored in sexuality. Jealousy is another good example. Jealousy is intricately linked to love. It is the fear of losing one's loved one to a sexual rival, an innate emotion whose only purpose is to maximize our chances of passing on our genes through sexual reproduction by warding off competitors. Why would a machine, which does not need to reproduce sexually, need to feel that?

One could wonder what difference it makes whether a robot can feel love or not. Robots don't need to reproduce sexually, so who cares? If we need intelligent robots to work with humans in society, for example by helping to take care of the young, the sick and the elderly, they could still function as social individuals without feeling sexual love, couldn't they? In fact, you may not want a humanoid robot to become a sexual predator, especially if it works with kids! Not so fast. Without a basic human emotion like love, an AI simply cannot think, plan, prioritize and behave the same way humans do. Its way of thinking, planning and prioritizing would rely on completely different motivations. For example, young human adults spend considerable time and energy searching for a suitable mate in order to reproduce.

A robot endowed with an AI of equal or greater than human intelligence, but lacking the need for sexual reproduction, would behave, plan and prioritize its existence very differently than humans. That is not necessarily a bad thing, for a lot of conflicts in human society are caused by sex. But it also means that it could become harder for humans to predict the behaviour and motivations of autonomous robots, which could be a problem once they become more intelligent than us in a few decades. The bottom line is that by lacking just one essential human emotion (let alone many), intelligent robots could have very divergent behaviours, priorities and morals from humans. They could be different in a good way, but we can't know that for sure at present, since they haven't been built yet.

TEMPERAMENT AND SOCIABILITY

Humans are social animals. They typically, though not always (e.g. some types of autism), seek to belong to a group, make friends, share feelings and experiences with others, gossip, seek approval or respect from others, and so on. Interestingly, a person’s sociability depends on a variety of factors not found in machines, including gender, age, level of confidence, health, well being, genetic predispositions, and hormonal variations.

We could program an AI to mimic a certain type of human sociability, but it wouldn't naturally evolve over time with experience and environmental factors (food, heat, diseases, endocrine disruptors, microbiome). Knowledge can be learned, but not spontaneous reactions to environmental factors.

Humans tend to be more sociable when the weather is hot and sunny, when they drink alcohol and when they are in good health. A machine has no need to react like that, unless once again we intentionally program it to resemble humans. But even then it couldn’t feel everything we feel as it doesn’t eat, doesn’t have gut bacteria, doesn’t get sick, and doesn’t have sex.

MATERNAL WARMTH AND FEELING OF SAFETY IN MAMMALS

Humans, like all mammals, have an innate need for maternal warmth in childhood. In Harry Harlow's classic experiments, newborn rhesus monkeys were taken away from their biological mother and placed in a cage with two surrogate mothers. One of them was warm, fluffy and cosy, but did not provide milk. The other was hard, cold and uncomfortable, but provided milk. The infant monkeys consistently chose the cosy one, demonstrating that the need for comfort and safety trumps nutrition in infant mammals. Likewise, humans deprived of maternal (or paternal) warmth and care as babies almost always experience psychological problems growing up.

In addition to childhood care, humans need the feeling of safety and cosiness provided by the shelter of a home throughout life. Not all animals are like that, but even as hunter-gatherers or pastoralist nomads, all Homo sapiens need a shelter, be it a tent, a hut or a cave.

How could we expect that kind of reaction and behaviour in a machine that does not need to grow from babyhood to adulthood, cannot know what it is to have parents or siblings, does not need to feel reassured by maternal warmth, and has no biological compulsion to seek shelter? Without those feelings, it is extremely doubtful that a machine could ever truly understand and empathize completely with humans.

These limitations mean that it may be useless to try to create intelligent, sentient and self-aware robots that truly think, feel and behave like humans. Reproducing our intellect, language and senses (except taste) is the easy part. Then comes consciousness, which is harder but still feasible. But since our emotions and feelings are so deeply rooted in our biological body and its interaction with its environment, the only way to reproduce them would be to reproduce a biological body for the AI. In other words, we are no longer talking about creating a machine, but about genetically engineering a new living being, or using neural implants in existing humans.

MACHINES DON’T MATURE

The way humans experience emotions evolves dramatically from birth to adulthood. Children are typically hyperactive and excitable and are prone to making rash decisions on impulse. They cry easily and have difficulty containing and controlling their emotions and feelings. As we mature, we learn, more or less successfully, to master our emotions. Controlling one's emotions actually gets easier over time because with age the number of neurons in the brain decreases, emotions get blunter, and vital impulses grow weaker.

The expression of one’s emotions is heavily regulated by culture and taboos. That’s why speakers of Romance languages will generally express their feelings and affection more freely than, say, Japanese or Finnish people. Would intelligent robots also follow one specific human culture, or create a culture on their own ?

Sex hormones also influence the way we feel and express emotions. Male testosterone makes people less prone to emotional display, more rational and cold, but also more aggressive. Female estrogens increase empathy, affection and the maternal instincts of protection and care. A good example of the role of biology in emotions is the way women's hormonal cycles (and the resulting menstruations) affect their emotions. One of the reasons that children process emotions differently than adults is that they have lower levels of sex hormones. As people age, hormonal levels decrease (not just sex hormones), making us more mellow.

Machines don’t mature emotionally, do not go through puberty, do not have hormonal cycles, nor undergo hormonal change based on their age, diet and environment. Artificial intelligence could learn from experience and mature intellectually, but not mature emotionally like a child becoming an adult. This is a vital difference that shouldn’t be underestimated. Program an AI to have the emotional maturity of a 5-year old and it will never grow up. Children (especially boys) cannot really understand the reason for their parents’ anxiety toward them until they grow up and have children of their own, because they lack the maturity and sexual hormones associated with parenthood.

We could always run software emulating changes in AI maturity over time, but they would not be the result of experiences and interactions with the environment. It may not be useful to create robots that mature like us, but the argument debated here is whether machines could ever feel exactly like us or not. This argument is not purely rhetorical. Some transhumanists wish to be able one day to upload their mind onto a computer and transfer their consciousness (which may not be possible for a number of reasons). Assuming that it becomes possible, what if a child or teenager decides to upload his or her mind and lead a new robotic existence? One obvious problem is that this person would never fulfill his/her potential for emotional maturity.

The loss of our biological body would also deprive us of our capacity to experience feelings and emotions bound to our physiology. We may be able to keep those already stored in our memory, but we may never dream, enjoy food, or fall in love again.

SUMMARY & CONCLUSION

What emotions could machines experience?

Even though many human emotions are beyond the range of machines due to their non-biological nature, some emotions could very well be felt by an artificial intelligence. These include, among others:

  • Joy, satisfaction, contentment
  • Disappointment, sadness
  • Surprise
  • Fear, anger, resentment
  • Friendship
  • Appreciation for beauty, art, values, morals, etc.

What emotions and feelings would machines not be able to experience?

The following emotions and feelings could not be wholly or faithfully experienced by an AI, even with a sensing robotic body, beyond mere implanted simulation.

  • Hunger, thirst, drunkenness, gastronomical enjoyment
  • Various feelings of sickness, such as nausea, indigestion, motion sickness, sea sickness, etc.
  • Sexual love, attachment, jealousy
  • Maternal/paternal instincts towards one’s own offspring
  • Fatigue, sleepiness, irritability
  • Dreams and associated creativity

In addition, machine emotions would run up against the following issues, which would prevent them from feeling and experiencing the world truly like humans.

  • Machines wouldn't mature emotionally with age.
  • Machines don't grow up and don't go through puberty to pass from a relatively asexual childhood stage to a sexual adult stage.
  • Machines cannot fall in love (with the associated emotions, behaviours and motivations), as they aren't sexual beings.
  • Being asexual, machines are genderless and therefore lack the associated behaviours and emotions caused by male and female hormones.
  • Machines wouldn't experience gut feelings (fear, love, intuition).
  • Machine emotions, intellect, psychology and sociability couldn't vary with nutrition and microbiome, hormonal changes, or environmental factors like the weather.

It is not completely impossible to bypass these obstacles, but doing so would require creating a humanoid machine that not only possesses human-like intellectual faculties, but also has an artificial body that can eat and digest, with a digestive system connected to the central microprocessor in the same way our vagus nerve is connected to our brain. That robot would also need a gender and the capacity to have sex and feel attracted to other humanoid robots or humans, based on predefined programming that serves as an alternative to a biological genome, creating a sense of 'sexual chemistry' when matched with an individual with a compatible 'genome'. It would necessitate artificial hormones to regulate its hunger, thirst, sexual appetite, homeostasis, and so on.
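The paragraph above amounts to a design specification. Purely as an illustration, it could be collected into a data structure like the following; every field and method name here is invented, and nothing reflects an existing system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ArtificialPhysiology:
    """Hypothetical checklist for a machine meant to feel like a human;
    each field mirrors one biological prerequisite discussed in the text."""
    can_eat_and_digest: bool = False            # digestive system wired to the CPU
    has_vagus_analogue: bool = False            # gut-to-processor feedback channel
    gender: Optional[str] = None                # prerequisite for sexual emotions
    genome_for_chemistry: Optional[str] = None  # stand-in for HLA-style matching
    artificial_hormones: List[str] = field(default_factory=list)

    def could_feel_chemistry(self, other: "ArtificialPhysiology") -> bool:
        # Toy rule echoing the immune-compatibility idea: both partners
        # need a 'genome', and the two genomes must differ.
        return (self.genome_for_chemistry is not None
                and other.genome_for_chemistry is not None
                and self.genome_for_chemistry != other.genome_for_chemistry)
```

On this sketch, two robots with differing 'genomes' could register mutual chemistry, while a plain computer (all defaults) could not, which is the essay's point in miniature.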

Although we lack the technology and the in-depth knowledge of the human body needed to consider such an ambitious project any time soon, it could eventually become possible one day. One could wonder whether such a magnificent machine could still be called a machine, or rather an artificially made living being. I personally don't think it should be called a machine at that point.

———

This article was originally published on Life 2.0.

- @ClubOfINFOTransEvolution: The Coming Age of Human Deconstruction (2014) is an alarmist book by Daniel Estulin, a commentator on the secretive Bilderberg Group who is well-liked by many – in particular on conspiracy theorist forums. Essentially, this should be regarded as conspiracy theory material. My refutations of it are too many to cram into this review, so I will mainly focus on what the book itself says.

Daniel Estulin connects disparate events and sources to depict an elaborate conspiracy. The main starting claim of the book is a link between the 2005 Bilderberg Conference and the 2006 document Strategic Trends 2007–2036 prepared by the British government (p. 1–12). Estulin claims that the latter report’s predictions betray “Promethean” plans that represent “designs by the Bilderberg Group”.
The book alleges that the economic pressure on the world today "is being done on purpose, absolutely on purpose. The reason is because our current corporate empire knows that 'progress of humanity' means their imminent demise". The "powers-that-be" destroy nation-states to maintain power, and "this is by design" (p. 13). Estulin decries international money flows and globalization, and promotes a "physical economy" instead. To make a long story short, he describes the apparatus of globalization, integration, etc. as a clash between the nation-state and a global oligarchy, and frames this as a classic battle between good and evil. "The ideas of a nation-state republic and progress" are intrinsically connected (p. 34), Estulin argues, putting forward his preference for the old Jacobin ideological script of the nineteenth century over modern discourses on integration and communication.
In his preference for the nation-state, Estulin attacks the WTO’s record on free trade, and makes criticisms that are provisionally valid. However, he confuses the tendency for weaker nations to be exploited through free trade with a conspiracy against the nation-state. The WTO’s commitment to what it calls free trade, a commitment to “One World, One Market”, reflects “anti-nation-state intent”, Estulin argues (p. 37–38).
Although it attaches too much agency to global "elites", Estulin's description of the way international trade in agriculture has been manipulated to disadvantage poor nations and advantage rich ones (p. 38–49) agrees with already powerful sociological theories of "free trade imperialism" and with the larger humanitarian message of the alter-globalization movement. Estulin quotes William Engdahl's The Seeds of Destruction at length to document the destructive local impacts of global agribusiness (p. 47–53).
Estulin interprets the spread of the pharmaceuticals industry as evidence of the elite seeking a docile and controlled population through "massive drugging of the population" and "controlled chaos", and even goes as far as to say that GMOs will poison everyone on the planet and finally kill 3 billion people indiscriminately (p. 63–68). More puzzling still, Estulin blames the Club of Rome thesis itself (which predicted the depletion of resources leading to economic collapse) for making an enemy of humanity and submitting a plan for no less than the deliberate depopulation of the Earth (p. 17–20).
Synthetic biology is not spared from criticism by Estulin. He immediately labels it as "founded on the ambition that one day it will be possible to design and manufacture a human being" (p. 69). For the record, nowhere in the field of synthetic biology has anyone actually advocated manufacturing human beings, nor does such an ambition coincide with the conspiracy theory about depopulating the Earth. Estulin further confuses science with pseudoscience, stating "genetics, as defined by the Rockefeller Foundation, would constitute the new face of eugenics" (p. 71). "Ultimately," Estulin writes, "this is about taking control of nature, redesigning it and rebuilding it to serve the whims of the controlling elite" (p. 72).
In further arguments against the perceived “elite”, Estulin demonizes space exploration, saying “the elite are planning, at least, a limited exodus from the Planet Earth. Why? What do they know that we don’t? Nuclear wars? Nanowars? Bacteriological wars?” (p. 123) Chapter 4, although titled “space exploration”, is dedicated to explaining the deadly potential of future security and defense technologies when used by regimes against their own people (p. 115–156).
Then, we get to transhumanism (only in the last chapter). The chapter alleges that the US government thought up a transhumanist agenda in 2001 as a strategic military contingency, pointing in particular to the Russian 2045 Movement. According to Estulin, the transhumanist conspiracy in its present form comes from a conference, "The Age of Transitions" (p. 159–161). Using little more than the few links between political or business figures and transhumanism as evidence, he alleges that transhumanism is "steered by the elite" and that "we, the people, have not been invited" (p. 161–162).
The movie Avatar (2009) by James Cameron (mistakenly named as David Cameron in Estulin's book) is connected by Estulin with the 2045 movement's enthusiasm for humans becoming "avatars" by means of being uploaded as digital beings (p. 162–164). Further, the movie Prometheus (2012) by Ridley Scott reflects the "future plans of the elite", according to Estulin (p. 165–170). However, he does not analyze either movie, and fails to note that Peter Weyland (the "elite") in Prometheus is actually a vile character whose search for life extension is a product of his greed and vanity (hardly a glamorization of the search for life extension). If anything, Prometheus joins a long tradition of literature and film that encourages people not to trust transhumanism and life extension, and to fear where such movements could lead.
Exaggerated connections and resemblances between disparate conferences, such as the US government and Russian longevity enthusiasts, are put forward as evidence of a conspiracy (p. 170). Then, we get to Estulin’s real complaint against transhumanism:

“Many people have trouble understanding what the true transhumanism movement is about, and why it’s so evil. After all, it’s just about improving our quality of life, right? Or is transhumanism about social control on a gigantic scale?” (p. 172–173)

Estulin also asserts:

“Transhumanism fills people’s hopes and minds with dreams of becoming superhuman, but the fact of the matter is that the true goal is the removal of that pesky, human free will itself.” (p. 186)

Estulin's (and Engdahl's) belief that Monsanto's work conceals a eugenic "depopulation" agenda (p. 57), as hideous as the crimes of Nazism, is an example of a conspiracy theory appealing to irrational fears. Both of these writers are confusing corporate greed and monopolistic priorities with actual wicked and genocidal intent, assigning motives that do not exist. They are confusing structural evils in the world system with actions by evil men gathered in dark rooms. Estulin also conveniently misses the fact that the indiscriminate poisoning of all life by changing the DNA of every living thing would also threaten the conspirators and their own families. I guess we must assume that the conspirators are also a suicide cult, of the same breed as Jim Jones' "People's Temple".
At the end of the book’s tirade about synthetic biology being a ticket for the elite to control all life, Estulin reverts to a question very prominent in mainstream fora: “can we trust the major corporations with the right thing?” (p. 74). The answer from almost everyone would be no, but not for any of the reasons Estulin has put forward. We cannot trust the major corporations because their only interest is endless profit in the near term, and such profit is maximized by their ability to monopolize and delay real progress. Monsanto and the other agri-giants are only vainly forestalling and trying to contain the real technium for their own greed; there is nothing radical about them.
One thing I find endlessly entertaining about conspiracy theorists is their tendency to take their ideas from Hollywood movies while simultaneously denouncing those same movies as brainwashing and propaganda. For all their warnings to others not to be influenced by the media, conspiracy theorists seem incapable of noticing how impressionable and easily influenced they themselves are.
The book even attacks Darwinian evolution and natural selection, seeing a sinister agenda in them (p. 179–180), which deepens its already strong anti-science message. Estulin connects the theory of evolution with the destructive idea of social Darwinism, and social Darwinism in turn with transhumanism (p. 190–191). The elite plan to “bring society down to the level of beast” by encouraging such social Darwinism, he alleges (p. 211–219).
The bizarre connections speculated between Malthusian theories, Darwin, the British Empire, eugenics and ultimately transhumanism (p. 174–178) ignore the fact that transhumanists and technoprogressives are the one camp in the world most opposed to Malthusianism. No one has more faith than the technoprogressives in humanity’s capacity to meet everyone’s needs and in the idea that the entire world can be fed and sustained.
Perhaps reflecting the book’s confusion, Chapter 1 is dedicated to asserting that the “elite” will reduce everyone to a primitive and chaotic existence, whereas Chapter 2 onwards alleges that the plan is a high-tech dystopia. These two polar opposite conspiracies cannot be reconciled, any more than the paradoxical claims that transhuman technologies will never reach the world’s poor and yet will also be forced on the whole of humanity.
The book’s coverage and understanding of transhumanism is anything but positive (to put it politely). It fails to take account of transhumanism’s real basis as a movement exploring emerging trends to change humanity for the better. Instead, it exaggerates the marginal influence of futurism, popular science and technology enthusiasm on governments and business elites, treating that influence as evidence of a global conspiracy.
A more informative theory about the “elite’s” relationship to transhumanism would instead explore the habit of ignorant opposition among Neoconservatives, warmongers, and the mainstream media towards international peace, development, science, education, web freedom, and ultimately transhumanism itself.

By Harry J. Bentham

Originally published on 20 May 2014 at h+ Magazine

We have nothing to fear from exponential technological change, which will deliver humanity from statism and oligarchy.