The Garden of Earthly Delights (closed exterior panels), Hieronymus Bosch

Right after the Big Bang, in the Planck epoch, the Universe occupied a region of space with a radius of 1.4 × 10⁻¹³ cm – remarkably, equal to the fundamental length characterizing elementary particles. Analogous to the way nearly all cells contain the DNA information required to build the entire organism, every region the size of an elementary particle then held the energy necessary for the Universe’s creation.

As the Universe cooled down, electrons and quarks were the first to appear, the latter combining into protons and neutrons, which formed nuclei in a mere matter of minutes. As the expansion continued, processes unfolded more and more slowly: it took 380,000 years for electrons to start orbiting nuclei, and 100 million years for hydrogen and helium to form the first stars. It wasn’t until 4.5 billion years ago that our young Earth was born, its oceans emerging shortly after, with the first microbes soon calling them home. Life took over our planet in what seems, on the scale of the Universe, a sheer instant, and turned this world into its playground. Butterflies circumvented the absence of any natural blue pigment by growing Christmas-tree-shaped nanometric structures in their wings that reflect only blue wavelengths; fireflies and lanternfish harnessed the chemical reaction between oxygen and luciferin for bioluminescence; and it goes all the way up to the butterfly effect behind the unpredictability of the weather, commonly invoked as the reason a pair of wings flapping in Brazil can lead to a typhoon in Texas. The world as we know it developed slowly, and through continuous evolution and natural selection, the first humans came to life.

Without any doubt, we are an earthly species that never ceases to surprise. We developed rationality, logic, and strategic and critical thinking, yet human nature cannot be fully defined without bringing into the equation our remarkable appetite for art and beauty. In the intricate puzzle that human existence represents, this particular piece has given it valences no other known being possesses. Not all beauty is art, but many artworks, past and present, embody some understanding of beauty.

“To define is to limit,” as Oscar Wilde stated, and indeed we cannot establish clear definitions of art and beauty. Yet great works of art manage to establish a strong thread between creator and receiver. In contrast to this byproduct of human self-expression that encapsulates unique creative behaviour, beauty existed long before our emergence as a species and isn’t bound to it in any way. It is omnipresent, a metaphorical Higgs field observable by anyone who truly wishes to open their eyes. From the formation of Earth’s oceans and butterflies’ blue wings to Euler’s identity and rococo architecture, beauty is a subjective ubiquity. Yet a question remains – why does it evoke such pleasure in our minds? What happens in our brains when we see something beautiful? The question is the subject of an entire field, neuroaesthetics, which has identified an intricate whole-brain response to artistic stimuli. Our puzzling reactions to art can thus be explained by responses similar to “mind wandering”, involving “thoughts about the self, memory, and future” – in other words, art seems to evoke our past experiences, our present conscious self, and our imagination of the future. It should be noted that critics of the field draw attention to the superficiality and oversimplification that may characterize attempts to view art through the lens of neuroscience.
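For readers who have not met it before, Euler’s identity – the equation mathematicians most often cite as the epitome of formal beauty – is worth writing out:

e^(iπ) + 1 = 0

In a single line it ties together five fundamental constants: e, i, π, 1 and 0.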

Still, our fascination with art and beauty is attested since time immemorial — let’s go back hundreds of thousands of years, even before language was invented. The past proves our organic inclination towards pleasing our senses and communicating ourselves to the world and to posterity. Our ancestors felt the need to express themselves by designing exquisite quartz hand-axes, symmetrical teardrops that surpassed purely functional purposes and represent the first acknowledged artistic endeavours. Around 100,000 years ago, the first jewellery (shell necklaces) was purposefully brought from the seashore as accessories for early Homo sapiens in today’s Israel and Algeria. 60,000 years later, we marked the beginning of figurative art with the mammoth-ivory Löwenmensch found in today’s Germany, the oldest known zoomorphic sculpture, half human and half lion. Shortly after, we started depicting the reality of our everyday lives on cave walls: from cows, wild boars and domesticated dogs to dancing people and outlines of human hands; we told our stories the best we could, and we have never stopped since.

We conferred the strongest of feelings on our works, making them a powerful showcase of our minds and souls. Time gradually refined and sublimated our taste, going from Nefertiti’s bust to Johannes Vermeer’s Girl with a Pearl Earring, up to the point where Robert Ryman’s Bridge – a white-on-white painting, a true reflection of minimalism – was sold for $20.6 million. But what are we heading towards?

The future holds the enticing promise of a legacy like no other: passing artistic capabilities on to machines, the ultimate step in making them human-like. How would this be possible, since real art supposedly cannot take shape without the touch of human creativity? The emerging field of computational creativity aims to prove that designing machines exhibiting creative behaviour is, in fact, achievable. The earliest remarkable attempt was AARON, a computer program generating artworks with the help of AI, whose foundations were laid in 1968 by Harold Cohen. It continued to be improved until 2016, but despite the switch from the C programming language to the more artist-friendly Lisp, it remained restricted to hard-coded rules and could not learn on its own. A giant leap was made after Generative Adversarial Networks (GANs), first introduced in 2014, started being used to generate art. A noteworthy example is AICAN, “the first and only AI artist trained on 80,000 of the greatest works in art history”, whose artworks have been exhibited in major New York galleries and dropped as NFTs in 2021. It is complemented by AIs that experiment with fragrances and flavours (such as the ones designed by IBM) or compose emotional soundtrack music (see AIVA). The artistic community has allowed countless other tasks to be taken over by AIs; take ArtPI, an API optimized for visual search based on style, color, light, composition, genre and other characteristics. The world seeks to improve whatever can be improved, technology mimicking whatever can be mimicked, never seeming to run out of options and ideas.
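For readers curious how GANs frame art generation, the core idea is an adversarial game: a generator produces images while a discriminator scores them as real or fake, and each network is trained against the other. The sketch below shows only the two standard loss functions from the original 2014 formulation, with toy numbers in place of real networks; it is illustrative only and is not AICAN’s actual training code.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # The discriminator wants to score real artworks near 1
    # and generated ones near 0, so both log terms push that way.
    return float(-(np.log(d_real) + np.log(1.0 - d_fake)).mean())

def generator_loss(d_fake):
    # Non-saturating generator loss: the generator wants the
    # discriminator to score its output near 1.
    return float(-np.log(d_fake).mean())

# A confident discriminator facing a weak generator:
d = discriminator_loss(np.array([0.99]), np.array([0.01]))  # close to 0
g = generator_loss(np.array([0.01]))                        # large
```

Training alternates gradient steps on these two losses until the generator’s samples become hard to distinguish from the training corpus, which is the equilibrium the adversarial game is driving towards.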

For an indefinite period of time, we will continue to assimilate and replicate the world’s astonishing beauty, transposing it into art and eventually passing it on to machines. This idea of continuity is deeply rooted in human nature, giving us hope for the much-yearned transcendence: we want to feel that we can overcome our transience, loneliness, fears, and limitations. And art is here, for humans and posthumans alike, to serve this purpose for as long as we need it and yield beauty as never seen before.

According to a French physiologist, humans have reached the peak of our height, lifespan and physical fitness.

I suspect that from our vantage point (a narrow snapshot of human evolution), we lack sufficient data to arrive at this sweeping conclusion. Nevertheless, mainstream media is taking this research seriously.

“He is not here; He has risen,” — Matthew 28:6

As billions of Christians around the world get ready to celebrate the Easter festival and holiday, we pause to appreciate the awe-inspiring phenomenon of resurrection.


In religious and mythological contexts, in both Western and Eastern societies, well-known and less common names appear, such as Attis, Dionysus, Ganesha, Krishna, Lemminkainen, Odin, Osiris, Persephone, Quetzalcoatl, and Tammuz, all of whom were reborn in the spark of the divine.

In the natural world, other names emerge, which are more ancient and less familiar, but equally fascinating, such as Deinococcus radiodurans, Turritopsis nutricula, and Milnesium tardigradum, all of whose abilities to rise from the ashes of death, or turn back time to start life again, are only beginning to be fully appreciated by the scientific world.


In the current era, from an information-technology-centric angle, proponents of a technological singularity and transhumanism are placing bets on artificial intelligence, virtual reality, wearable devices, and other non-biological methods to create a future connecting humans to the digital world.

This Silicon Valley “electronic resurrection” model has caused extensive deliberation, and various factions have formed, from those who feel we should slow down and understand the deeper implications of a post-biologic state (Elon Musk, Stephen Hawking, Bill Gates, the Vatican), to those steaming full speed ahead (Ray Kurzweil / Google), betting that humans will shortly be able to “transcend the limitations of biology”.


However, deferring an in-depth Skynet / Matrix discussion for now: is this debate clouding other possibilities that we have forgotten about, or may not yet have fully considered?

Today, we find ourselves at an interesting point in history where the disciplines of regenerative sciences, evolutionary medicine, and complex systems biology are converging to give us an understanding of the cycle of life and death orders of magnitude more complex than that of only a few years ago.

In addition to the aforementioned species that are capable of biologic reanimation and turning back time, we show no less respect for those who possess other superhuman capabilities, such as magnetoreception, electrosensing, infrared imaging, and ultrasound detection, all of which nature has been optimizing over hundreds of millions of years, and which provide important clues to the untapped possibilities that currently exist in direct biological interfaces with the physical fabric of the universe.


The biologic information processing occurring in related aneural organisms and multicellular colony aggregators is no less fascinating, and potentially challenges the notion of the brain as the sole repository of long-term encoded information.

Additionally, studies on memory following the destruction of all, or significant parts, of the brain in regenerative organisms such as planarians, amphibians, metamorphic insects, and small hibernating mammals have wide-ranging implications for our understanding of consciousness, as well as for the centuries-long debate between materialists and dualists as to whether we should focus our attention “in here” or “out there”.

I am not opposed to studying either path, but I feel that we have the potential to learn a lot more about the topic of “out there” in the very near future.


The study of brain death in human beings, and the application of novel tools for neuro-regeneration and neuro-reanimation, for the first time offer us amazing opportunities to start from a clean slate, and answer questions that have long remained unanswered, as well as uncover a knowledge set previously thought unreachable.

Aside from a myriad of applications towards the range of degenerative CNS indications, as well as disorders of consciousness, such work will allow us to open a new chapter related to many other esoteric topics that have baffled the scientific community for years, and fallen into the realm of obscure curiosities.


From the well-documented phenomenon of terminal lucidity in end-stage Alzheimer’s patients, to the mysteries of induced savant syndrome, to more arcane topics, such as the thousands of cases of children who claim to remember previous lives, by studying death, and subsequently the “biotechnological resurrection” of life, we can for the first time peek through the window and build a whole new knowledge base related to our place in, and our interaction with, the very structure of reality.

We are entering a very exciting era of discovery and exploration.


About the author

Ira S. Pastor is the Chief Executive Officer of Bioquark Inc., an innovative life sciences company focused on developing novel biologic solutions for human regeneration, repair, and rejuvenation. He is also on the board of the Reanima Project.

According to the reputable Australian astro-enthusiast journal SkyNews, a leading biologist says that it is surprising we have not already discovered extra-terrestrials that look like us, given the growing number of Earth-like planets now being discovered by astronomers.

Simon Conway Morris, an evolutionary biologist, suggests that aliens resembling humans must have evolved on other planets. He bases the claim on evidence that different species will independently develop similar features, which means that life similar to that on Earth would also develop on equivalent planets.

The theory, known as convergence, says evolution is a predictable process which follows a rigid set of rules. Read the full story at SkyNews.

Philip Raymond is Co-Chair of The Cryptocurrency Standards Association and chief editor at

Originally published at h+ Magazine

Ray Kurzweil’s well-received book, The Singularity is Near, is perhaps the best known book related to transhumanism and presents a view of inevitable technological evolution that closely resembles the claim in the later (2010) book What Technology Wants by Wired co-founder Kevin Kelly.

Kurzweil describes six epochs in the history of information. Each significant form of information is superseded by another in a series of stepping stones, exposing a universal will at work within technology towards extropy (this is seen by Kevin Kelly as intelligence and complexity attaining their maximum state possible). The first epoch is physics and chemistry, and is succeeded by biology, brains, technology, the merger of technology and human intelligence and finally the epoch in which the universe “wakes up”. The final epoch achieves what could be called godhood for the universe’s surviving intelligences (p. 15).

Artificial intelligence, which Kurzweil predicts will compete with and soon after overtake the human brain, will involve reverse-engineering the human brain as a direct offshoot of developing higher resolution in brain scanning (much as genome synthesis was an offshoot of being able to sequence a complete genome) (p. 25–29, 111–198). This is a source of particular excitement to many, because of Kurzweil and Google’s genuine efforts to make it a reality.

An interest in abundance and a read of J. Craig Venter’s Life at the Speed of Light will make Chapter 5 of Kurzweil’s book of particular interest, as it discusses genetics and its relationship to the singularity. Genetics, nanotechnology and robotics are seen as overlapping revolutions that are set to characterize the first half of the Twenty-First Century (p.205). Kurzweil addresses the full understanding of genetics, e.g. knowing exactly how to program and hack our DNA as in J. Craig Venter’s synthetic biology revolution (p. 205–212).

Kurzweil predicts “radical life-extension” on top of the elimination of disease and expansion of human potential through the genetics advancements of teams like J. Craig Venter’s. J. Craig Venter covered life extension and human enhancement in his 2013 book, but also drew special attention to the ongoing engineering of beneficial microbes for purposes of making renewable resources and cleaning the environment. Another prospect for abundance noted by Kurzweil is the idea of cloning meat and other protein sources in a factory (this being an offshoot of medical cloning advances). Far from simply offering life extension to the privileged few, Kurzweil notes that such a development may have the potential to solve world hunger.

To cover the nanotechnology revolution, Kurzweil visits nanotechnology father K. Eric Drexler’s assessments of the pros and cons in this field. In some ways, Kurzweil could be faulted for expecting too much from nanotechnology, since his treatment of the subject contrasts sharply with Drexler’s characterization of it as simply being “atomically precise manufacturing” (APM) and primarily having industrial ramifications. In Radical Abundance, Drexler specifically discourages the view echoed by Kurzweil of “nanobots” swimming in our body in the near future and delivering miracle cures, seeing such expectations as the product of sci-fi stories and media hype.

On the subject of artificial intelligence, there can be no doubt that Kurzweil is ahead of all of us because of his personal background. In his estimate, artificial intelligence reverse-engineered from the human brain will immediately “exceed human intelligence” for a number of reasons even if we only design it to be on par with our intelligence. For example, computers are able to “pool their resources in ways that humans cannot” (p. 259–298). In addition, Kurzweil forecasts:

The advent of strong AI is the most important transformation this century will see. Indeed, it is comparable in importance to the advent of biology itself. It will mean the creation of biology that has finally mastered its own intelligence and discovered means to overcome its limitations. (p. 296)

From our viewpoint in 2014, some of Kurzweil’s predictions could be criticized for being too optimistic. For example, “computers arriving at the beginning of the next decade will become essentially invisible, woven into our clothing, embedded in our furniture and environment”, as well as providing unlimited Wi-Fi everywhere (p. 312). While no doubt some places and instruments exist that might fit this description, they are certainly not in widespread use at this time, nor is there any particular need among society for this to become widespread (except perhaps the Wi-Fi).

Another likely over-optimistic prediction is the view that “full-immersion virtual reality” will be ready for our use by the late 2020s and it will be “indistinguishable from reality” (p. 341). In Kurzweil’s prediction, by 2029 nanobots in our bodies will be able to hack our nervous systems and trick us into believing a false reality every bit as convincing as the life we knew. We are in 2014. There is no full-immersion virtual reality system based on nanotechnology set to be on the market in 2020. A few dedicated gamers have the Oculus Rift (of which there will no doubt be a constant stream of successors ever reducing weight, trying to look “sexier”, and expanding the resolution and frame-rate over at least one decade), while there is no sign whatsoever of the nanotechnology-based neural interface technology predicted by Kurzweil. If nanotech-based full-immersion virtual reality is going to be possible in the 2020s at all, there ought to at least be some rudimentary prototype already in development, but (unless it is a secret military project) time is running out for the prediction to come true.

Part of the book addresses the exciting possibilities of advanced, futuristic warfare. The idea of soldiers who operate robotic platforms, aided by swarms of drones and focused on disrupting the enemy’s ability to communicate, is truly compelling – all the more so because of the unique inside view Kurzweil had of DARPA. Kurzweil sees a form of warfare in which commanders engage one another in virtual and physical battlefields from opposite sides of the globe, experiencing conflicts in which cyber-attack and communication disruption are every bit as crippling to armies as physical destruction (p. 330–335). Then again, this trend (like the idea of building missile-defense shields) may ultimately lead to complacency and false assumptions that our security is “complete”, while foreign powers like Russia and China are also modernizing and have many systems thought to be on par with those of the US. A lot of US military success may be down to picking on vulnerable countries, rather than perfecting a safe and clean form of warfare (most of Saddam’s deadliest weapons were destroyed or used up in the First Gulf War, which alone could account for the US having so few casualties in the 2003 war).

Although he says that the singularity will eliminate the distinction between work and play by making information so easily accessible in our lives, Kurzweil predicts that information will gain more value, making intellectual property more important to protect (p. 339–340). This sentiment is hard to agree with at a time when piracy and (illegally) streaming video without paying are already increasingly a fact of everyone’s life. If all thought and play are going to qualify as creative acts as a result of our eventual integration with machines, it only becomes ever harder to believe that such creative acts will need monetary incentives.

The book discusses at length how to balance the risks and benefits of emerging technologies. Of particular salience is Kurzweil’s view that relinquishing or restraining developments can itself expose us to existential risks (e.g. asteroids). I myself would take this argument further. Failing to create abundance when one has the ability to do so is negligent, and even more morally questionable than triggering a nanotech or biotech disaster that must be overcome in the course of helping people.

Kurzweil goes through what seems like an exhaustive list of criticisms, arming singularitarians with an effective defense of their position. Of interest to me, having penned a response to it myself, was how Kurzweil rebuts the “Criticism from the Rich-Poor Divide” by arguing that poverty is overwhelmingly being reduced and the benefits of digital technology for the poor are undeniable. Indeed, among the world’s poor, there is no doubt that digital technology is good and that it empowers people. Anyone who argues that this revolution is bad for the poor is plainly ignoring the opinions of the actual poor people they claim to be defending. There has been no credible connection between digital technology and the supply of disproportionate benefits to wealthy elites. If anything, digital technology has made the world more equal and can even be regarded as part of a global liberation struggle.

Unfortunately, there is a major argument absent from the book. Kurzweil’s book precedes the revelations of mass surveillance by NSA whistleblower Edward Snowden. As a result, it fails to answer the most important criticism of an imminent singularity I can think of. I would have to call this the “Argument from Civil and Political Rights”. It takes into account the fact that greedy and cruel nation-states (the US being the most dangerous) tend to seek the monopoly of power in the current world order, including technological power. By bridging the gap between ourselves and computers before we create a more benevolent political and social order with less hegemony and less cruelty, we will simply be turning every fiber of our existence over to state agencies and giving up our liberty.

Suppose PRISM or some program like it exists, and my mind can be read by it. In that case, my uploaded existence would be no different from a Gitmo detainee. In fact, just interfacing with such a system for a moment would be equivalent to being sent to Gitmo, if the US government and its agencies exist. It does not matter how benevolent the operators even are. The fact that I am vulnerable to the operators means I am being subjected to a constant and ongoing violation of my civil rights. I could be subjected to any form of cruelty or oppression, and the perpetrator would never be stopped or held accountable.

It gets worse. With reality and virtual reality becoming indistinguishable (as predicted in this book), a new sort of sadist may even emerge that does not know the difference between the two or does not care. History has shown that such sadists are most likely to be the ones who have had more experience with and thus have obtained more power over the system. It is this political or social concern that should be deterring people from uploading themselves right now. If we were uploaded, what followed could never evolve beyond being a constant reflection of the flawed social order at the time when the upload occurred. Do we want to immortalize an abusive and cruel superpower, corporate lobbyists, secret police, or a prison? Are these things actually worth saving for all eternity and disseminating across the universe when we reach the singularity?

Despite the questions I have tried to raise in this review, I am still convinced by the broad idea of the singularity, and Kurzweil articulates it well. The idea, promoted by Max More and quoted by Kurzweil (p. 373), that our view of our role in the universe should be like Nietzsche’s “rope over an abyss”, reaching for a greater existence with technology playing a key role, helps encourage us to take noble risks. However, I believe the noble risks are not risks taken out of desperation to extend our lives and escape death, or risks taken to make ourselves look nice or something else petty. Noble risks are taken to ensure our future or the future of humanity, often at the expense of the present.

I would discourage people from trying to hasten the singularity because of a personal fear of their own death, as this would probably lead to irrational behavior (as occurs with the traditions that promote transcending death by supernatural means). Complications from society and unforeseen abuses, especially by our deeply paranoid and controlling states that are far too primitive to react responsibly to the singularity, are likely to slow everything down.


Editor’s note: concerns about virtual imprisonment or torture are not entirely unfounded; see for example this older article as well as this recent development.

The Lifeboat community doesn’t need me to tell them that a growing number of scientists are dedicating their time and energy to research that could radically alter the human aging trajectory. As a result, we could be on the verge of the end of aging. But from an anthropological and evolutionary perspective, humans have always had the desire to end aging. Most human culture groups on the planet did this by inventing some belief structure incorporating eternal consciousness. In my mind this is a logical consequence of A) realizing you are going to die and B) not knowing how to prevent that tragedy. So from that perspective, I wanted to create a video that contextualized the modern scientific belief in radical life extension with the religious/mythological beliefs of our ancestors.

And if you loved the video, please consider subscribing to The Advanced Apes on YouTube! I’ll be releasing a new video bi-weekly!


Originally posted via The Advanced Apes

Through my writings I have tried to communicate ideas related to how unique our intelligence is and how it is continuing to evolve. Intelligence is the most bizarre of biological adaptations. It appears to be an adaptation of infinite reach: whereas organisms can only be so fast and efficient when it comes to running, swimming, flying, or any other evolved skill, the same finite limits do not appear to apply to intelligence.

What does this mean for our lives in the 21st century?

First, we must be prepared to accept that the 21st century will not be anything like the 20th. All too often I encounter people who extrapolate expected change for the 21st century that mirrors the pace of change humanity experienced in the 20th. This will simply not be the case. Just as cosmologists are well aware of the bizarre accelerating expansion of the universe, so evolutionary theorists are well aware of the increased pace of techno-cultural change. This acceleration shows no signs of slowing down, and few models that incorporate technological evolution predict that it will.

The result of this increased pace of change will likely not just be quantitative. The change will be qualitative as well. This means that communication and transportation capabilities will not just become faster. They will become meaningfully different in a way that would be difficult for contemporary humans to understand. And it is in the strange world of qualitative evolutionary change that I will focus on two major processes currently predicted to occur by most futurists.

Qualitative evolutionary change produces interesting differences in experience. Often this change is referred to as a “metasystem transition”. A metasystem transition occurs when a group of subsystems coordinates its goals and intents in order to solve more problems than the constituent systems could separately. There have been a few notable metasystem transitions in the history of biological evolution:

  • Transition from non-life to life
  • Transition from single-celled life to multi-celled life
  • Transition from decentralized nervous system to centralized brains
  • Transition from communication to complex language and self-awareness

All these transitions share the characteristic described above: subsystems coordinating to form a larger system that solves more problems than they could individually. All transitions increased the rate of change in the universe (i.e., the reduction of entropy production). The qualitative nature of the change is important to understand, and may best be explored through a thought experiment.

Imagine you are a single-celled organism on the early Earth. You exist within a planetary network of single-celled life of considerable variety, all adapted to different primordial chemical niches. This has been the nature of the planet for well over 2 billion years. Then, some single-cells start to accumulate in denser and denser agglomerations. One of the cells comes up to you and says:

I think we are merging together. I think the remainder of our days will be spent in some larger system that we can’t really conceive. We will each become adapted for a different specific purpose to aid the new higher collective.

Surely that cell would be seen as deranged. Yet, as the agglomerations of single-cells became denser, formerly autonomous individual cells start to rely more and more on each other to exploit previously unattainable resources. As the process accelerates this integrated network forms something novel, and more complex than had previously ever existed: the first multicellular organisms.

The difference between living as an autonomous single cell and living as part of a multicellular organism is not just quantitative (i.e., being able to exploit more resources) but also qualitative (i.e., a shift from complete autonomy to being one small part of an integrated whole). Such a shift is difficult to conceive of before it actually becomes a new normative layer of complexity within the universe.

Another example of such a transition, one that may require less imagination, is the transition to complex language and self-awareness. Language is certainly the most important phenomenon separating our species from the rest of the biosphere. It allows us to engage in a new evolution, technocultural evolution, which is essentially a new normative layer of complexity in the universe as well. For this transition, the qualitative leap is also important to understand. If you were an australopithecine, your mode of communication would not necessarily be much more efficient than that of any modern-day great ape. Like all other organisms, your mind would be essentially isolated. Your deepest thoughts, feelings, and emotions could not be fully expressed and understood by other minds within your species. Furthermore, an entire range of thought would be completely unimaginable to you. Anything abstract would not be communicable. You could communicate that you were hungry, but you could not communicate what you thought of particular foods, for example. Language changed all that; it unleashed a new frontier of thought. Not only was it now possible to exchange ideas at a faster rate, but the range of ideas that could be thought also increased.

And so after that digression we come to the main point: the metasystem transition of the 21st century. What will it be? There are two dominant, non-mutually exclusive, frameworks for imagining this transition: technological singularity and the global brain.

The technological singularity is essentially the point in time when the actual agent of techno-cultural change itself changes. At the moment, the modern human mind is the agent of change. But artificial intelligence is likely to emerge this century, and a true artificial intelligence may be the last machine we (i.e., biological humans) ever invent.

The second framework is the global brain: the idea that a collective planetary intelligence is emerging from the Internet, created by increasingly dense information pathways. This would essentially give the Earth an actual sensing, centralized nervous system, and its evolution would mirror, in a sense, the evolution of the brain in organisms and the development of higher-level consciousness in modern humans.

In a sense, both processes could be seen as phenomena that will continue to enable the trends identified by global brain theorist Francis Heylighen:

The flows of matter, energy, and information that circulate across the globe become ever larger, faster and broader in reach, thanks to increasingly powerful technologies for transport and communication, which open up ever-larger markets and forums for the exchange of goods and services.

Some view the technological singularity and the global brain as competing futurist hypotheses. However, I see them as deeply symbiotic phenomena. If the metaphor of a global brain is apt, the internet at the moment forms a type of primitive and passive intelligence. However, as the internet starts to play an ever greater role in human life, and as all human minds gravitate towards communicating and interacting in this medium, the internet should become an intelligent mediator of human interaction. Heylighen explains how this should be achieved:

the intelligent web draws on the experience and knowledge of its users collectively, as externalized in the “trace” of preferences that they leave on the paths they have traveled.

This is essentially how the brain organizes itself: individual neurons respond to particular shapes, emotions, and movements, and by connecting their activity the brain composes a “global picture”, or an individual consciousness.
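The trace-reinforcement mechanism Heylighen describes can be made concrete with a small sketch. The idea is stigmergic: each user who travels a path through the network leaves a trace that strengthens its links, unused links slowly fade, and well-worn routes come to guide future travelers. The class, parameter values, and toy graph below are illustrative assumptions, not any actual global brain implementation.

```python
# A minimal sketch of trace-based ("stigmergic") self-organization:
# traversals reinforce links, decay forgets unused ones, and the
# strongest link from a node becomes the recommended next step.
# All names and numbers here are illustrative assumptions.
from collections import defaultdict

class TraceNetwork:
    def __init__(self, reinforcement=1.0, decay=0.9):
        self.weights = defaultdict(float)  # (src, dst) -> link strength
        self.reinforcement = reinforcement
        self.decay = decay

    def traverse(self, path):
        """A user walking a path leaves a trace on each link along it."""
        for src, dst in zip(path, path[1:]):
            self.weights[(src, dst)] += self.reinforcement

    def tick(self):
        """Unused links slowly fade, like forgetting."""
        for edge in self.weights:
            self.weights[edge] *= self.decay

    def recommend(self, node):
        """Suggest the most strongly reinforced next step from a node."""
        out = {d: w for (s, d), w in self.weights.items() if s == node}
        return max(out, key=out.get) if out else None

net = TraceNetwork()
for _ in range(3):
    net.traverse(["home", "search", "article"])  # a popular route
net.traverse(["home", "ads"])                    # a rarely taken route
net.tick()
print(net.recommend("home"))  # the well-worn route wins: "search"
```

No single user designed the recommendation; it emerged from the accumulated traces of many traversals, which is the sense in which the medium itself becomes the mediator.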

The technological singularity fits naturally within this evolution. The biological human brain can only connect so deeply with the Internet: we must externalize our interaction with it through (increasingly small) devices like laptops and smartphones. However, artificial intelligence, and biological intelligence enhanced with nanotechnology, could form a much deeper connection with the Internet. Such a development could, in theory, create an all-encompassing information-processing system. Our minds (largely “artificial”) would form the neurons of the system, and a decentralized order would emerge from their dynamic interactions. This would be quite analogous to the way higher-level complexity has emerged in the past.

So what does this mean for you? Many futurists debate the likely timing of this transition, but current predictions converge on a median of 2040–2050. As we approach this era, we should expect many fundamental aspects of our current institutions to change profoundly. Several new ethical issues will also arise, including issues of individual privacy and of government and corporate control. All issues that deserve a separate post.

Fundamentally, this also means that your consciousness and your nature will change considerably throughout this century. The thought may sound bizarre and even frightening, but only if you believe that human intelligence and nature are static and unchanging. The reality is that human intelligence and nature are an ever-evolving process. The only difference in this transition is that you will actually be conscious of the evolution itself.

Consciousness has never experienced a metasystem transition (since the last metasystem transition was towards higher-level consciousness!). So in a sense, a post-human world can still include your consciousness. It will just be a new and different consciousness. I think it is best to think about it as the emergence of something new and more complex, as opposed to the death or end of something. For the first time, evolution will have woken up.

Earth is a hostile place — and that’s even before one starts attending school. Even when life first sparked into being, it had to evolve defenses to deal with a number of toxins, such as damaging ultraviolet light. Then there were toxic elements ranging from iron to oxygen to overcome; later, there was DDT and other toxic chemicals; and of course, there are all those dreaded cancers.

In Evolution In A Toxic World: How Life Responds To Chemical Threats [Island Press, 2012: Guardian Bookshop; Amazon UK; Amazon US], environmental toxicologist Emily Monosson outlines three billion years of evolution shaped by the hardships of living on this deadly planet, giving rise to processes ranging from excreting and transforming harmful substances to stowing them away. The subtitle erroneously suggests these toxins are only chemical in nature, but the author actually discusses more than this one subclass of toxins.

What arose to deal with these toxins is a plethora of specialised, targeted proteins — enzymes that capture toxins and repair the damage they cause. By following the origin and progression of these shared enzymes, each evolved to deal with specific toxins, the author traces their history from the first bacteria-like organisms to modern humans. Comparing the new field of evolutionary toxicology to biomedical research, Dr Monosson notes: “In light of evolution, biomedical researchers are now asking questions that might seem antithetical to medicine”.
