
Would you have your brain preserved? Do you believe your brain is the essence of you?

To Ken Hayworth, a noted American neuroscientist (PhD) and futurist, the answer is an emphatic, “Yes.” He is currently developing machines and techniques to map brain tissue at the nanometer scale — the key, he believes, to encoding our individual identities.

A self-described transhumanist and President of the Brain Preservation Foundation, Hayworth aims to perfect existing preservation techniques, such as cryonics, and to explore emerging alternatives that could change the status quo. Currently, no brain preservation option offers systematic, scientific evidence of how much human brain tissue is actually preserved by today’s experimental methods. Those methods include vitrification, the procedure used in cryonics to try to prevent human organs from being destroyed by ice formation when tissue is cooled for cryopreservation.

Hayworth believes we can achieve his vision of preserving an entire human brain at an accepted and proven standard within the next decade. If Hayworth is right, is there a countdown to immortality?

To find out more, please take a look at the Galactic Public Archives’ newest video. We’d love to hear your thoughts.


The Lifeboat community doesn’t need me to tell them that a growing number of scientists are dedicating their time and energy to research that could radically alter the human aging trajectory. As a result, we could be on the verge of the end of aging. But from an anthropological and evolutionary perspective, humans have always desired to end aging. Most human culture groups on the planet addressed this by inventing some belief structure incorporating eternal consciousness. In my mind this is a logical consequence of A) realizing you are going to die and B) not knowing how to prevent that tragedy. So from that perspective, I wanted to create a video that contextualizes the modern scientific belief in radical life extension with the religious/mythological beliefs of our ancestors.

And if you loved the video, please consider subscribing to The Advanced Apes on YouTube! I’ll be releasing a new video bi-weekly!


“…and on the third day he rose again…”

If we approach the subject from a non-theist point of view, what we have is a re-boot: a restore of a previously working “system image”. Can we restore a person to the last known working state prior to system failure?

As our biological (analog) life gets more entwined with the digital world we have created, chances are there might be options worth exploring. It all comes down to “sampling” — taking snapshots of our analog lives and storing them digitally. Today, with reasonable precision, we can sample, store and re-create most of our primary senses digitally: sight via cameras, sound via microphones, touch via haptics; even scents can be sampled and/or synthesized with remarkable accuracy.


Life as Routines, Sub-routines and Libraries:

In the story “Memories with Maya”, Krish, the AI researcher, puts forward in simple language some of his theories to the main character, Dan:

“Humans are creatures of habit,” he said. “We live our lives following the same routine day after day. We do the things we do with one primary motivation–comfort.”
“That’s not entirely true,” I said. “What about random acts. Haven’t you done something crazy or on impulse?”
“Even randomness is within a set of parameters; thresholds,” he said.

If we look at it, the average person’s week can be broken down into typical activities per day, with a branch out for the weekend. The day can be further broken down into time-of-day routines. Essentially, what we have are sub-routines, routines and libraries run in an infinite loop, until wear and tear on mechanical parts leads to sector failures. Viruses are also thrown into the mix for good measure.
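Taken literally, Krish’s model reads like a program. Here is a playful sketch of that idea; the routine names and the “impulsiveness” threshold are invented purely for illustration:

```python
import random

# A week reduced to routines and sub-routines, as Krish describes.
WEEKDAY_ROUTINE = ["wake", "commute", "work", "commute", "dinner", "sleep"]
WEEKEND_BRANCH = ["wake", "errands", "leisure", "sleep"]

def live_one_week(impulsiveness=0.05):
    """Run one week of sub-routines. Even 'random acts' stay within
    a set of parameters -- thresholds -- as Krish put it."""
    log = []
    for day in range(7):
        routine = WEEKEND_BRANCH if day >= 5 else WEEKDAY_ROUTINE
        for activity in routine:
            # Randomness within thresholds: an occasional impulse,
            # but drawn from a fixed, bounded repertoire.
            if random.random() < impulsiveness:
                activity = random.choice(["road_trip", "skydive", "call_old_friend"])
            log.append((day, activity))
    return log

week = live_one_week()  # the infinite loop, truncated to one iteration
```

Even the “crazy” branches are drawn from a bounded list, which is exactly Krish’s point.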

Remember: we are looking at the typical lives of a good section of society — those who have resigned their minds to accepting life as it comes, satisfied in being able to afford creature comforts every once in a while. We aren’t looking at the outliers — the Einsteins, the Jobs, the Mozarts. This is ironic, in that it would be easier to back up, restore, and resurrect the average person than it would be to do the same for the outliers.


Digital Breadcrumbs — The clues we leave behind.

What exactly do social media sites mean by “What’s on your mind?” Is it an invitation to digitize our emotions, our thoughts, our experiences via words, pictures, sounds and videos? Every minute, gigabytes (a conservative estimate) of analog life are being digitized and uploaded to the metaphoric “Cloud” — a rich mineral resource, ripe for data mining by deep-learning systems. At some point in the near future, would AI combined with technologies such as quantum archeology, augmented reality and nano-tech allow us to run our brains (minds?) on a substrate-independent platform?

If that proposition turns your geek on, here are some ways you can live out a modern-day version of Hansel and Gretel, ensuring you find your way home by leaving as many digital breadcrumbs as you can via:

Mind Files — Terasem and Lifenaut:

What is the LifeNaut Project?

The long-term goal is to test whether, given a comprehensive database saturated with the most relevant aspects of an individual’s personality, future intelligent software will be able to replicate an individual’s consciousness. So, perhaps in the next 20 or 30 years, technology will be developed to upload these files, together with futuristic software, into a body of some sort – perhaps cellular, perhaps holographic, perhaps robotic. The project is funded by the Terasem Movement Foundation, Inc.

The LifeNaut Project is organized as a research experiment designed to test these hypotheses:

(1) a conscious analog of a person may be created by combining sufficiently detailed data about the person (“mindfile & biofile”) using future consciousness software (“mindware”), and

(2) such a conscious analog may be downloaded into a biological or nanotechnological body to provide life experiences comparable to those of a typically birthed human.

Sign-up and start creating your MindFile today.

Voice Banking:


Read about voice banking, speech reconstruction, and how a natural human voice can be preserved and re-constructed. Voice banking might help even in cases where there is no BSOD (blue screen of death) scenario involved.

Roger Ebert, the noted film critic, got his “natural” voice back using such technology.

Hear Obama’s voice re-constructed:

Full Body Performance Capture:

Without us even knowing it, we are Transhumans at heart. Owners of the Xbox gaming console and the Kinect have at their disposal hardware that, until just a couple of years ago, was only within reach of large corporations and Hollywood studios. Motion capture, laser scanning, full-body 3D models and performance capture were not accessible to lay-people.

Today, this technology can contribute toward backup and digital resurrection. A performance capture session can digitally encode the essence of a person’s gait — the way they walk, pout, and express themselves — a person’s unique digital signature. The next video shows this.

“It was easy to create a frame for him, Dan,” he said. “In the time that the cancer was eating away at him, the day’s routine became more predictable.

At first he would still go to work, then come home and spend time with us. Then he couldn’t go anymore and he was at home all day.

I knew his routine so well it took me 15 minutes to feed it in. There was no need for any random branches.”

A performance capture file could also be stored as part of a MindFile. LifeNaut and other cryonics service providers could benefit from such invaluable data when re-booting a person.

“And sometimes when we touch”:

Perhaps one of the most difficult of our senses to recreate is that of touch. Science is already making giant strides in this area and, looking at it from a more human perspective, touch is one of the more direct and cherished sensations that defines humanity. Touch can convey emotion.

…That’s the point of this kind of technology – giving people their humanity back. You could argue that a person is no less of a human after losing a limb, but those who suffer through it would likely tell you that there is a feeling of loss. Getting that back may be physically gratifying, but it’s probably even more psychologically gratifying… — Nigel Ackland, on his bebionic arm.

If a person’s unique “touch” signature can be digitized, every nuance can be forever preserved… both for the benefit of the owner of the file and for their loved ones, experiencing and remembering shared intimate moments.


‘Let there be light,’ said the CGI-God, and there was light… and God Rays.

We were out in the desert; barren land, and our wish was that it be transformed into a green oasis; a tropical paradise.

And so our demigods went to work in their digital sand-boxes.
Then, one of the CGI-Gods populated the land with Dirrogates – digital people in her own likeness.

Welcome to the world… created in Real-time.

A whole generation of people is growing up in such virtual worlds, accustomed to travelling across miles and miles of photo-realistic terrain on their gaming rigs. An entire generation of Transhumans is evolving (perhaps even unknown to them). With each passing year, hardware and software under the command of human intelligence get ever closer to simulating the real world, down to physics, caustics and other phenomena exclusive to planet Earth. How is all this voodoo being done?

Enter –the Game Engine.

All output in the video above is rendered in real-time on a single modern gaming PC. That’s right… in case you missed it, all of the visuals were generated in real-time by a single PC that can sit on a desk. The “engine” behind it is the CryEngine 3. A far more customized and amped-up version of this technology, called Cinebox, is a dedicated offering aimed at cinematography, with tools and functions that film-makers are familiar with. It is these advances in technology, these tools that film-makers will use, together with human performance capture and digital assets such as laser-scanned point clouds of real-world architecture, that will acclimatize us to the virtual worlds they build. This is the technology that will play its part and segue us into Transhumanism, rather than a radical crusade to “convert” humanity to the movement.

  • Mind Uploads need a World to roam in:
Laser-scanned buildings and even whole neighborhood blocks are now commonplace in large-budget Hollywood productions. A detailed point cloud needs massive compute power to render. High-end game engines, when daisy-chained, can render and simulate these large neighborhoods with real-time animated atmosphere, and populate the land with photo-realistic flora and fauna. Lest we forget… in stereoscopic 3D, for full immersion of our visual cortex.

  • Real World Synced Weather:
Game Engines have powerful and advanced TOD (time of day) editors. Now imagine if a TOD editor module and a weather system could pull data such as wind direction, temperature and weather conditions from real-world sensors, or a real-time data source.
If this could be done, then the augmented world running on the game engine could have details such as leaves blowing in the correct direction. See the video above at around the 0:42 mark for a feel of what I’m aiming for.
Also: the stars would all align, and the night sky of the virtual world would match the real one without error, though there would be nothing stopping “God” from introducing a blue moon in the sky.
At around the 0:20 mark, the video above shows one of the “Demi-Gods” at work: populating Virtual Earth with exotic trees and forests… mind-candy to keep an uploaded mind from homesickness. As Transhumans, whether as full mind uploads, as augmented humans with bio-mechanical enhancements, or indeed even as naturals, it is expected that we will augment the real world with our dreams of a tropical paradise — Heaven can indeed be a place on Earth.
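A sketch of what such a weather-synced TOD module might look like. Every name here is hypothetical: real engines (CryEngine included) expose their own time-of-day and wind interfaces, and a real deployment would query a live sensor network or weather service rather than the canned reading used below:

```python
def fetch_station_reading():
    """Stand-in for a query to a real-world weather sensor or service."""
    return {"hour": 17.5, "wind_deg": 225.0, "wind_speed_ms": 4.2, "temp_c": 31.5}

class TimeOfDayModule:
    """Hypothetical engine-side TOD/weather module, mirroring the
    editor modules described in the text."""

    def __init__(self):
        self.state = {}

    def sync(self, reading):
        # Drive the virtual sun from the real clock...
        self.state["sun_hour"] = reading["hour"]
        # ...and the foliage from the real wind, so leaves blow
        # in the correct direction.
        self.state["wind_deg"] = reading["wind_deg"]
        self.state["wind_speed"] = reading["wind_speed_ms"]
        # Hot desert air could toggle a heat-haze post effect.
        self.state["heat_haze"] = reading["temp_c"] > 30
        return self.state

synced = TimeOfDayModule().sync(fetch_station_reading())
```

Polled once a minute, a loop like this would keep the virtual sky, wind and haze in step with the physical world outside.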

We were tired of our mundane lives in an un-augmented biosphere. As Transhumans, some of us booted up our mind-uploads while yet others ventured out into the desert of the real world in temperature-regulated nano-clothing, experiencing a tropical paradise… even as the “naturals” would deny its very existence.

Recently, scientists have said we may really be living in a simulation after all. The Mayans stopped counting time not because they predicted the Winter Solstice of 2012 would be the end of the world… it might be because they saw 2013 heralding the dawn of a new era. An era that sees the building blocks come into place for a journey heading into eventual ‘Singularity’.

Dir·ro·gate: a portmanteau of Digital + Surrogate. Borrowed from the novel “Memories with Maya”.
Author’s note: All images, videos and products mentioned are copyright to their respective owners and brands, and there is no implied connection between the brands and Transhumanism.

This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.


Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial General Intelligence or Whole-Brain-Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure of an impending intelligence explosion (e.g. Kurzweil’s measure is when enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below that may indicate that the imminence of a coming intelligence explosion is more impacted by basic processing speed – or instructions per second (IPS), regardless of cost or resource requirements per unit of computation – than it is by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than to implement AGI alone – or rather, that using WBE as a mediator of an increase in the rate of progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than working on AGI directly.

Loaded Uploads:

Petascale supercomputers in existence today exceed the processing-power requirements estimated by Kurzweil, Moravec, and Storrs-Hall [3]. If a wealthy individual were uploaded onto a petascale supercomputer today, they would have the same computational resources that, according to Kurzweil’s figures, the average person will have in 2019, when processing power equal to that of the human brain (which he estimates at 20 quadrillion calculations per second) becomes widely affordable. While we may not yet have the necessary software to emulate a full human nervous system, the bottleneck is progress in the field of neurobiology rather than software performance in general. What is important is that the raw processing power estimated by some has already been surpassed – and the possibility of creating an upload may not have to wait for drastic increases in computational price performance.
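The comparison here is simple arithmetic. A minimal check, using only the figures cited in this essay (the specific machine sizes in the examples are illustrative):

```python
# Kurzweil's estimate from the text: the human brain performs about
# 20 quadrillion (2e16) calculations per second. "Petascale" means
# at least 1e15 operations per second.
BRAIN_CPS_KURZWEIL = 2e16

def meets_brain_estimate(machine_petaflops):
    """Does a machine of the given size reach Kurzweil's estimate?"""
    return machine_petaflops * 1e15 >= BRAIN_CPS_KURZWEIL

print(meets_brain_estimate(20))    # True: 20 petaflops sits exactly at the estimate
print(meets_brain_estimate(33.8))  # True: a ~34-petaflop system exceeds it
print(meets_brain_estimate(1))     # False: 1 petaflop falls 20x short
```

The point is that the threshold is one of raw throughput, not of price per operation.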

The rate of signal transmission in electronic computers has been estimated to be roughly 1 million times as fast as the signal transmission speed between neurons, which is limited by the rate of passive chemical diffusion. Since the rate of signal transmission equates with the subjective perception of time, an upload would presumably experience the passing of time one million times faster than biological humans. If Yudkowsky’s observation [4] that this would be the equivalent of experiencing all of history since Socrates every 18 “real-time” hours is correct, then such an emulation would experience 250 subjective years for every hour and roughly 4 years a minute. A day would be equal to 6,000 years, a week to 42,000 years, and a 30-day month to 180,000 years.
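To keep the conversions straight, here is the arithmetic behind those figures, starting from the assumed ratio of 250 subjective years per real-time hour:

```python
# Subjective time at the assumed ratio of 250 subjective years
# per "real-time" hour (the speed-up discussed above).
YEARS_PER_HOUR = 250

def subjective_years(real_hours):
    """Subjective years experienced by the emulation in the given real-time hours."""
    return YEARS_PER_HOUR * real_hours

minute = subjective_years(1 / 60)   # ~4.2 years per real minute
day = subjective_years(24)          # 6,000 years per real day
week = subjective_years(24 * 7)     # 42,000 years per real week
month = subjective_years(24 * 30)   # 180,000 years per 30-day real month
```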

Moreover, these figures use the signal transmission speed of current, electronic paradigms of computation only, and thus the projected increase in signal-transmission speed brought about through the use of alternative computational paradigms, such as 3-dimensional and/or molecular circuitry or Drexler’s nanoscale rod-logic [5], can only be expected to increase such estimates of “subjective speed-up”.

The claim that the subjective perception of time and the “speed of thought” is a function of the signal-transmission speed of the medium or substrate instantiating such thought or facilitating such perception-of-time follows from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that the mind is instantiated by the physical operations of the brain. Thought and perception of time (or the rate at which anything is perceived really) are experiential modalities that constitute a portion of the brain’s cumulative functional modalities. If the functional modalities of the brain are instantiated by the physical operations of the brain, then it follows that increasing the rate at which such physical operations occur would facilitate a corresponding increase in the rate at which such functional modalities would occur, and thus the rate at which the experiential modalities that form a subset of those functional modalities would likewise occur.

Petascale supercomputers have surpassed the rough estimates made by Kurzweil (20 petaflops, or 20 quadrillion calculations per second), Moravec (100,000 MIPS), and others. Most argue that we still need to wait for software improvements to catch up with hardware improvements. Others argue that even if we don’t understand how the operation of the brain’s individual components (e.g. neurons, neural clusters, etc.) converge to create the emergent phenomenon of mind – or even how such components converge so as to create the basic functional modalities of the brain that have nothing to do with subjective experience – we would still be able to create a viable upload. Nick Bostrom & Anders Sandberg, in their 2008 Whole Brain Emulation Roadmap [6] for instance, have argued that if we understand the operational dynamics of the brain’s low-level components, we can then computationally emulate such components and the emergent functional modalities of the brain and the experiential modalities of the mind will emerge therefrom.

Mind Uploading is (Largely) Independent of Software Performance:

Why is this important? Because if we don’t have to understand how the separate functions and operations of the brain’s low-level components converge so as to instantiate the higher-level functions and faculties of brain and mind, then we don’t need to wait for software improvements (or progress in methodological implementation) to catch up with hardware improvements. Note that for the purposes of this essay “software performance” will denote the efficacy of the “methodological implementation” of an AGI or Upload (i.e. designing the mind-in-question, regardless of hardware or “technological implementation” concerns) rather than how optimally software achieves its effect(s) for a given amount of available computational resources.

This means that if the estimates for sufficient processing power to emulate the human brain noted above are correct then a wealthy individual could hypothetically have himself destructively uploaded and run on contemporary petascale computers today, provided that we can simulate the operation of the brain at a small-enough scale (which is easier than simulating components at higher scales; simulating the accurate operation of a single neuron is less complex than simulating the accurate operation of higher-level neural networks or regions). While we may not be able to do so today due to lack of sufficient understanding of the operational dynamics of the brain’s low-level components (and whether the models we currently have are sufficient is an open question), we need wait only for insights from neurobiology, and not for drastic improvements in hardware (if the above estimates for required processing-power are correct), or in software/methodological-implementation.

If emulating the low-level components of the brain (e.g. neurons) will give rise to the emergent mind instantiated thereby, then we don’t actually need to know “how to build a mind” – whereas we do in the case of an AGI (which for the purposes of this essay shall denote an AGI not based on the human or mammalian nervous system, even though an upload might qualify as an AGI according to many people’s definitions). This follows naturally from the conjunction of the premises that 1. the system we wish to emulate already exists and 2. we can create (i.e. computationally emulate) the functional modalities of the whole system by understanding only the operation of the low-level components’ functional modalities.

Thus, I argue that a wealthy upload who did this could conceivably accelerate the coming of an intelligence explosion by such a large degree that it could occur before computational price performance drops to a point where the basic processing power required for such an emulation is available for a widely-affordable price, say for $1,000 as in Kurzweil’s figures.

Such a scenario could make basic processing power, or Instructions-Per-Second, more indicative of an imminent intelligence explosion or hard take-off scenario than computational price performance.

If we can achieve human whole-brain-emulation even one week before we can achieve AGI (the cognitive architecture of which is not based on the biological human nervous system) and set this upload to work on creating an AGI, then such an upload would have, according to the “subjective-speed-up” factors given above, 42,000 subjective years in which to succeed in designing and implementing an AGI for every one real-time week that normatively-biological AGI workers have to succeed.

The subjective-perception-of-time speed-up alone would be enough to greatly improve his/her ability to accelerate the coming of an intelligence explosion. Other features, like increased ease-of-self-modification and the ability to make as many copies of himself as he has processing power to allocate to, only increase his potential to accelerate the coming of an intelligence explosion.

This is not to say that we can run an emulation without any software at all. Of course we need software – but we may not need drastic improvements in software, or a reinventing of the wheel in software design.

So why should we be able to simulate the human brain without understanding its operational dynamics in exhaustive detail? Are there any other processes or systems amenable to this circumstance, or is the brain unique in this regard?

There is a simple reason why this claim seems intuitively doubtful. One would expect that we must understand the underlying principles of a given technology’s operation in order to implement and maintain it. This is, after all, the case for all other technologies throughout the history of humanity. But the human brain is categorically different in this regard, because it already exists.

If, for instance, we found a technology and wished to recreate it, we could do so by copying the arrangement of its components. But in order to make any changes to it, or any variations on its basic structure or principles of operation, we would need to know how to build it, maintain it, and predictively model it with a fair amount of accuracy. In order to make any new changes, we need to know how such changes will affect the operation of the other components – and this requires being able to predictively model the system. If we don’t understand how changes will impact the rest of the system, then we have no reliable means of implementing any changes.

Thus, if we seek only to copy the brain, and not to modify or augment it in any substantial way, then it is wholly unique in the fact that we don’t need to reverse-engineer its higher-level operations in order to instantiate it.

This approach should be considered a category separate from reverse-engineering. It would indeed involve a form of reverse-engineering on the scale we seek to simulate (e.g. neurons or neural clusters), but it lacks many features of reverse-engineering by virtue of the fact that we don’t need to understand the system’s operation on all scales. For instance, knowing the operational dynamics of the atoms composing a larger system (e.g. any mechanical system) wouldn’t necessarily translate into knowledge of the operational dynamics of its higher-scale components. The approach that mind-uploading falls under, in which reverse-engineering at a small enough scale is sufficient to recreate the system (provided we don’t seek to modify its internal operation in any significant way), I will call Blind Replication.

Blind Replication disallows any sort of significant modification, because if one doesn’t understand how processes affect other processes within the system, then one has no way of knowing how modifications will change other processes and thus the emergent function(s) of the system. We wouldn’t have a way to translate functional or optimization objectives into changes made to the system that would facilitate them. There are also liability issues, in that one wouldn’t know how the system would work in different circumstances, and would have no guarantee of such systems’ safety or their vicarious consequences. So governments couldn’t be sure of the reliability of systems made via Blind Replication, and corporations would have no way of optimizing such systems so as to increase a given performance metric in an effort to increase profits, and would indeed be unable to obtain intellectual property rights over a technology whose inner workings or “operational dynamics” they cannot describe.

However, government and private industry wouldn’t be motivated by such factors (that is, ability to optimize certain performance measures, or to ascertain liability) in the first place, if they were to attempt something like this – since they wouldn’t be selling it. The only reason I foresee government or industry being interested in attempting this is if a foreign nation or competitor, respectively, initiated such a project, in which case they might attempt it simply to stay competitive in the case of industry and on equal militaristic defensive/offensive footing in the case of government. But the fact that optimization-of-performance-measures and clear liabilities don’t apply to Blind Replication means that a wealthy individual would be more likely to attempt this, because government and industry have much more to lose in terms of liability, were someone to find out.

Could Upload+AGI be easier to implement than AGI alone?

This means that the creation of an intelligence with a subjective perception of time significantly greater than that of unmodified humans (what might be called Ultra-Fast Intelligence) may be more likely to occur via an upload than via an AGI, because the creation of an AGI is largely determined by increases in both computational processing and software performance/capability, whereas the creation of an upload may be determined by and large by processing power, and thus remain largely independent of the need for significant improvements in software performance or “methodological implementation”.

If the premise that such an upload could significantly accelerate a coming intelligence explosion (whether by using his/her comparative advantages to recursively self-modify his/herself, to accelerate innovation and R&D in computational hardware and/or software, or to create a recursively-self-improving AGI) is taken as true, it follows that even the coming of an AGI-mediated intelligence explosion specifically, despite being impacted by software improvements as well as computational processing power, may be more impacted by basic processing power (e.g. IPS) than by computational price performance — and may be more determined by computational processing power than by processing power + software improvements. This is only because uploading is likely to be largely independent of increases in software (i.e. methodological as opposed to technological) performance. Moreover, development in AGI may proceed faster via the vicarious method outlined here – namely having an upload or team of uploads work on the software and/or hardware improvements that AGI relies on – than by directly working on such improvements in “real-time” physicality.

Virtual Advantage:

The increase in subjective perception of time alone (if Yudkowsky’s estimate is correct, a ratio of 250 subjective years for every “real-time” hour) gives him/her a massive advantage. It would also likely allow them to counteract and negate any attempts made from “real-time” physicality to stop, slow or otherwise deter them.

There is another feature of virtual embodiment that could increase the upload’s ability to accelerate such developments. Neural modification, with which he could optimize his current functional modalities (e.g. what we coarsely call “intelligence”) or increase the metrics underlying them, thus amplifying his existing skills and cognitive faculties (as in Intelligence Amplification or IA), as well as creating categorically new functional modalities, is much easier from within virtual embodiment than it would be in physicality. In virtual embodiment, all such modifications become a methodological, rather than technological, problem. To enact such changes in a physically-embodied nervous system would require designing a system to implement those changes, and actually implementing them according to plan. To enact such changes in a virtually-embodied nervous system requires only a re-organization or re-writing of information. Moreover, in virtual embodiment, any changes could be made, and reversed, whereas in physical embodiment reversing such changes would require, again, designing a method and system of implementing such “reversal-changes” in physicality (thereby necessitating a whole host of other technologies and methodologies) – and if those changes made further unexpected changes, and we can’t easily reverse them, then we may create an infinite regress of changes, wherein changes made to reverse a given modification in turn creates more changes, that in turn need to be reversed, ad infinitum.

Thus self-modification (and especially recursive self-modification), towards the purpose of intelligence amplification into Ultraintelligence [7], is easier (i.e. necessitating a smaller technological and methodological infrastructure – that is, the required host of methods and technologies needed by something – and thus less cost as well) in virtual embodiment than in physical embodiment.

These recursive modifications not only further maximize the upload’s ability to think of ways to accelerate the coming of an intelligence explosion, but also maximize his ability to further self-modify towards that very objective (thus creating the positive feedback loop critical to I.J. Good’s intelligence explosion hypothesis) – or in other words, maximize his ability to maximize his general ability in anything.

But to what extent is the ability to self-modify hampered by the critical feature of Blind Replication mentioned above – namely, the inability to modify and optimize various performance measures by virtue of the fact that we can’t predictively model the operational dynamics of the system-in-question? Well, an upload could copy himself, enact any modifications, and see the results – or indeed, make a copy to perform this change-and-check procedure. If the inability to predictively model a system made through the “Blind Replication” method does indeed problematize the upload’s ability to self-modify, it would still be much easier to work towards being able to predictively model it, via this iterative change-and-check method, due to both the subjective-perception-of-time speedup and the ability to make copies of himself.
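The change-and-check loop can be stated compactly. This is an illustrative sketch only; the mind representation, the mutation, and the benchmark below are toy stand-ins:

```python
import random

def change_and_check(mind, mutate, benchmark, iterations=100):
    """Self-modify a blindly-replicated system without a predictive
    model: try each change empirically on a copy, and keep it only
    if a benchmark improves."""
    best = dict(mind)                # the upload can copy itself cheaply
    best_score = benchmark(best)
    for _ in range(iterations):
        candidate = dict(best)       # disposable copy
        mutate(candidate)            # an unmodeled, speculative change
        score = benchmark(candidate)
        if score > best_score:       # keep only verified improvements
            best, best_score = candidate, score
    return best, best_score

# Toy stand-ins, using 'synaptic density' as the tunable metric.
random.seed(0)

def mutate(m):
    m["synaptic_density"] += random.uniform(-0.1, 0.1)

def benchmark(m):
    return m["synaptic_density"]

improved, score = change_and_check({"synaptic_density": 1.0}, mutate, benchmark)
```

Because failed candidates are simply discarded, the procedure never needs to predict a change’s effects in advance, which is precisely what Blind Replication forbids.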

It is worth noting that it might be possible to predictively model (and thus make reliable or stable changes to) the operation of neurons, without being able to model how this scales up to the operational dynamics of the higher-level neural regions. Thus modifying, increasing or optimizing existing functional modalities (e.g. increasing synaptic density in neurons, or increasing the range of usable neurotransmitters – thus increasing the potential information density of a given signal or synaptic transmission) may be significantly easier than creating categorically new functional modalities.

Increasing the Imminence of an Intelligent Explosion:

So in what ways could the upload use his new advantages and abilities to actually accelerate the coming of an intelligence explosion? He could apply them to self-modification, or to the creation of a Seed-AI (more technically, a recursively self-modifying AI).

He could also accelerate its imminence vicariously by working on accelerating the foundational technologies and methodologies (or in other words the technological and methodological infrastructure of an intelligence explosion) that largely determine its imminence. He could apply his new abilities and advantages to designing better computational paradigms, new methodologies within existing paradigms (e.g. non-Von-Neumann architectures still within the paradigm of electrical computation), or to differential technological development in “real-time” physicality towards such aims – e.g. finding an innovative means of allocating assets and resources (i.e. capital) to R&D for new computational paradigms, or optimizing current computational paradigms.

Thus there are numerous methods of indirectly increasing the imminence (or the likelihood of imminence within a certain time-range, which is a measure with less ambiguity) of a coming intelligence explosion – and many new ones no doubt that will be realized only once such an upload acquires such advantages and abilities.

Intimations of Implications:

So… Is this good news or bad news? Like much else in this increasingly future-dominated age, the consequences of this scenario remain morally ambiguous. It could be both bad and good news. But the answer to this question is independent of the premises – that is, two people can agree on the viability of the premises and reasoning of the scenario, while drawing opposite conclusions as to whether it is good or bad news.

People who subscribe to the "Friendly AI" camp of AI-related existential risk will be at once hopeful and dismayed. While the scenario might increase their ability to create their AGI (or more technically their Coherent-Extrapolated-Volition Engine [8]), thus decreasing the chances of an "unfriendly" AI being created in the interim, they will also be dismayed that it may involve (though not necessitate) the creation of a recursively self-modifying intelligence – in this case an upload – prior to the creation of their own AGI, which is the very problem they are trying to mitigate in the first place.

Those who, like me, see a distributed intelligence explosion (in which all intelligences are allowed to recursively self-modify at the same rate – thus preserving "power" equality, or at least mitigating "power" disparity [where power is defined as the capacity to effect change in the world or society] – and in which any intelligence increasing its capability at a faster rate than all others is disallowed) as a better method of mitigating the existential risk entailed by an intelligence explosion will also be dismayed. This scenario would allow one single person to essentially have the power to determine the fate of humanity – due to his massively increased "capability" or "power" – which is the very feature (capability disparity/inequality) that the "distributed intelligence explosion" camp of AI-related existential risk seeks to minimize.

On the other hand, those who see great potential in an intelligence explosion to help mitigate existing problems afflicting humanity – e.g. death, disease, societal instability, etc. – will be hopeful because the scenario could decrease the time it takes to implement an intelligence explosion.

I for one think it highly likely that the advantages proffered by accelerating the coming of an intelligence explosion fail to supersede the disadvantages incurred by the increased existential risk it would entail. That is, I think that the increase in existential risk brought about by putting so much "power" or "capability-to-affect-change" in the (hands?) of one intelligence outweighs the decrease in existential risk brought about by the accelerated creation of an Existential-Risk-Mitigating A(G)I.


Thus, the scenario presented above yields some interesting and counter-intuitive conclusions:

  1. How imminent an intelligence explosion is, or how likely it is to occur within a given time-frame, may be more determined by basic processing power than by computational price performance, which is a measure of basic processing power per unit of cost. This is because as soon as we have enough processing power to emulate a human nervous system, provided we have sufficient software to emulate the lower level neural components giving rise to the higher-level human mind, then the increase in the rate of thought and subjective perception of time made available to that emulation could very well allow it to design and implement an AGI before computational price performance increases by a large enough factor to make the processing power necessary for that AGI’s implementation available for a widely-affordable cost. This conclusion is independent of any specific estimates of how long the successful computational emulation of a human nervous system will take to achieve. It relies solely on the premise that the successful computational emulation of the human mind can be achieved faster than the successful implementation of an AGI whose design is not based upon the cognitive architecture of the human nervous system. I have outlined various reasons why we might expect this to be the case. This would be true even if uploading could only be achieved faster than AGI (given an equal amount of funding or “effort”) by a seemingly-negligible amount of time, like one week, due to the massive increase in speed of thought and the rate of subjective perception of time that would then be available to such an upload.
  2. The creation of an upload may be relatively independent of software performance/capability (which is not to say that we don’t need any software, because we do, but rather that we don’t need significant increases in software performance or improvements in methodological implementation – i.e. how we actually design a mind, rather than the substrate it is instantiated by – which we do need in order to implement an AGI and which we would need for WBE, were the system we seek to emulate not already in existence) and may in fact be largely determined by processing power or computational performance/capability alone, whereas AGI is dependent on increases in both computational performance and software performance or fundamental progress in methodological implementation.
    • If this second conclusion is true, it means an upload may be possible quite soon, considering that we have already passed the basic estimates for processing requirements given by Kurzweil, Moravec and Storrs-Hall – provided we can emulate the low-level neural regions of the brain with high predictive accuracy (and provided the claim that instantiating such low-level components will vicariously instantiate the emergent human mind, without needing to really understand how such components functionally converge to do so, proves true) – whereas AGI may still have to wait for fundamental improvements to methodological implementation or "software performance".
    • Thus it may be easier to create an AGI by first creating an upload to accelerate the development of that AGI’s creation, than it would be to work on the development of an AGI directly. Upload+AGI may actually be easier to implement than AGI alone is!
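The "one week" claim in conclusion 1 above can be made concrete with back-of-the-envelope arithmetic. A brief sketch follows; the 1,000x speedup factor is a hypothetical assumption chosen for illustration, not a figure from the text:

```python
def subjective_days(wallclock_days, speedup):
    """Subjective working time available to an emulation running
    `speedup` times faster than a biological brain."""
    return wallclock_days * speedup

# Hypothetical scenario: the upload is achieved one week before a rival
# AGI project would have finished, and runs at 1,000x biological speed.
lead_days = 7
speedup = 1_000

gained = subjective_days(lead_days, speedup)
print(gained)                      # subjective days of working time gained
print(round(gained / 365.25, 1))  # the same lead expressed in subjective years
```

Under these assumptions a one-week wall-clock head start yields 7,000 subjective days – roughly 19 subjective years of working time before the rival project even completes, which is why a "seemingly-negligible" lead is anything but negligible.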



[1] Kurzweil, R. (2005). The Singularity Is Near. Penguin Books.

[2] Moravec, H. (1997). When will computer hardware match the human brain? Journal of Evolution and Technology, 1(1). [Accessed 01 March 2013].

[3] Hall, J. S. (2006). "Runaway Artificial Intelligence?" [Accessed 01 March 2013].

[4] Ford, A. (2011). Yudkowsky vs Hanson on the Intelligence Explosion — Jane Street Debate 2011. [Online video]. August 10, 2011. [Accessed 01 March 2013].

[5] Drexler, K. E. (1989). Molecular Manipulation and Molecular Computation. In NanoCon Northwest Regional Nanotechnology Conference, Seattle, Washington, February 14–17. [Accessed 01 March 2013].

[6] Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap. Technical Report #2008-3. [Accessed 01 March 2013].

[7] Good, I. J. (1965). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers.

[8] Yudkowsky, E. (2004). Coherent Extrapolated Volition. The Singularity Institute.


Transhumanism is all about the creative and ethical use of technology to better the human condition. Futurists, when discussing topics related to transhumanism, tend to look at nano-tech, bio-mechanical augmentation and related technology that, for the most part, is beyond the comprehension of laypeople.

If Transhumanism as a movement is to succeed, we have to explain its goals and benefits to humanity by addressing the common man. After all, transhumanism is not the exclusive domain of, nor restricted to, the literati, academia or the rich. The more the common man realizes that (s)he is indeed already transhuman in a way, the lesser the taboo associated with the movement and the faster the law of accelerating returns will kick in, leading to an eventual Tech Singularity.

Augmented Reality Visors: Enabling Transhumanism.

At the moment, Google Glass is not exactly within reach of the common man, even if he wants to pay for it. It is "invite only", which may be counterproductive to furthering the Transhumanist cause. To be fair, this may be because the device is still in beta testing; once the bugs have been ironed out, the general public will benefit from both a price drop and wider accessibility – because if Google does not do it, China will.

Google Glass: A Transhumanist’s Swiss Knife

Glass is the very definition of an augmented human, at least until hi-tech replaceable eyeballs, or non-obtrusive human augmentation technology, become commonplace. Glass is a good attempt at a practical wearable computing device. While it does lend a cyborg look to its wearer, future iterations will no doubt bring the "Matrix" look back into vogue.


Above: Companies such as Vuzix already have advanced AR-capable visors that are aesthetically pleasing to look at and wear.

So how is Google Glass (and similar AR visors) the Swiss Army Knife of Transhumanists?

Augmented Human Memory:

Wearing Glass allows a Transhuman to offload his/her memory to Glass's storage. Everyday examples follow…

- Park your car in a multi-story parking lot, step back a few steps and wink. Instant photographic memory of the location, via Glass's wink-activated snapshot feature.

- Visited a place once and don't remember it? Call on your "expanded" memory to play back a video recording of the path taken, or display a GPS-powered visual overlay in your field of view.

Previously, this was done via a cellphone. In both cases, one is already Transhuman. This is what the common man needs to be made aware of, and then there will be less of a stigma attached to "the world's most dangerous idea."

Life Saver — “Glass Angel”

Everyone is said to have a "Guardian Angel" watching over them; "Glass Angel" would be an apt name for a collection of potentially life-saving modules that could run on Google Glass.

- CPR Assist: How many people can honestly say they know CPR, or even the Heimlich maneuver? Crucial moments can be saved when access to such knowledge is available… while our hands are freed up to assist the person in distress. In future iterations of Google Glass (Glass v2.0?), if true augmented reality capability is provided, a CGI human skeleton could be overlaid on the live patient, giving visual cues to further assist in such situations.

- Driver Safety: Some states in the US are looking at banning the use of Google Glass while driving. Yet it is interesting to note that Glass could be that Guardian Angel watching over a driver who might nod off at the wheel after a long day at work (DUI, however, is no excuse to rely on Glass). Its various sensors can monitor the tilt of the wearer's head and sound an alarm, or even recognize unusual behavior by analyzing and tracking the live video feed coming in through the camera.

Augmented Intelligence — or — Amplified Intelligence:

How many times a day do we rely on auto-spell, or on Google's auto-correct popping up to say "Did you mean…", to warn us of spelling errors or even context errors? How many times have we blindly trusted Google to go ahead and auto-correct for us? While it can be argued that dependence on technology is actually dulling our brains, it is an inarguable fact that over the coming years grammar correction, multi-lingual communication, and more advanced forms of intelligence augmentation will make technology such as Google Glass and its successors indispensable.

Possibly, the Singularity is not all that far away… If the Singularity is the point when technology overtakes human intelligence, I see it as the point when human intelligence regresses to meet technology mid-way.

On a more serious note: should we be alarmed at our increasing dependence on Augmented Intelligence? Or should we think of it as simply a storage and retrieval system?

If a lecturer on stage, addressing a gathering of intellectuals, uses his eyes to scroll through a list of synonyms in real time on his Google Glass display, so as to use a more succinct word or phrase when making a point, does that make him sound more intelligent? How about if he punctuates the point by calling up the German translation of the phrase?

These are questions that I leave open to you…


Digital Bread Crumbs: Quantum Archeology and Immortality.

Every time we share a photo, a thought… an emotion as a status update: we are converting a biological function into a digital one. We are digitizing our analog stream-of-consciousness.

These Digital Breadcrumbs that we leave behind will be mined by "deep learning" algorithms, feeding the data that will drive Quantum Archeology processes… processes that may one day soon resurrect us: Digital Resurrection. This might sound like a Transhumanist's Hansel and Gretel fairy-tale… but not for long.

How does Google Glass fit in? It's the device that will accelerate the creation of Digital Breadcrumbs. I'd saved this most radical idea for last: Digital Resurrection.

Glass is already generating these BreadCrumbs — transhumanizing the first round of beta testers wearing the device.

The next version of Google Glass – if it features true see-through Augmented Reality support – or indeed a visor from a Google competitor, will allow us to see and interact with these Digital Surrogates of immortal beings. This is described, with plausible hard science to back it up, in Chapter 6 of Memories with Maya — The Dirrogate.

I’d like to end this essay by opening it up to wiki-like input from you. What ideas can you come up with to make Google Glass a Swiss Army Knife for Transhumanists?

(This article was originally posted on the Science behind the story section on


Of the two images above, as a typical science fiction reader, which would you gravitate towards? In designing the cover for my book, I ran about 80 iterations of 14 unique designs through a group of beta readers, and the majority chose the one with the green tint (design credit: Dmggzz).

No one could come up with a satisfying reason for why they preferred it over the other, except that it "looked more sci-fi." I settled for the design on the right, though it was a very hard decision to make: I was throwing away one of the biggest draws to a book – an inviting Dystopian book cover.

As an author (and not a scientist), I've noticed that sci-fi readers seem to want dystopian fiction – exclusively. A quick glance at reader preferences in sci-fi on sites such as GoodReads shows this. Yet, from seeing vampire-themed fiction rule the bestseller lists, and from box-office blockbusters, we can assume the common man and woman is also intrigued by Longevity and Immortality.

Why is it so hard for sci-fi fans to look to the "brighter side" of science? Look at the latest Star Trek, for instance… Dystopia. Not the feel-good, curiosity-nurturing theme of Roddenberry. This is noted in a post by Gray Scott on the website ImmortalLife.

I guess my question is: are there any readers or Futurology enthusiasts who crave a Utopian future in their fiction and real life, or are we descending a spiral staircase (no pun intended) into eventual Dystopia? In ‘The Dirrogate — Memories with Maya’, I’ve tried to (subtly) infuse the philosophy of transhumanism – technology for the betterment of humans.

At Lifeboat, the goal is ‘encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies.’ We need to reach out to the influencers of lay people – the authors, the film-makers, those who have the power to evangelize the ethos of Transhumanism and the Singularity – to paint the truth: the advancement of science and technology is for the betterment of the human race.

It would be naive to think that technology will not be abused, and a Dystopian world is indeed a scary and very real threat, but my belief is this: we should guide (influence?) people to harness this "fire" to nurture and defend humanity, via our literature and movies, and cut back on seeding or fueling ideas that might lead to the destruction of our species.

Your thoughts?

Technology is as Human Does

When one of the U.S. Air Force’s top future-strategy guys starts dorking out on how we’ve gotta at least begin considering what to do when a progressively decaying yet apocalyptically belligerent sun begins BBQing the earth, attention is paid. See, none of the proposed solutions involve marinade or species-level acquiescence; they involve practical discussion of the necessity for super awesome technology on par with a Kardashev Type II civilization (one that’s harnessed the energy of an entire solar system).

Because Not if, but WHEN the Earth Dies, What’s Next for Us?
Head over to Kurzweil AI and have a read of Lt. Col. Peter Garretson’s guest piece. There’s perpetuation of the species stuff, singularity stuff, transhumanism stuff, space stuff, Mind Children stuff, and plenty else to occupy those of us with borderline pathological tech obsessions.


One more step has been taken toward making whole body cryopreservation a practical reality. An understanding of the properties of water allows the temperature of the human body to be lowered without damaging cell structures.

Just as the microchip revolution was unforeseen, the societal effects of suspending death have been overlooked completely.

The first successful procedure to freeze a human being and then revive that person without damage at a later date will be the most important single event in human history. When that person is revived he or she will awaken to a completely different world.

It will be a mad rush to build storage facilities for the critically ill so their lives can be saved. The very old and those in the terminal stages of disease will be rescued from imminent death. Vast resources will be turned toward the life sciences as the race to repair the effects of old age and cure disease begins. Hundreds of millions may eventually be awakened once aging is reversed. Life will become far more valuable overnight and activities such as automobile and air travel will be viewed in a new light. War will end because no one will desire to hasten the death of another human being.

It will not be immortality, just parole from the death row we all share. Get ready.

GadgetBridge is currently just a concept. It might start its life as a discussion forum, later turn into a network or an organisation, and hopefully inspire a range of similar activities.

We will soon be able to use technology to make ourselves more intelligent, feel happier or change what motivates us. When the use of such technologies is banned, the nations or individuals who manage to cheat will soon lord it over their more obedient but unfortunately much dimmer fellows. When these technologies are made freely available, a few terrorists and psychopaths will use them to cause major disasters. Societies will have to find ways to spread these mind enhancement treatments quickly among the majority of their citizens, while keeping them from the few who are likely to cause harm. After a few enhancement cycles, the most capable members of such societies will all be “trustworthy” and use their skills to stabilise the system (see “All In The Mind”).

But how can we manage the transition period, the time in which these technologies are powerful enough to be abused but no social structures are yet in place to handle them? It might help to use these technologies for entertainment purposes, so that many people learn about their risks and societies can adapt (see “Should we build a trustworthiness tester for fun”). But ideally, a large, critical and well-connected group of technology users should be part of the development from the start and remain involved in every step.

To do that, these users would have to spend large amounts of money and dedicate considerable manpower. Fortunately, the basic spending and working patterns are in place: People already use a considerable part of their income to buy consumer devices such as mobile phones, tablet computers and PCs and increasingly also accessories such as blood glucose meters, EEG recorders and many others; they also spend a considerable part of their time to get familiar with these devices. Manufacturers and software developers are keen to turn any promising technology into a product and over time this will surely include most mind measuring and mind enhancement technologies. But for some critical technologies this time might be too long. GadgetBridge is there to shorten it as follows:

- GadgetBridge spreads its philosophy — that mind-enhancing technologies are only dangerous when they are allowed to develop in isolation — that spreading these technologies makes a freer world more likely — and that playing with innovative consumer gadgets is therefore not just fun but also serves a good cause.

- Contributors make suggestions for new consumer devices based on the latest brain research and their personal experiences. Many people have innovative ideas but few are in a position to exploit them. Contributors would rather donate their ideas than see them wither away or be claimed by somebody else.

- All ideas are immediately published and offered free of charge to anyone who wants to use them. Companies select and implement the best options. Users buy their products and gain hands-on experience with the latest mind measurement and mind enhancement technologies. When risks become obvious, concerned users and governments look for ways to cope with them before they get out of hand.

- Once GadgetBridge produces results, it might attract funding from the companies that have benefited or hope to benefit from its services. GadgetBridge might then organise competitions, commission feasibility studies or develop a structure that provides modest rewards to successful contributors.

Your feedback is needed! Please be honest rather than polite: Could GadgetBridge make a difference?