The arXiv blog on MIT Technology Review recently reported a breakthrough, ‘Physicists Discover the Secret of Quantum Remote Control’ [1], which led some to ask whether this could be used as an FTL communication channel. To appreciate the significance of the paper on Quantum Teleportation of Dynamics [2], note that it has already been determined that the correlations between members of an entangled pair propagate *at least* 10,000 times faster than the speed of light [3]. The next big communications breakthrough?
In what could turn out to be a major breakthrough for long-distance communications in space exploration, this would resolve several problems: if a civilization were eventually established on a star system many light years away, for example on one of the recently discovered Goldilocks Zone super-Earths in the Gliese 667C system [4], then communications back to people on Earth might after all be… instantaneous.
The implications do not stop there. As recently reported in The Register [5], researchers at the Hebrew University of Jerusalem have established that quantum entanglement can be used to send data across both TIME AND SPACE [6]. Their recent paper, ‘Entanglement Between Photons that have Never Coexisted’ [7], describes how photon-to-photon entanglement can be used to connect with photons in their past or future, opening up an understanding of how one might engineer technology to communicate instantaneously not just across space, but across space-time.
While many have questioned what benefits have been gained from quantum physics research, and in particular from large research projects such as the LHC, it would seem that quantum entanglement may be one of the big pay-offs. While it has yet to be categorically proven that quantum entanglement can be used as a communication channel, and the majority opinion dismisses the idea, one can expect much activity in quantum entanglement research over the next decade. It may yet spearhead the next technological revolution.
[1] ‘Physicists Discover the Secret of Quantum Remote Control’: www.technologyreview.com/view/516636/physicists-discover-the-secret-of-quantum-remote-control
[2] ‘Quantum Teleportation of Dynamics’: http://arxiv.org/abs/1304.0319
[3] ‘Bounding the speed of “spooky action at a distance”’: http://arxiv.org/abs/1303.0614
[4] http://www.universetoday.com/103131/three-potentially-habitable-planets-found-orbiting-gliese-667c/
[5] The Register: http://www.theregister.co.uk/
[6] http://www.theregister.co.uk/2013/06/03/quantum_boffins_get_spooky_with_time/
[7] ‘Entanglement Between Photons that have Never Coexisted’: http://arxiv.org/abs/1209.4191
Through my writings I have tried to communicate ideas about how unique our intelligence is and how it continues to evolve. Intelligence is the most bizarre of biological adaptations. It appears to be an adaptation of infinite reach: whereas organisms can only be so fast and efficient at running, swimming, flying, or any other evolved skill, the same finite limits do not appear to apply to intelligence.
What does this mean for our lives in the 21st century?
First, we must be prepared to accept that the 21st century will not be anything like the 20th. All too often I encounter people who extrapolate expected change for the 21st century that mirrors the pace of change humanity experienced in the 20th. This will simply not be the case. Just as cosmologists are well aware of the bizarre increased acceleration of the expansion of the universe; so evolutionary theorists are well aware of the increased pace of techno-cultural change. This acceleration shows no signs of slowing down; and few models that incorporate technological evolution predict that it will.
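To make the contrast concrete, here is a toy calculation; the doubling period is an assumption chosen purely for illustration, not a measured figure. If the overall rate of techno-cultural change doubled every decade, a century would deliver 1 + 2 + 4 + … + 512 = 1,023 ‘decade-units’ of change, versus 10 units for a century advancing at a constant 20th-century pace: roughly a hundred-fold difference. The precise numbers are arbitrary; the point is that linear extrapolation from the last century systematically underestimates the next one.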
The result of this increased pace of change will likely not just be quantitative; the change will be qualitative as well. This means that communication and transportation capabilities will not just become faster; they will become meaningfully different, in a way that would be difficult for contemporary humans to understand. And it is within this strange world of qualitative evolutionary change that I will focus on two major processes that most futurists currently predict will occur.
Qualitative evolutionary change produces interesting differences in experience. Oftentimes this change is referred to as a “metasystem transition”. A metasystem transition occurs when a group of subsystems coordinates its goals and intents in order to solve more problems than the constituent systems could solve individually. There have been a few notable metasystem transitions in the history of biological evolution:
Transition from single-celled life to multi-celled life
Transition from decentralized nervous system to centralized brains
Transition from communication to complex language and self-awareness
All of these transitions share the characteristic described above: subsystems coordinating to form a larger system that solves more problems than the constituent systems could individually. All of them increased the rate of change in the universe (i.e., local reductions in entropy). The qualitative nature of the change is important to understand, and may best be explored through a thought experiment.
Imagine you are a single-celled organism on the early Earth. You exist within a planetary network of single-celled life of considerable variety, all adapted to different primordial chemical niches. This has been the nature of the planet for well over 2 billion years. Then, some single-cells start to accumulate in denser and denser agglomerations. One of the cells comes up to you and says:
I think we are merging together. I think the remainder of our days will be spent in some larger system that we can’t really conceive. We will each become adapted for a different specific purpose to aid the new higher collective.
Surely that cell would be seen as deranged. Yet, as the agglomerations of single-cells became denser, formerly autonomous individual cells start to rely more and more on each other to exploit previously unattainable resources. As the process accelerates this integrated network forms something novel, and more complex than had previously ever existed: the first multicellular organisms.
The difference between living as an autonomous single cell and living as part of a multicellular organism is not just quantitative (i.e., being able to exploit more resources) but also qualitative (i.e., a shift from complete autonomy to being one small part of an integrated whole). Such a shift is difficult to conceive of before it actually becomes a new normative layer of complexity within the universe.
Another example of such a transition, one that may require less imagination, is the transition to complex language and self-awareness. Language is certainly the most important phenomenon separating our species from the rest of the biosphere. It allows us to engage in a new evolution, technocultural evolution, which is essentially a new normative layer of complexity in the universe as well. For this transition, the qualitative leap is also important to understand. If you were an australopithecine, your mode of communication would not necessarily be much more efficient than that of any modern-day great ape. Like all other organisms, your mind would be essentially isolated. Your deepest thoughts, feelings, and emotions could not fully be expressed and understood by other minds within your species. Furthermore, an entire range of thought would be completely unimaginable to you. Anything abstract would not be communicable. You could communicate that you were hungry; but you could not communicate what you thought of particular foods, for example. Language changed all that; it unleashed a new thought frontier. Not only was it now possible to exchange ideas at a faster rate, but the range of ideas that could be thought of also increased.
And so after that digression we come to the main point: the metasystem transition of the 21st century. What will it be? There are two dominant, non-mutually exclusive, frameworks for imagining this transition: technological singularity and the global brain.
The technological singularity is essentially a point in time when the actual agent of techno-cultural change itself changes. At the moment the modern human mind is the agent of change. But artificial intelligence is likely to emerge this century, and a truly artificial intelligence may be the last machine we (i.e., biological humans) ever invent.
The second framework is the global brain. The global brain is the idea that a collective planetary intelligence is emerging from the Internet, created by increasingly dense information pathways. This would essentially give the Earth a sensing, centralized nervous system, and its evolution would mirror, in a sense, the evolution of the brain in organisms and the development of higher-level consciousness in modern humans.
In a sense, both processes could be seen as the phenomena that will continue to enable trends identified by global brain theorist Francis Heylighen:
The flows of matter, energy, and information that circulate across the globe become ever larger, faster and broader in reach, thanks to increasingly powerful technologies for transport and communication, which open up ever-larger markets and forums for the exchange of goods and services.
Some view the technological singularity and global brain as competing futurist hypotheses. However, I see them as deeply symbiotic phenomena. If the metaphor of a global brain is apt, at the moment the internet forms a type of primitive and passive intelligence. However, as the internet starts to play an ever greater role in human life, and as all human minds gravitate towards communicating and interacting in this medium, the internet should start to become an intelligent mediator of human interaction. Heylighen explains how this should be achieved:
the intelligent web draws on the experience and knowledge of its users collectively, as externalized in the “trace” of preferences that they leave on the paths they have traveled.
This is essentially how the brain organizes itself: by recognizing the patterns in the activity of individual neurons and connecting them to communicate a “global picture”, or an individual consciousness.
The technological singularity fits naturally within this evolution. The biological human brain can only connect so deeply with the Internet. We must externalize our experience of the Internet through (increasingly small) devices like laptops, smartphones, etc. However, artificial intelligence, and biological intelligence enhanced with nanotechnology, could form a far deeper connection with the Internet. Such a development could, in theory, create an all-encompassing information-processing system. Our minds (largely “artificial”) would form the neurons of the system, but a decentralized order would emerge from their dynamic interactions. This would be quite analogous to the way higher-level complexity has emerged in the past.
So what does this mean for you? Many futurists debate the likely timing of this transition, but current predictions converge on a median of 2040–2050. As we approach this era we should expect many fundamental aspects of our current institutions to change profoundly. Several new ethical issues will also arise, including issues of individual privacy and of government and corporate control; all issues that deserve a separate post.
Fundamentally, this also means that your consciousness and your nature will change considerably throughout this century. The thought may sound bizarre and even frightening, but only if you believe that human intelligence and nature are static and unchanging. The reality is that human intelligence and nature are an ever-evolving process. The only difference in this transition is that you will actually be conscious of the evolution itself.
Consciousness has never experienced a metasystem transition (since the last metasystem transition was towards higher-level consciousness!). So in a sense, a post-human world can still include your consciousness. It will just be a new and different consciousness. I think it is best to think about it as the emergence of something new and more complex, as opposed to the death or end of something. For the first time, evolution will have woken up.
Immortal Life has compiled an edited volume of essays, arguments, and debates about Immortalism, titled Human Destiny is to Eliminate Death, from many esteemed ImmortalLife.info authors (a good number of whom are also Lifeboat Foundation Advisory Board members), such as Martine Rothblatt (Ph.D, MBA, J.D.), Marios Kyriazis (MD, MSc, MIBiol, CBiol), Maria Konovalenko (M.Sc.), Mike Perry (Ph.D), Dick Pelletier, Khannea Suntzu, David Kekich (Founder & CEO of MaxLife Foundation), Hank Pellissier (Founder of Immortal Life), Eric Schulke & Franco Cortese (the previous Managing Directors of Immortal Life), Gennady Stolyarov II, Jason Xu (Director of Longevity Party China and Longevity Party Taiwan), Teresa Belcher, Joern Pallensen and more. The anthology was edited by Immortal Life Founder & Senior Editor, Hank Pellissier.
This one-of-a-kind collection features ten debates that originated at ImmortalLife.info, plus 36 articles, essays and diatribes by many of IL’s contributors, on topics from nutrition to mind-filing, from telomeres to “Deathism”, from libertarian life-extending suggestions to religion’s role in radical life extension to immortalism as a human rights issue.
The book is illustrated with famous paintings on the subject of aging and death, by artists such as Goya, Picasso, Cezanne, Dali, and numerous others.
The book was designed by Wendy Stolyarov, edited by Hank Pellissier, and published by the Center for Transhumanity. This edited volume is the first in a series of quarterly anthologies planned by Immortal Life.
This Immortal Life Anthology includes essays, articles, rants and debates by and between some of the leading voices in Immortalism, Radical Life-Extension, Superlongevity and Anti-Aging Medicine.
A (Partial) List of the Debaters & Essay Contributors:
Martine Rothblatt Ph.D, MBA, J.D. — inventor of satellite radio, founder of Sirius XM and founder of the Terasem Movement, which promotes technological immortality. Dr. Rothblatt is the author of books on gender freedom (Apartheid of Sex, 1995), genomics (Unzipped Genes, 1997) and xenotransplantation (Your Life or Mine, 2003).
Marios Kyriazis MD, MSc, MIBiol, CBiol. founded the British Longevity Society, was the first to address the free-radical theory of aging in a formal mainstream UK medical journal, has authored dozens of books on life-extension and has discussed indefinite longevity in 700 articles, lectures and media appearances globally.
Maria Konovalenko is a molecular biophysicist and the program coordinator for the Science for Life Extension Foundation. She earned her M.Sc. degree in Molecular Biological Physics at the Moscow Institute of Physics and Technology. She is a co-founder of the International Longevity Alliance.
Jason Xu is the director of Longevity Party China and Longevity Party Taiwan, and he was an intern at SENS.
Mike Perry, Ph.D, has worked for Alcor since 1989 as Care Services Manager. He has authored or contributed to the automated cooldown and perfusion modeling programs. He is a regular contributor to Alcor newsletters and has been a member of Alcor since 1984.
David A. Kekich, Founder, President & C.E.O. of the Maximum Life Extension Foundation, works to raise funds for life-extension research. He serves as a Board Member of the American Aging Association, the Life Extension Buyers’ Club and the Alcor Life Extension Foundation Patient Care Trust Fund. He authored Smart, Strong and Sexy at 100?, a how-to book for extreme life extension.
Eric Schulke is the founder of the Movement for Indefinite Life Extension (MILE). He was a Director, Teams Coordinator and ran Marketing & Outreach at the Immortality Institute, now known as Longecity, for 4 years. He is the Co-Managing Director of Immortal Life.
Hank Pellissier is the Founder & Senior Editor of ImmortalLife.info. Previously, he was the founder/director of Transhumanity.net. Before that, he was Managing Director of the Institute for Ethics and Emerging Technologies (ieet.org). He’s written over 120 futurist articles for IEET, Hplusmagazine.com, Transhumanity.net, ImmortalLife.info and the World Future Society.
Franco Cortese is on the Lifeboat Foundation’s Scientific Advisory Board (Life-Extension Sub-Board) and its Futurism Board. He is a Co-Managing Director of Immortal Life and a Staff Editor for Transhumanity. He has written over 40 futurist articles and essays for H+ Magazine, The Institute for Ethics & Emerging Technologies, Immortal Life, Transhumanity and The Rational Argumentator.
Gennady Stolyarov II is a Staff Editor for Transhumanity, a Contributor to Enter Stage Right, Le Quebecois Libre, Rebirth of Reason and the Ludwig von Mises Institute, a Senior Writer for The Liberal Institute, and Editor-in-Chief of The Rational Argumentator.
Brandon King is Co-Director of the United States Longevity Party.
Khannea Suntzu is a transhumanist and virtual activist, and has been covered in articles in Le Monde, CGW and Forbes.
Teresa Belcher is an author, blogger, Buddhist, consultant for anti-aging, life extension, healthy life style and happiness, and owner of Anti-Aging Insights.
Dick Pelletier is a weekly columnist who writes about future science and technologies for numerous publications.
Joern Pallensen has written articles for Transhumanity and the Institute for Ethics and Emerging Technologies.
CONTENTS:
Editor’s Introduction
DEBATES
1. In The Future, With Immortality, Will There Still Be Children?
2. Will Religions promising “Heaven” just Vanish, when Immortality on Earth is attained?
3. In the Future when Humans are Immortal — what will happen to Marriage?
4. Will Immortality Change Prison Sentences? Will Execution and Life-Behind-Bars be… Too Sadistic?
5. Will Government Funding End Death, or will it be Attained by Private Investment?
6. Will “Meatbag” Bodies ever be Immortal? Is “Cyborgization” the only Logical Path?
7. When Immortality is Attained, will People be More — or Less — Interested in Sex?
8. Should Foes of Immortality be Ridiculed as “Deathists” and “Suicidalists”?
9. What’s the Best Strategy to Achieve Indefinite Life Extension?
ESSAYS
1. Maria Konovalenko:
I am an “Aging Fighter” Because Life is the Main Human Right, Demand, and Desire
2. Mike Perry:
Deconstructing Deathism — Answering Objections to Immortality
3. David A. Kekich:
How Old Are You Now?
4. David A. Kekich:
Live Long… and the World Prospers
5. David A. Kekich:
107,000,000,000 — what does this number signify?
6. Franco Cortese:
Religion vs. Radical Longevity: Belief in Heaven is the Biggest Barrier to Eternal Life?!
7. Dick Pelletier:
Stem Cells and Bioprinters Take Aim at Heart Disease, Cancer, Aging
8. Dick Pelletier:
Nanotech to Eliminate Disease, Old Age; Even Poverty
9. Dick Pelletier:
Indefinite Lifespan Possible in 20 Years, Expert Predicts
10. Dick Pelletier:
End of Aging: Life in a World where People no longer Grow Old and Die
11. Eric Schulke:
We Owe Pursuit of Indefinite Life Extension to Our Ancestors
12. Eric Schulke:
Radical Life Extension and the Spirit at the core of a Human Rights Movement
13. Eric Schulke:
MILE: Guide to the Movement for Indefinite Life Extension
14. Gennady Stolyarov II:
The Real War and Why Inter-Human Wars Are a Distraction
15. Gennady Stolyarov II:
The Breakthrough Prize in Life Sciences — turning the tide for life extension
16. Gennady Stolyarov II:
Six Libertarian Reforms to Accelerate Life Extension
17. Hank Pellissier:
Wake Up, Deathists! — You DO Want to LIVE for 10,000 Years!
18. Hank Pellissier:
Top 12 Towns for a Healthy Long Life
19. Hank Pellissier:
This list of 30 Billionaires — Which One Will End Aging and Death?
20. Hank Pellissier:
People Who Don’t Want to Live Forever are Just “Suicidal”
21. Hank Pellissier:
Eluding the Grim Reaper with 23andMe.com
22. Hank Pellissier:
Sixty Years Old — is my future short and messy, or long and glorious?
23. Jason Xu:
The Unstoppable Longevity Virus
24. Joern Pallensen:
Vegetarians Live Longer, Happier Lives
25. Franco Cortese:
Killing Deathist Cliches: Death to “Death-Gives-Meaning-to-Life”
26. Marios Kyriazis:
Environmental Enrichment — Practical Steps Towards Indefinite Lifespans
27. Khannea Suntzu:
Living Forever — the Biggest Fear in the most Audacious Hope
28. Martine Rothblatt:
What is Techno-Immortality?
29. Teresa Belcher:
Top Ten Anti-Aging Supplements
30. Teresa Belcher:
Keep Your Brain Young! — tips on maintaining healthy cognitive function
31. Teresa Belcher:
Anti-Aging Exercise, Diet, and Lifestyle Tips
32. Teresa Belcher:
How Engineered Stem Cells May Enable Youthful Immortality
33. Teresa Belcher:
Nanomedicine — an Introductory Explanation
34. Rich Lee:
“If Eternal Life is a Medical Possibility, I Will Have It Because I Am A Tech Pirate”
“I zoomed in as she approached the steps of the bridge, taking voyeuristic pleasure in seeing her pixelated cleavage fill the screen.
What was it about those electronic dots that had the power to turn people on? There was nothing real in them, but that never stopped millions of people every day, male and female, from deriving sexual gratification by interacting with those points of light.”
Transhumanism is about using technology to improve the human condition. Perhaps the nascent stigma attached to the transhumanist movement in some circles comes from the ethical implications of using high technology, bio-tech and nano-tech to name a few, on people. Yet being transhuman does not necessarily have to be associated with bio-hacking the human body, or entail the donning of cyborg-like prosthetics, although it is hard not to recognize the benefits such human augmentation technology has for persons in need.
Orgasms and Longevity:
Today, how many people, even staunch theists, can claim not to use sexual aids and visual stimulation, in the form of video or interaction via video, to achieve sexual satisfaction? It is hard to deny the therapeutic effect an orgasm has in improving the human condition. In brief, here are some benefits to health and longevity associated with regular sex and orgasms:
When we orgasm we release hormones, including oxytocin and vasopressin. Oxytocin equals relaxation, and when released it can help us calm down and feel euphoric.
People who have more sex add years to their lifespan; Dr. Oz touts a guideline of 200 orgasms a year. [1]
While orgasms usually occur as a result of physical sexual activity, there is no conclusive study proving that beneficial orgasms are produced only when the sexual activity involves two humans. Erotica, in the form of literature and later moving images, has been used to stimulate the mind into inducing an orgasm for many centuries in the absence of a human partner. As technology is the key enabler in stimulating the mind, what might the sexual choices (preferences?) of the human race, the Transhuman, be going forward?
(Gray Scott speaking on Sexbots at 1:19 minutes into the video)
SexBots and Digital Surrogates [Dirrogates]
Sexbots, or sex robots, can come in two forms: fully digital incarnations with AI, viewed through Augmented Reality visors, or physical robots advanced enough to pass as human surrogates. The porn industry has always been at the forefront of video and interactive innovation, experimenting with means of immersing the audience in the “action”. Gonzo porn [3] is one such technique that started off as a passive linear viewing experience, then progressed to multi-angle DVD interactivity and now to Virtual Reality first-person point-of-view interactivity.
Augmented Reality and Digital Surrogates of porn stars performing with AI built in, will be the next logical step. How could this be accomplished?
Somewhere on hard-drives in Hollywood studios, there are full-body digital models and “performance capture” files of actors and actresses. When these perf-cap files are assigned to a suitable 3D CGI model, an animator can bring to life the Digital Surrogate [Dirrogate] of the original actor. Coupled with realistic skin rendering using Separable Subsurface Scattering (SSSS) rendering techniques [4], for instance, and with AI “behaviour” libraries, these Dirrogates can populate the real world, enter living-rooms and change or uplift the mood of a person, for the better.
(The above video is for illustration purposes of 3D model data-sets and perf-capture)
With 3D printing of human body parts now possible, and blueprints coming online [5] with full mechanical assembly instructions, the other kind of sexbot is possible. It won’t be long before the 3D laser-scanned blueprint of a porn-star sexbot will be available for licensing and home printing, at which point the average person will willingly transition to transhuman status once the ‘buy now’ button has been clicked.
Programmable matter — Claytronics [6] will take this technology to even more sophisticated levels.
Sexbots and Ethics:
If we look at Digital Surrogate sexbot technology, which is a progression of interactive porn, we can see that the technology to create such Dirrogate sexbots exists today, and better iterations will come about in the next couple of years. Augmented Reality hardware, when married to wearable technology such as ‘fundawear’ [7] and a photo-realistic Dirrogate driven by perf-captured libraries of porn stars under software (AI) control, can bring endless sessions of sexual pleasure to males and females.
Things get complicated as technology evolves and, to borrow a term from Kurzweil, exponentially so. Recently the Kinect 2 was announced. This off-the-shelf ‘game controller’ hardware, in the hands of capable hackers, has shown what is possible: it can be used as a full-body performance capture solution, a 3D laser scanner that can build a replica of a room in real time, and more…
This means that during a Dirrogate sexbot session, where a human wears an Augmented Reality visor such as Meta-glass [8], it would be possible to connect via the internet to your partner, wife or husband, have their live perf-capture session recorded by a Kinect 2 controller, and use it to drive the photo-realistic Dirrogate of your favorite porn star.
Would this be the makings of Transhumanist adultery? Some other ethical issues to ponder:
Thou shalt not covet thy neighbor’s wife; but there is no commandment about pirating her perf-capture file.
Will humans, both male and female, prefer sexbots over human partners for sexual fulfillment? Will oxytocin release make humans “feel” for their sexbots?
As AI algorithms get better…bordering on artificial sentience, will sexbots start asking for “Dirrogate Rights”?
These are only some of the points worth considering… and if these seem like plausible concerns, imagine what happens in the case of humanoid-like physical sexbots, as Gray Scott mentions in the video above.
As we evolve into Transhumans, we will find ourselves asking that all important question “What is Real?”
“It will all be down to our perception of reality”. – Memories with Maya
If we approach the subject from a non-theist point of view, what we have is a re-boot: a restore of a previously working “system image”. Can we restore a person to the last known working state prior to system failure?
As our biological (analog) lives get more entwined with the digital world we have created, chances are there might be options worth exploring. It all comes down to “sampling”: taking snapshots of our analog lives and storing them digitally. Today, with reasonable precision, we can sample, store and re-create most of our primary senses digitally. Sight via cameras, sound via microphones, touch via haptics; even scents can be sampled and/or synthesized with remarkable accuracy.
Life as Routines, Sub-routines and Libraries:
In the story “Memories with Maya”, Krish the AI researcher puts forward, in simple language, some of his theories to the main character, Dan:
“Humans are creatures of habit,” he said. “We live our lives following the same routine day after day. We do the things we do with one primary motivation–comfort.” “That’s not entirely true,” I said. “What about random acts. Haven’t you done something crazy or on impulse?” “Even randomness is within a set of parameters; thresholds,” he said.
If we look at it, the average person’s week can be broken down into typical activities per day, with a branch-out for the weekend. The day can be further broken down into time-of-day routines. Essentially, what we have are sub-routines, routines and libraries that are run in an infinite loop, until wear and tear on mechanical parts leads to sector failures. Viruses are also thrown into the mix for good measure.
Remember: we are looking at the typical lives of a good section of society, those who have resigned their minds to accepting life as it comes, satisfied in being able to afford creature comforts every once in a while. We aren’t looking at the outliers: the Einsteins, the Jobses, the Mozarts. This is ironic, in that it would be easier to back up, restore, and resurrect the average person than it would be to do the same for the outliers.
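Purely as an illustration of Krish’s “routines in an infinite loop” metaphor, here is a minimal sketch in Python. The routine names, the wear model and the failure threshold are invented for the example; they are not drawn from the novel or from any real mind-filing software.

```python
# Illustrative sketch only: "life as routines in a loop".
# Routine names and the wear model are invented for this example.

WEEKDAY = ["wake", "commute", "work", "dinner", "browse", "sleep"]
WEEKEND = ["sleep_in", "errands", "socialize", "sleep"]

def run_day(day, wear):
    routine = WEEKEND if day % 7 in (5, 6) else WEEKDAY
    for subroutine in routine:
        print(f"day {day}: running {subroutine}()")
    return wear + 1  # mechanical parts accumulate wear on every cycle

def life(max_wear):
    day, wear = 0, 0
    while wear < max_wear:   # the "infinite" loop actually ends at sector failure
        wear = run_day(day, wear)
        day += 1
    print("sector failure: loop terminated")

life(max_wear=3)  # tiny demo: three days, then "failure"
```

The point of the sketch is simply that very little branching is needed to cover a typical week, which is exactly why Krish, later in the story, says a predictable routine took only minutes to feed in.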
Digital Breadcrumbs — The clues we leave behind.
What exactly do social media sites mean by “What’s on your mind?” Is it an invitation to digitize our emotions, our thoughts, our experiences via words, pictures, sounds and videos? Every minute, gigabytes (a conservative estimate) of analog life are being digitized and uploaded to the metaphoric “Cloud”, a rich mineral resource ripe for data mining by “deep learning” systems. At some point in the near future, would AI, combined with technologies such as Quantum Archeology, Augmented Reality and nano-tech, allow us to run our brains (minds?) on a substrate-independent platform?
If that proposition turns your geek on, here are some ways you can live out a modern-day version of Hansel and Gretel, ensuring you find your way home by leaving as many digital breadcrumbs as you can via:
Mind Files — Terasem and Lifenaut:
What is the LifeNaut Project?
The long-term goal is to test whether given a comprehensive database, saturated with the most relevant aspects of an individual’s personality, future intelligent software will be able to replicate an individual’s consciousness. So, perhaps in the next 20 or 30 years technology will be developed to upload these files, together with futuristic software into a body of some sort – perhaps cellular, perhaps holographic, perhaps robotic. LifeNaut.com is funded by the Terasem Movement Foundation, Inc.
The LifeNaut Project is organized as a research experiment designed to test these hypotheses:
(1) a conscious analog of a person may be created by combining sufficiently detailed data about the person (“mindfile & biofile”) using future consciousness software (“mindware”), and
(2) such a conscious analog may be downloaded into a biological or nanotechnological body to provide life experiences comparable to those of a typically birthed human.
Read about Voice Banking and Speech Reconstruction, and how a natural human voice can be preserved and re-constructed. Voice banking might help even in cases where there is no BSOD scenario involved.
Roger Ebert, the noted film critic, got his “natural” voice back using such technology.
Without us even knowing it, we are Transhumans at heart. Owners of the Xbox gaming console and the Kinect have at their disposal hardware that, until just a couple of years ago, was only within reach of large corporations and Hollywood studios. Motion capture, laser scanning, full-body 3D models and performance capture were not accessible to lay-people.
Today, this technology can contribute toward backup and digital resurrection. A performance capture session can digitally encode the essence of a person’s gait, the way they walk, pout, and express themselves: a person’s unique Digital Signature. The next video shows this.
“It was easy to create a frame for him, Dan,” he said. “In the time that the cancer was eating away at him, the day’s routine became more predictable.
At first he would still go to work, then come home and spend time with us. Then he couldn’t go anymore and he was at home all day.
I knew his routine so well it took me 15 minutes to feed it in. There was no need for any random branches.”
A performance capture file could also be stored as part of a MindFile. LifeNaut and other cryonics service providers could benefit from such invaluable data when re-booting a person.
“And sometimes when we touch”:
Perhaps one of the most difficult of our senses to recreate is that of touch. Science is already making giant strides in this area, and, looking at it from a more human perspective, touch is one of the more direct and cherished sensations that define humanity. Touch can convey emotion.
…That’s the point of this kind of technology – giving people their humanity back. You could argue that a person is no less of a human after losing a limb, but those who suffer through it would likely tell you that there is a feeling of loss. Getting that back may be physically gratifying, but it’s probably even more psychologically gratifying… – Nigel Ackland, on his bebionic arm.
If a person’s unique “touch” signature can be digitized, every nuance can be forever preserved, both for the benefit of the owner of the file and for their loved ones, experiencing and remembering shared intimate moments.
In this essay I argue that technologies and techniques used and developed in the fields of Synthetic Ion Channels and Ion Channel Reconstitution, which have emerged from supramolecular chemistry and bio-organic chemistry over the past four decades, can be applied towards gradual cellular (and particularly neuronal) replacement. This would create a new interdisciplinary field that applies such techniques and technologies towards the goal of the indefinite functional restoration of cellular mechanisms and systems, as opposed to their currently proposed uses of aiding in the elucidation of cellular mechanisms and their underlying principles, and of serving as biosensors.
In earlier essays (see here and here) I identified approaches to the synthesis of non-biological functional equivalents of neuronal components (i.e. ion-channels, ion-pumps and membrane sections) and their sectional integration with the existing biological neuron, a sort of “physical” emulation, if you will. It has only recently come to my attention that there is an existing field emerging from supramolecular and bio-organic chemistry centered around the design, synthesis, and incorporation/integration of both synthetic/artificial ion channels and artificial bilipid membranes (i.e. lipid bilayers). The potential uses for such channels commonly listed in the literature have nothing to do with life-extension, however, and the field has, to my knowledge, yet to envision their use in replacing our existing neuronal components as they degrade (or before they are able to), instead seeing such uses as aiding in the elucidation of cellular operations and mechanisms and as biosensors. I argue here that the very technologies and techniques that constitute the field (Synthetic Ion-Channels & Ion-Channel/Membrane Reconstitution) can be used towards the purpose of indefinite longevity and life-extension through the iterative replacement of cellular constituents (particularly the components comprising our neurons: ion-channels, ion-pumps, sections of bilipid membrane, etc.) so as to negate the molecular degradation they would otherwise eventually have undergone.
While I envisioned an electro-mechanical-systems approach in my earlier essays, the field of Synthetic Ion-Channels has, from its start in the early 1970s, applied a molecular approach to the problem of designing molecular systems that produce certain functions according to their chemical composition or structure. Note that this approach corresponds to (or can be categorized under) the passive-physicalist sub-approach of the physicalist-functionalist approach (the broad approach overlying all varieties of physically-embodied, “prosthetic” neuronal functional replication) identified in an earlier essay.
The field of synthetic ion channels is also referred to as ion-channel reconstitution, which designates “the solubilization of the membrane, the isolation of the channel protein from the other membrane constituents and the reintroduction of that protein into some form of artificial membrane system that facilitates the measurement of channel function,” and more broadly denotes “the [general] study of ion channel function and can be used to describe the incorporation of intact membrane vesicles, including the protein of interest, into artificial membrane systems that allow the properties of the channel to be investigated” [1]. The field has been active since the 1970s, with experimental successes throughout the 1980s, 1990s and 2000s in the incorporation of functioning synthetic ion channels into biological bilipid membranes and into artificial membranes dissimilar in molecular composition and structure to their biological analogues, elucidating the underlying supramolecular interactions, ion selectivity and permeability. The relevant literature suggests that their proposed use has thus far been limited to the elucidation of ion-channel function and operation, the investigation of their functional and biophysical properties, and to a lesser degree the purpose of “in-vitro sensing devices to detect the presence of physiologically-active substances including antiseptics, antibiotics, neurotransmitters, and others” through the “… transduction of bioelectrical and biochemical events into measurable electrical signals” [2].
Thus my proposal of gradually integrating artificial ion-channels and/or artificial membrane sections for the purpose of indefinite longevity (that is, their use in replacing existing biological neurons towards the aim of gradual substrate replacement, or indeed even in the alternative use of constructing artificial neurons that, rather than replacing existing biological neurons, become integrated with existing biological neural networks towards the aim of intelligence amplification and augmentation while assuming functional and experiential continuity with our existing biological nervous system) appears to be novel, while the notion of artificial ion-channels and neuronal membrane systems in general had already been conceived (and successfully created and experimentally verified, though presumably not integrated in-vivo).
The field of Functionally-Restorative Medicine (and the orphan sub-field of whole-brain gradual-substrate-replacement, or “physically-embodied” brain-emulation, if you like) can take advantage of the decades of experimental progress in this field, incorporating both the technological and methodological infrastructures used in and underlying the field of Ion-Channel Reconstitution and Synthetic/Artificial Ion Channels & Membrane-Systems (and the technologies and methodologies underlying their corresponding experimental-verification and incorporation techniques) for the purpose of indefinite functional restoration via the gradual and iterative replacement of neuronal components (including sections of bilipid membrane, ion channels and ion pumps) by MEMS (micro-electro-mechanical systems) or, more likely, NEMS (nano-electro-mechanical systems).
The technological and methodological infrastructure underlying this field can be utilized both for the creation of artificial neurons and for the artificial synthesis of normative biological neurons. Much work in the field required artificially synthesizing cellular components (e.g. bilipid membranes) with structural and functional properties as similar to normative biological cells as possible, so that alternative designs (i.e. those dissimilar to the normal structural and functional modalities of biological cells or cellular components), and how they affect and elucidate cellular properties, could be effectively tested. The iterative replacement of either single neurons, or the sectional replacement of neurons with synthesized cellular components (including sections of the bilipid membrane, voltage-dependent ion-channels, ligand-dependent ion-channels, ion pumps, etc.), is made possible by the large body of work already done in the field. Consequently the technological, methodological and experimental infrastructures developed for the fields of Synthetic Ion-Channels and Ion-Channel/Artificial-Membrane Reconstitution can be utilized for the purpose of a.) iterative replacement and cellular upkeep via biological analogues (or analogues not differing significantly in structure or in functional and operational modality from their normal biological counterparts) and/or b.) iterative replacement with non-biological analogues of alternate structural and/or functional modalities.
Rather than sensing when a given component degrades and then replacing it with an artificially-synthesized biological or non-biological analogue, it appears to be much more efficient to determine the projected time it takes for a given component to degrade or otherwise lose functionality, and simply automate the iterative replacement on that schedule, without providing in-vivo systems for detecting molecular or structural degradation. This would allow us to achieve both experimental and pragmatic success in such cellular prostheses sooner, because it doesn’t rely on the complex technological and methodological infrastructure underlying in-vivo sensing, especially on the scale of single neuronal components like ion-channels, and because it avoids causing operational or functional distortion to the components being sensed.
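As a rough sketch of the scheduling logic proposed above, consider the following Python fragment. The component names, projected lifetimes and the replace() call are placeholders invented for illustration; no in-vivo mechanism for any of this exists yet. The point is only to show why schedule-driven replacement is computationally simpler than sensing-driven replacement:

```python
# Schematic sketch of schedule-driven (rather than sensing-driven) iterative
# replacement. Component names, lifetimes and replace() are placeholders.
import heapq

# Projected functional lifetime of each component class, in arbitrary time units.
PROJECTED_LIFETIME = {"ion_channel": 100, "ion_pump": 150, "membrane_section": 400}

def replace(component):
    print(f"replacing {component} with a synthesized analogue")

def run_replacement_schedule(components, horizon):
    # Priority queue keyed by the next projected replacement time;
    # no degradation sensor is ever consulted.
    queue = [(PROJECTED_LIFETIME[c], c) for c in components]
    heapq.heapify(queue)
    while queue and queue[0][0] <= horizon:
        due, component = heapq.heappop(queue)
        replace(component)
        heapq.heappush(queue, (due + PROJECTED_LIFETIME[component], component))

run_replacement_schedule(["ion_channel", "ion_pump", "membrane_section"], horizon=500)
```

Everything that is difficult in the sensing-based alternative (detecting molecular degradation in-vivo without perturbing the component being measured) disappears from this picture; the trade-off is that components are replaced on a conservative fixed schedule whether or not they have actually degraded.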
A survey of progress in the field [3] lists several broad design motifs. I will first list the design motifs falling within the scope of the survey, and the examples it provides. Selections from both papers are meant to show the depth and breadth of the field, rather than to elucidate the specific chemical or kinetic operations under the purview of each design variety.
For a much more comprehensive, interactive bibliography of papers falling within the field of Synthetic Ion-Channels, or constituting the historical foundations of the field, see Jon Chui’s online bibliography here, which charts the developments in this field up until 2011.
First Survey
Unimolecular ion channels:
Examples include a.) synthetic ion channels with oligocrown ionophores, [5] b.) using α-helical peptide scaffolds and rigid push–pull p-octiphenyl scaffolds for the recognition of polarized membranes, [6] and c.) modified varieties of the β-helical scaffold of gramicidin A [7].
Barrel-stave supramolecules:
Examples of this general class include voltage-gated synthetic ion channels formed by macrocyclic bolaamphiphiles and rigid-rod p-octiphenyl polyols [8].
Macrocyclic, branched and linear non-peptide bolaamphiphiles as staves:
Examples of this sub-class include synthetic ion channels formed by a.) macrocyclic, branched and linear bolaamphiphiles and dimeric steroids, [9] and by b.) non-peptide macrocycles, acyclic analogs and peptide macrocycles [respectively] containing abiotic amino acids [10].
Dimeric steroid staves:
Examples of this sub-class include channels using a polyhydroxylated norcholentriol dimer [11].
p-Oligophenyls as staves in rigid-rod β-barrels:
Examples of this sub-class include “cylindrical self-assembly of rigid-rod β-barrel pores preorganized by the nonplanarity of p-octiphenyl staves in octapeptide-p-octiphenyl monomers” [12].
Synthetic Polymers:
Examples of this sub-class include synthetic ion channels and pores comprised of a.) polyalanine, b.) polyisocyanates, c.) polyacrylates, [13] formed by i.) ionophoric, ii.) ‘smart’ and iii.) cationic polymers [14]; d.) surface-attached poly(vinyl-n-alkylpyridinium) [15]; e.) cationic oligo-polymers [16] and f.) poly(m-phenylene ethylenes) [17].
Helical β-peptides (used as staves in the barrel-stave method):
Examples of this class include a.) cationic β-peptides with antibiotic activity, presumably acting as amphiphilic helices that form micellar pores in anionic bilayer membranes [18].
Monomeric steroids:
Examples of this sub-class include synthetic carriers, channels and pores formed by monomeric steroids [19], synthetic cationic steroid antibiotics [that] may act by forming micellar pores in anionic membranes [20], neutral steroids as anion carriers [21] and supramolecular ion channels [22].
Complex minimalist systems:
Examples of this sub-class falling within the scope of this survey include ‘minimalist’ amphiphiles as synthetic ion channels and pores [23], membrane-active ‘smart’ double-chain amphiphiles, expected to form ‘micellar pores’ or self-assemble into ion channels in response to acid or light [24], and double-chain amphiphiles that may form ‘micellar pores’ at the boundary between photopolymerized and host bilayer domains and representative peptide conjugates that may self assemble into supramolecular pores or exhibit antibiotic activity [25].
Non-peptide macrocycles as hoops:
Examples of this sub-class falling within the scope of this survey include synthetic ion channels formed by non-peptide macrocycles, acyclic analogs [26] and peptide macrocycles containing abiotic amino acids [27].
Peptide macrocycles as hoops and staves:
Examples of this sub-class include a.) synthetic ion channels formed by self-assembly of macrocyclic peptides into genuine barrel-hoop motifs that mimic the β-helix of gramicidin A with cyclic β-sheets. The macrocycles are designed to bind on top of channels, and cationic antibiotics (and several analogs) are proposed to form micellar pores in anionic membranes [28]; b.) synthetic carriers, antibiotics (and analogs) and pores (and analogs) formed by macrocyclic peptides with non-natural subunits. [Certain] macrocycles may act as β-sheets, possibly as staves of β-barrel-like pores [29]; c.) bioengineered pores as sensors. Covalent capturing and fragmentations [have been] observed on the single-molecule level within an engineered α-hemolysin pore containing an internal reactive thiol [30].
Summary
Thus, even without knowledge of supramolecular or organic chemistry, one can see that a variety of alternative approaches to the creation of synthetic ion channels, and several sub-approaches within each larger ‘design motif’ or broad approach, not only exist but have been experimentally verified, varied and refined.
Second Survey
The following selections [31] illustrate the chemical, structural and functional varieties of synthetic ion channels, categorized according to whether they are cation-conducting or anion-conducting, respectively. These examples are used to further emphasize the extent of the field, and the number of alternative approaches to synthetic ion-channel design, implementation, integration and experimental verification already in existence. Permission to use all of the following selections and figures was obtained from the author of the source.
There are six classical design motifs for synthetic ion-channels, categorized by structure, that are identified within the paper:
“The first non-peptidic artificial ion channel was reported by Kobuke et al. in 1992” [33].
“The channel contained “an amphiphilic ion pair consisting of oligoether-carboxylates and mono- (or di-) octadecylammoniumcations. The carboxylates formed the channel core and the cations formed the hydrophobic outer wall, which was embedded in the bilipid membrane with a channel length of about 24 to 30 Å. The resultant ion channel, formed from molecular self-assembly, is cation selective and voltage-dependent” [34].
“Later, Kobuke et al. synthesized another channel comprising a resorcinol-based cyclic tetramer as the building block. The resorcin-[4]-arene monomer consisted of four long alkyl chains which aggregated to form a dimeric supramolecular structure resembling that of Gramicidin A” [35]. “Gokel et al. had studied [a set of] simple yet fully functional ion channels known as “hydraphiles” [39].
“An example (channel 3) is shown in Figure 1.6, consisting of diaza-18-crown-6 crown ether groups and alkyl chain as side arms and spacers. Channel 3 is capable of transporting protons across the bilayer membrane” [40].
“A covalently bonded macrotetracycle 4 (Figure 1.8) had been shown to be about three times more active than Gokel’s ‘hydraphile’ channel, and its amide-containing analogue also showed enhanced activity” [44].
“Inorganic derivatives using crown ethers have also been synthesized. Hall et al. synthesized an ion channel consisting of a ferrocene and 4 diaza-18-crown-6 linked by 2 dodecyl chains (Figure 1.9). The ion channel was redox-active, as oxidation of the ferrocene caused the compound to switch to an inactive form” [45].
B STAVES:
“These are more difficult to synthesize [in comparison to unimolecular varieties] because the channel formation usually involves self-assembly via non-covalent interactions” [47]. “A cyclic peptide composed of an even number of alternating D- and L-amino acids (Figure 1.10) was suggested to form a barrel-hoop structure through backbone-backbone hydrogen bonds by De Santis” [49].
“A tubular nanotube synthesized by Ghadiri et al. consisting of cyclic D and L peptide subunits form a flat, ring-shaped conformation that stack through an extensive anti-parallel β-sheet-like hydrogen bonding interaction (Figure 1.11)” [51].
“Experimental results have shown that the channel can transport sodium and potassium ions. The channel can also be constructed by the use of direct covalent bonding between the sheets so as to increase the thermodynamic and kinetic stability” [52].
“By attaching peptides to the octiphenyl scaffold, a β-barrel can be formed via self-assembly through the formation of β-sheet structures between the peptide chains (Figure 1.13)” [53].
“The same scaffold was used by Matile et al. to mimic the structure of the macrolide antibiotic amphotericin B. The channel synthesized was shown to transport cations across the membrane” [54].
“Attaching the electron-poor naphthalenediimide (NDIs) to the same octiphenyl scaffold led to a hoop-stave mismatch during self-assembly that results in a twisted and closed channel conformation (Figure 1.14). Adding the complementary dialkoxynaphthalene (DAN) donor led to cooperative interactions between NDI and DAN that favor the formation of a barrel-stave ion channel” [57].
MICELLAR
“These aggregate channels are formed by amphotericin involving both sterols and antibiotics arranged in two half-channel sections within the membrane” [58].
“An active form of the compound is the bolaamphiphiles (two-headed amphiphiles). (Figure 1.15) shows an example that forms an active channel structure through dimerization or trimerization within the bilayer membrane. Electrochemical studies had shown that the monomer is inactive and the active form involves dimer or larger aggregates” [60].
ANION CONDUCTING CHANNELS:
“A highly active, anion-selective, monomeric cyclodextrin-based ion channel was designed by Madhavan et al. (Figure 1.16). Oligoether chains were attached to the primary face of the β-cyclodextrin head group via amide bonds. The hydrophobic oligoether chains were chosen because they are long enough to span the entire lipid bilayer. The channel was able to select “anions over cations” and “discriminate among halide anions in the order I- > Br- > Cl- (following the Hofmeister series)” [61].
“The anion selectivity occurred via the ring of ammonium cations being positioned just beside the cyclodextrin head group, which helped to facilitate anion selectivity. Iodide ions were transported the fastest because the activation barrier to enter the hydrophobic channel core is lower for I- compared to either Br- or Cl-” [62]. “A more specific artificial anion-selective ion channel was the chloride-selective ion channel synthesized by Gokel. The building block involved a heptapeptide with proline incorporated (Figure 1.17)” [63].
Cellular Prosthesis: Inklings of a New Interdisciplinary Approach
The paper cites “nanoreactors for catalysis and chemical or biological sensors” and “interdisciplinary uses as nano-filtration membranes, drug or gene delivery vehicles/transporters as well as channel-based antibiotics that may kill bacterial cells preferentially over mammalian cells” as some of the main applications of synthetic ion-channels [65], other than their normative use in elucidating cellular function and operation.
However, I argue that a whole interdisciplinary field, a heretofore-unrecognized new approach or sub-field of Functionally-Restorative Medicine, is possible by taking the technologies and techniques involved in constructing, integrating, and experimentally verifying either a.) non-biological analogues of ion-channels and ion-pumps (and thus trans-membrane proteins in general, also sometimes referred to as transport proteins or integral membrane proteins) and membranes (which include normative bilipid membranes, non-lipid membranes and chemically-augmented bilipid membranes), or b.) artificially synthesized biological analogues of ion-channels, ion-pumps and membranes, which are structurally and chemically equivalent to naturally-occurring biological components but are synthesized artificially, and applying such technologies and techniques toward the gradual replacement of the existing biological neurons constituting our nervous systems, or at least those neuron populations that comprise the neo- and prefrontal cortex, thereby achieving indefinite longevity through iterative procedures of gradual replacement. There is still work to be done in determining the comparative advantages and disadvantages of various structural and functional (i.e. design) motifs, and in the logistics of implementing the iterative replacement or reconstitution of ion-channels, ion-pumps and sections of neuronal membrane in-vivo.
The conceptual schemes outlined in Concepts for Functional Replication of Biological Neurons [66], Gradual Neuron Replacement for the Preservation of Subjective-Continuity [67] and Wireless Synapses, Artificial Plasticity, and Neuromodulation [68] would constitute variations on the basic approach underlying this proposed, embryonic interdisciplinary field. Certain approaches within the field of nanomedicine itself, particularly those that constitute the functional emulation of existing cell types, such as (but not limited to) Robert Freitas’s conceptual design for the functional emulation of the red blood cell (a.k.a. erythrocytes, haematids), the Respirocyte [69], should also be seen as falling under the purview of this new approach, although not all approaches to nanomedicine (diagnostics, drug-delivery and neuroelectronic interfacing) constitute the physical (i.e. electromechanical, kinetic and/or molecular, physically-embodied) and functional emulation of biological cells.
The field of functionally-restorative medicine in general (and of nanomedicine in particular) and the fields of supramolecular and organic chemistry converge here, where the technological, methodological, and experimental infrastructures developed in the field of Synthetic Ion-Channels and Ion-Channel Reconstitution can be employed to develop a new interdisciplinary approach that applies the logic of prosthesis to the cellular and cellular-component (i.e. sub-cellular) scale; same tools, new use. These techniques could be used to iteratively replace the components of our neurons as they degrade, or to replace them with more robust systems that are less susceptible to molecular degradation. Instead of repairing the cellular DNA, RNA and protein transcription and synthesis machinery, we bypass it completely by configuring and integrating the neuronal components (ion-channels, ion-pumps and sections of bilipid membrane) directly.
Thus I suggest that theoreticians of nanomedicine look to the large quantity of literature already developed in the emerging fields of synthetic ion-channels and membrane-reconstitution, towards the objective of adapting and applying existing technologies and methodologies to the new purpose of iterative maintenance, upkeep and/or replacement of cellular (and particularly neuronal) constituents with either non-biological analogues or artificially-synthesized-but-chemically/structurally-equivalent biological analogues.
This new sub-field of Synthetic Biology needs a name to differentiate it from the other approaches to Functionally-Restorative Medicine. I suggest the designation ‘cellular prosthesis’.
References:
[1] Williams (1994). An introduction to the methods available for ion channel reconstitution. In D.C. Ogden (Ed.), Microelectrode Techniques: The Plymouth Workshop Edition. Cambridge: Company of Biologists.
[2] Tomich, J., Montal, M. (1996). U.S. Patent No. 5,16,890. Washington, DC: U.S. Patent and Trademark Office.
[3] Matile, S., Som, A., & Sorde, N. (2004). Recent synthetic ion channels and pores. Tetrahedron, 60(31), 6405-6435. ISSN 0040-4020, 10.1016/j.tet.2004.05.052. Access: http://www.sciencedirect.com/science/article/pii/S0040402004007690
[4] Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.
[5] Ibid., p. 6411.
[6] Ibid., p. 6416.
[7] Ibid., p. 6413.
[8] Ibid., p. 6412.
[9] Ibid., p. 6414.
[10] Ibid., p. 6425.
[11] Ibid., p. 6427.
[12] Ibid., p. 6416.
[13] Ibid., p. 6419.
[14] Ibid., p. 6419.
[15] Ibid., p. 6419.
[16] Ibid., p. 6419.
[17] Ibid., p. 6419.
[18] Ibid., p. 6421.
[19] Ibid., p. 6422.
[20] Ibid., p. 6422.
[21] Ibid., p. 6422.
[22] Ibid., p. 6422.
[23] Ibid., p. 6423.
[24] Ibid., p. 6423.
[25] Ibid., p. 6423.
[26] Ibid., p. 6426.
[27] Ibid., p. 6426.
[28] Ibid., p. 6427.
[29] Ibid., p. 6427.
[30] Ibid., p. 6427.
[31] Xiao, F. (2009). Synthesis and structural investigations of pyridine-based aromatic foldamers.
[69] Freitas Jr., R., (1998). “Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell”. Artificial Cells, Blood Substitutes, and Immobil. Biotech. (26): 411–430. Access: http://www.ncbi.nlm.nih.gov/pubmed/9663339
‘Let there be light,’ said the CGI-God, and there was light… and God Rays.
We were out in the desert, barren land, and our wish was that it be transformed into a green oasis, a tropical paradise.
And so our demigods went to work in their digital sandboxes. Then one of the CGI-Gods populated the land with Dirrogates – digital people in her own likeness.
Welcome to the world… created in Real-time.
A whole generation of people is growing up in such virtual worlds, accustomed to travelling across miles and miles of photo-realistic terrain on their gaming rigs: an entire generation of Transhumans evolving, perhaps without even knowing it. With each passing year, hardware and software under the command of human intelligence get ever closer to simulating the real world, down to physics, caustics and other phenomena exclusive to planet Earth. How is all this voodoo being done?
Enter – the Game Engine.
All output in the video above is rendered in real-time on a single modern gaming PC. That's right: in case you missed it, all of the visuals were generated in real-time by a single PC that can sit on a desk. The engine behind it is the CryEngine 3. A far more customized and amped-up version of this technology, called Cinebox, is a dedicated offering aimed at cinematography, with tools and functions that filmmakers are familiar with. It is these advances in technology, these tools that filmmakers will use, that will acclimatize us to the virtual worlds they build with human performance capture and digital assets such as laser-scanned point clouds of real-world architecture. This is the technology that will play its part and segue us into Transhumanism, rather than a radical crusade to "convert" humanity to the movement.
Mind Uploads need a World to roam in:
Laser-scanned buildings and even whole neighborhood blocks are now commonplace in large-budget Hollywood productions. A detailed point cloud needs massive compute power to render. High-end game engines, when daisy-chained, can render and simulate these large neighborhoods with real-time animated atmosphere and populate the land with photo-realistic flora and fauna. Lest we forget: in stereoscopic 3D, for full immersion of our visual cortex.
Real World Synced Weather:
Game Engines have powerful and advanced TOD (time of day) editors. Now imagine if a TOD editor module and a weather system could pull data such as wind direction, temperature and weather conditions from real-world sensors, or a real-time data source.
If this could be done, then the augmented world running on the game engine could have details such as leaves blowing in the correct direction. See the video above at around the 0:42 mark for a feel of what I'm aiming for.
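As a rough illustration, here is a minimal sketch of that idea in Python: real-world weather readings pulled from some sensor feed and mapped onto a game engine's time-of-day and weather parameters. The data source and the engine-side parameter names are assumptions made for illustration, not any real engine's API.

    # Sketch: map real-world weather readings onto hypothetical TOD/weather
    # parameters of a game engine. All parameter names are illustrative.
    import datetime

    def fetch_weather_readings() -> dict:
        """Stand-in for a call to real-world sensors or a live weather data service."""
        return {"wind_dir_deg": 225.0, "wind_speed_ms": 6.2,
                "temperature_c": 31.5, "cloud_cover": 0.15}

    def to_engine_params(readings: dict, now: datetime.datetime) -> dict:
        """Translate sensor readings into engine-side settings."""
        return {
            "tod_hours": now.hour + now.minute / 60.0,   # sync the virtual sun to the real clock
            "wind_direction": readings["wind_dir_deg"],  # so leaves blow in the correct direction
            "wind_strength": readings["wind_speed_ms"],
            "sky_cloudiness": readings["cloud_cover"],
            "heat_haze": max(0.0, (readings["temperature_c"] - 25.0) / 20.0),
        }

    print(to_engine_params(fetch_weather_readings(), datetime.datetime.now()))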
Also: the stars would all align, with no discrepancies between the virtual night sky and the real one, though there would be nothing stopping "God" from introducing a blue moon into the sky.
At around the 0:20 mark, the video above shows one of the "Demi-Gods" at work: populating Virtual Earth with exotic trees and forests, mind-candy to keep an uploaded mind from homesickness. As Transhumans, whether as full mind uploads, as augmented humans with bio-mechanical enhancements or, indeed, even as naturals, it is expected that we will augment the real world with our dreams of a tropical paradise. Heaven can indeed be a place on Earth.
Epilogue:
We were tired of our mundane lives in an un-augmented biosphere. As Transhumans, some of us booted up our mind-uploads while yet others ventured out into the desert of the real world in temperature-regulated nano-clothing, experiencing a tropical paradise… even as the "naturals" would deny its very existence.
Recently, scientists have suggested we may really be living in a simulation after all. Perhaps the Mayans stopped counting time not because they predicted that the Winter Solstice of 2012 would be the end of the world, but because they saw 2013 heralding the dawn of a new era: an era in which the building blocks come into place for a journey toward eventual 'Singularity'.
Dir·ro·gate: a portmanteau of Digital + Surrogate, borrowed from the novel "Memories with Maya". Author's note: all images, videos and products mentioned are copyright of their respective owners and brands, and there is no implied connection between the brands and Transhumanism.
Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial-General-Intelligence or Whole-Brain-Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure of an impending intelligence explosion (e.g. Kurzweil's measure is when enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won't be much of an explosion unless it is available to the average person. I present a scenario below which may indicate that the imminence of a coming intelligence explosion is more impacted by basic processing speed, or instructions per second (IPS), regardless of cost or resource requirements per unit of computation, than by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of "effort" or funding) to implement WBE+AGI than to implement AGI alone – or rather, that using WBE as a mediator of an increase in the rate of progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than working on AGI directly.
Loaded Uploads:
Petascale supercomputers in existence today exceed the processing-power requirements estimated by Kurzweil, Moravec, and Storrs-Hall [3]. If a wealthy individual were uploaded onto a petascale supercomputer today, they would have the same computational resources that the average person is projected to have in 2019 according to Kurzweil's figures, when computational processing power equal to that of the human brain, which he estimates at 20 quadrillion calculations per second, will be available for $1,000. While we may not yet have the necessary software to emulate a full human nervous system, the bottleneck is progress in the field of neurobiology rather than software performance in general. What is important is that the raw processing power estimated by some has already been surpassed, and the possibility of creating an upload may not have to wait for drastic increases in computational price performance.
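To make the comparison concrete, the short sketch below divides an assumed early-2010s petascale machine's throughput by those published estimates. The roughly 30-petaflop machine figure is my own illustrative assumption, not a number taken from this essay or its references.

    # Rough comparison (illustrative figures only) of published estimates for
    # brain-scale processing requirements against an assumed ~30-petaflop
    # supercomputer of the early 2010s.
    PETA = 1e15

    estimates_ops_per_sec = {
        "Kurzweil (~20 quadrillion calculations/s)": 2e16,
        "Moravec (~100 million MIPS)": 1e14,
    }
    supercomputer_ops_per_sec = 30 * PETA  # assumed machine throughput

    for name, required in estimates_ops_per_sec.items():
        ratio = supercomputer_ops_per_sec / required
        print(f"{name}: machine supplies {ratio:.1f}x the estimated requirement")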
The rate of signal transmission in electronic computers has been estimated to be roughly one million times faster than the rate of signal transmission between neurons, which is limited to the speed of passive chemical diffusion. Since the rate of signal transmission equates with the subjective perception of time, an upload would presumably experience the passing of time one million times faster than biological humans. If Yudkowsky's observation [4] that this would be the equivalent of experiencing all of history since Socrates every 18 "real-time" hours is correct, then such an emulation would experience roughly 250 subjective years for every hour, or about 4 years a minute. A day would be equal to roughly 6,000 years, a week to roughly 42,000 years, and a month to roughly 180,000 years.
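The arithmetic behind these figures is simple enough to tabulate. The sketch below converts real-time durations into subjective time using the essay's figure of roughly 250 subjective years per real-time hour; as an aside, the one-million-fold signal-speed ratio taken on its own would give the somewhat lower figure of roughly 114 subjective years per hour.

    # Convert wall-clock durations into the subjective time experienced by an
    # emulation, assuming (per the figure used in this essay) ~250 subjective
    # years per real-time hour.
    HOURS_PER_YEAR = 24 * 365.25

    def subjective_years(real_hours: float, years_per_hour: float = 250.0) -> float:
        """Subjective years experienced during `real_hours` of wall-clock time."""
        return real_hours * years_per_hour

    for label, hours in [("minute", 1 / 60), ("hour", 1), ("day", 24),
                         ("week", 24 * 7), ("month", 24 * 30)]:
        print(f"1 real {label:6s} = {subjective_years(hours):>10,.0f} subjective years")

    # With only the ~10^6 signal-transmission speed-up:
    print(f"10^6 speed-up: 1 real hour = {1e6 / HOURS_PER_YEAR:,.0f} subjective years")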
Moreover, these figures use the signal transmission speed of current, electronic paradigms of computation only, and thus the projected increase in signal-transmission speed brought about through the use of alternative computational paradigms, such as 3-dimensional and/or molecular circuitry or Drexler’s nanoscale rod-logic [5], can only be expected to increase such estimates of “subjective speed-up”.
The claim that the subjective perception of time and the “speed of thought” is a function of the signal-transmission speed of the medium or substrate instantiating such thought or facilitating such perception-of-time follows from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that the mind is instantiated by the physical operations of the brain. Thought and perception of time (or the rate at which anything is perceived really) are experiential modalities that constitute a portion of the brain’s cumulative functional modalities. If the functional modalities of the brain are instantiated by the physical operations of the brain, then it follows that increasing the rate at which such physical operations occur would facilitate a corresponding increase in the rate at which such functional modalities would occur, and thus the rate at which the experiential modalities that form a subset of those functional modalities would likewise occur.
Petascale supercomputers have surpassed the rough estimates made by Kurzweil (20 petaflops, or 20 quadrillion calculations per second), Moravec (100 million MIPS), and others. Most argue that we still need to wait for software improvements to catch up with hardware improvements. Others argue that even if we don't understand how the operation of the brain's individual components (e.g. neurons, neural clusters, etc.) converges to create the emergent phenomenon of mind, or even how such components converge so as to create the basic functional modalities of the brain that have nothing to do with subjective experience, we would still be able to create a viable upload. Anders Sandberg and Nick Bostrom, in their 2008 Whole Brain Emulation Roadmap [6], for instance, have argued that if we understand the operational dynamics of the brain's low-level components, we can computationally emulate those components, and the emergent functional modalities of the brain and the experiential modalities of the mind will emerge therefrom.
Mind Uploading is (Largely) Independent of Software Performance:
Why is this important? Because if we don’t have to understand how the separate functions and operations of the brain’s low-level components converge so as to instantiate the higher-level functions and faculties of brain and mind, then we don’t need to wait for software improvements (or progress in methodological implementation) to catch up with hardware improvements. Note that for the purposes of this essay “software performance” will denote the efficacy of the “methodological implementation” of an AGI or Upload (i.e. designing the mind-in-question, regardless of hardware or “technological implementation” concerns) rather than how optimally software achieves its effect(s) for a given amount of available computational resources.
This means that if the estimates for sufficient processing power to emulate the human brain noted above are correct then a wealthy individual could hypothetically have himself destructively uploaded and run on contemporary petascale computers today, provided that we can simulate the operation of the brain at a small-enough scale (which is easier than simulating components at higher scales; simulating the accurate operation of a single neuron is less complex than simulating the accurate operation of higher-level neural networks or regions). While we may not be able to do so today due to lack of sufficient understanding of the operational dynamics of the brain’s low-level components (and whether the models we currently have are sufficient is an open question), we need wait only for insights from neurobiology, and not for drastic improvements in hardware (if the above estimates for required processing-power are correct), or in software/methodological-implementation.
If emulating the low-level components of the brain (e.g. neurons) will give rise to the emergent mind instantiated thereby, then we don't actually need to know "how to build a mind" – whereas we do in the case of an AGI (which for the purposes of this essay shall denote an AGI not based on the human or mammalian nervous system, even though an upload might qualify as an AGI according to many people's definitions). This follows naturally from the conjunction of the premises that 1. the system we wish to emulate already exists and 2. we can create (i.e. computationally emulate) the functional modalities of the whole system by understanding only the operation of the low-level components' functional modalities.
Thus, I argue that a wealthy upload who did this could conceivably accelerate the coming of an intelligence explosion by such a large degree that it could occur before computational price performance drops to a point where the basic processing power required for such an emulation is available for a widely-affordable price, say for $1,000 as in Kurzweil’s figures.
Such a scenario could make basic processing power, or Instructions-Per-Second, more indicative of an imminent intelligence explosion or hard take-off scenario than computational price performance.
If we can achieve human whole-brain-emulation even one week before we can achieve AGI (an AGI whose cognitive architecture is not based on the biological human nervous system), and this upload is set to work on creating an AGI, then such an upload would have, according to the "subjective-speed-up" factors given above, roughly 42,000 subjective years within which to succeed in designing and implementing an AGI, for every one real-time week that normatively-biological AGI workers have to succeed.
The subjective-perception-of-time speed-up alone would be enough to greatly improve his/her ability to accelerate the coming of an intelligence explosion. Other features, like increased ease of self-modification and the ability to make as many copies of himself as he has processing power to run, only increase his potential to accelerate the coming of an intelligence explosion.
This is not to say that we can run an emulation without any software at all. Of course we need software – but we may not need drastic improvements in software, or a reinventing of the wheel in software design.
So why should we be able to simulate the human brain without understanding its operational dynamics in exhaustive detail? Are there any other processes or systems amenable to this circumstance, or is the brain unique in this regard?
There is a simple reason why this claim seems intuitively doubtful. One would expect that we must understand the underlying principles of a given technology's operation in order to implement and maintain it. This is, after all, the case for all other technologies throughout the history of humanity. But the human brain is categorically different in this regard, because it already exists.
If, for instance, we found a technology and wished to recreate it, we could do so by copying the arrangement of its components. But in order to make any changes to it, or any variations on its basic structure or principles of operation, we would need to know how to build it, maintain it, and predictively model it with a fair amount of accuracy. In order to make any new changes, we need to know how such changes will affect the operation of the other components, and this requires being able to predictively model the system. If we don't understand how changes will impact the rest of the system, then we have no reliable means of implementing any changes.
Thus, if we seek only to copy the brain, and not to modify or augment it in any substantial way, then it is wholly unique in the fact that we don't need to reverse-engineer its higher-level operations in order to instantiate it.
This approach should be considered a category separate from reverse-engineering. It would indeed involve a form of reverse-engineering on the scale we seek to simulate (e.g. neurons or neural clusters), but it lacks many features of reverse-engineering by virtue of the fact that we don’t need to understand its operation on all scales. For instance, knowing the operational dynamics of the atoms composing a larger system (e.g. any mechanical system) wouldn’t necessarily translate into knowledge of the operational dynamics of its higher-scale components. The approach mind-uploading falls under, where reverse-engineering at a small enough scale is sufficient to recreate it, provided that we don’t seek to modify its internal operation in any significant way, I will call Blind Replication.
Blind Replication disallows any sort of significant modification, because if one doesn't understand how processes affect other processes within the system, then one has no way of knowing how modifications will change other processes and thus the emergent function(s) of the system. We wouldn't have a way to translate functional or optimization objectives into changes made to the system that would facilitate them. There are also liability issues, in that one wouldn't know how the system would behave in different circumstances, and would have no guarantee of such systems' safety or of their vicarious consequences. So government couldn't be sure of the reliability of systems made via Blind Replication, and corporations would have no way of optimizing such systems so as to increase a given performance metric in an effort to increase profits, and indeed would be unable to obtain intellectual property rights over a technology whose inner workings or "operational dynamics" they cannot describe.
However, government and private industry wouldn’t be motivated by such factors (that is, ability to optimize certain performance measures, or to ascertain liability) in the first place, if they were to attempt something like this – since they wouldn’t be selling it. The only reason I foresee government or industry being interested in attempting this is if a foreign nation or competitor, respectively, initiated such a project, in which case they might attempt it simply to stay competitive in the case of industry and on equal militaristic defensive/offensive footing in the case of government. But the fact that optimization-of-performance-measures and clear liabilities don’t apply to Blind Replication means that a wealthy individual would be more likely to attempt this, because government and industry have much more to lose in terms of liability, were someone to find out.
Could Upload+AGI be easier to implement than AGI alone?
This means that the creation of an intelligence with a subjective perception of time significantly greater than that of unmodified humans (what might be called Ultra-Fast Intelligence) may be more likely to occur via an upload than via an AGI, because the creation of an AGI is largely determined by increases in both computational processing and software performance/capability, whereas the creation of an upload may be determined by and large by processing power, and thus remain largely independent of the need for significant improvements in software performance or "methodological implementation".
If the premise that such an upload could significantly accelerate a coming intelligence explosion (whether by using his/her comparative advantages to recursively self-modify his/herself, to accelerate innovation and R&D in computational hardware and/or software, or to create a recursively-self-improving AGI) is taken as true, it follows that even the coming of an AGI-mediated intelligence explosion specifically, despite being impacted by software improvements as well as computational processing power, may be more impacted by basic processing power (e.g. IPS) than by computational price performance — and may be more determined by computational processing power than by processing power + software improvements. This is only because uploading is likely to be largely independent of increases in software (i.e. methodological as opposed to technological) performance. Moreover, development in AGI may proceed faster via the vicarious method outlined here – namely having an upload or team of uploads work on the software and/or hardware improvements that AGI relies on – than by directly working on such improvements in “real-time” physicality.
Virtual Advantage:
The increase in subjective perception of time alone (if Yudkowsky’s estimate is correct, a ratio of 250 subjective years for every “real-time” hour) gives him/her a massive advantage. It also would likely allow them to counter-act and negate any attempts made from “real-time” physicality to stop, slow or otherwise deter them.
There is another feature of virtual embodiment that could increase the upload’s ability to accelerate such developments. Neural modification, with which he could optimize his current functional modalities (e.g. what we coarsely call “intelligence”) or increase the metrics underlying them, thus amplifying his existing skills and cognitive faculties (as in Intelligence Amplification or IA), as well as creating categorically new functional modalities, is much easier from within virtual embodiment than it would be in physicality. In virtual embodiment, all such modifications become a methodological, rather than technological, problem. To enact such changes in a physically-embodied nervous system would require designing a system to implement those changes, and actually implementing them according to plan. To enact such changes in a virtually-embodied nervous system requires only a re-organization or re-writing of information. Moreover, in virtual embodiment, any changes could be made, and reversed, whereas in physical embodiment reversing such changes would require, again, designing a method and system of implementing such “reversal-changes” in physicality (thereby necessitating a whole host of other technologies and methodologies) – and if those changes made further unexpected changes, and we can’t easily reverse them, then we may create an infinite regress of changes, wherein changes made to reverse a given modification in turn creates more changes, that in turn need to be reversed, ad infinitum.
Thus self-modification (and especially recursive self-modification), towards the purpose of intelligence amplification into Ultraintelligence [7], is easier (i.e. necessitating a smaller technological and methodological infrastructure – that is, a smaller required host of supporting methods and technologies – and thus less cost as well) in virtual embodiment than in physical embodiment.
These recursive modifications not only further maximize the upload's ability to think of ways to accelerate the coming of an intelligence explosion, but also maximize his ability to further self-modify towards that very objective (thus creating the positive feedback loop critical to I.J. Good's intelligence-explosion hypothesis) – or in other words, they maximize his ability to maximize his general ability at anything.
But to what extent is the ability to self-modify hampered by the critical feature of Blind Replication mentioned above – namely, the inability to modify and optimize various performance measures by virtue of the fact that we can’t predictively model the operational dynamics of the system-in-question? Well, an upload could copy himself, enact any modifications, and see the results – or indeed, make a copy to perform this change-and-check procedure. If the inability to predictively model a system made through the “Blind Replication” method does indeed problematize the upload’s ability to self-modify, it would still be much easier to work towards being able to predictively model it, via this iterative change-and-check method, due to both the subjective-perception-of-time speedup and the ability to make copies of himself.
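For concreteness, here is a minimal sketch of that iterative change-and-check procedure, assuming only that an emulation's parameters can be copied, perturbed, and scored. Every name in it (the parameter dictionary, propose_modification, evaluate) is a hypothetical placeholder rather than any existing system.

    # Minimal change-and-check loop: perturb a copy, score it, and keep the
    # modification only if the copy performs better. Purely illustrative.
    import copy
    import random

    def propose_modification(state: dict) -> dict:
        """Return a perturbed copy of the emulation's parameters (stand-in logic)."""
        candidate = copy.deepcopy(state)
        key = random.choice(list(candidate["params"]))
        candidate["params"][key] += random.gauss(0.0, 0.1)
        return candidate

    def evaluate(state: dict) -> float:
        """Score a candidate on some performance metric (stand-in: sum of parameters)."""
        return sum(state["params"].values())

    def change_and_check(state: dict, iterations: int = 100) -> dict:
        """Adopt a change only after testing it on a copy."""
        best_score = evaluate(state)
        for _ in range(iterations):
            candidate = propose_modification(state)   # run the change on a copy
            score = evaluate(candidate)               # check the result
            if score > best_score:                    # keep only improvements
                state, best_score = candidate, score
        return state

    upload = {"params": {"synaptic_density": 1.0, "neurotransmitter_range": 1.0}}
    print(change_and_check(upload))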
It is worth noting that it might be possible to predictively model (and thus make reliable or stable changes to) the operation of neurons, without being able to model how this scales up to the operational dynamics of the higher-level neural regions. Thus modifying, increasing or optimizing existing functional modalities (i.e. increasing synaptic density in neurons, or increasing the range of usable neurotransmitters — thus increasing the potential information density in a given signal or synaptic-transmission) may be significantly easier than creating categorically new functional modalities.
Increasing the Imminence of an Intelligence Explosion:
So what ways could the upload use his/her new advantages and abilities to actually accelerate the coming of an intelligence explosion? He could apply his abilities to self-modification, or to the creation of a Seed-AI (or more technically a recursively self-modifying AI).
He could also accelerate its imminence vicariously by working on accelerating the foundational technologies and methodologies (or in other words the technological and methodological infrastructure of an intelligence explosion) that largely determine its imminence. He could apply his new abilities and advantages to designing better computational paradigms, new methodologies within existing paradigms (e.g. non-Von-Neumann architectures still within the paradigm of electrical computation), or to differential technological development in “real-time” physicality towards such aims – e.g. finding an innovative means of allocating assets and resources (i.e. capital) to R&D for new computational paradigms, or optimizing current computational paradigms.
Thus there are numerous methods of indirectly increasing the imminence (or the likelihood of imminence within a certain time-range, which is a measure with less ambiguity) of a coming intelligence explosion – and many new ones no doubt that will be realized only once such an upload acquires such advantages and abilities.
Intimations of Implications:
So… Is this good news or bad news? Like much else in this increasingly future-dominated age, the consequences of this scenario remain morally ambiguous. It could be both bad and good news. But the answer to this question is independent of the premises – that is, two can agree on the viability of the premises and reasoning of the scenario, while drawing opposite conclusions in terms of whether it is good or bad news.
People who subscribe to the “Friendly AI” camp of AI-related existential risk will be at once hopeful and dismayed. While it might increase their ability to create their AGI (or more technically their Coherent-Extrapolated-Volition Engine [8]), thus decreasing the chances of an “unfriendly” AI being created in the interim, they will also be dismayed by the fact that it may include (but not necessitate) a recursively-modifying intelligence, in this case an upload, to be created prior to the creation of their own AGI – which is the very problem they are trying to mitigate in the first place.
Those who, like me, see a distributed intelligence explosion (in which all intelligences are allowed to recursively self-modify at the same rate – thus preserving "power" equality, or at least mitigating "power" disparity [where power is defined as the capacity to effect change in the world or society] – and in which any intelligence increasing its capability at a faster rate than all others is disallowed) as a better method of mitigating the existential risk entailed by an intelligence explosion will also be dismayed. This scenario would allow one single person to essentially have the power to determine the fate of humanity, due to his massively increased "capability" or "power", which is the very feature (capability disparity/inequality) that the "distributed intelligence explosion" camp of AI-related existential risk seeks to minimize.
On the other hand, those who see great potential in an intelligence explosion to help mitigate existing problems afflicting humanity – e.g. death, disease, societal instability, etc. – will be hopeful because the scenario could decrease the time it takes to implement an intelligence explosion.
I for one think it highly likely that the advantages proffered by accelerating the coming of an intelligence explosion fail to supersede the disadvantages incurred by the increased existential risk it would entail. That is, I think that the increase in existential risk brought about by putting so much "power" or "capability-to-effect-change" in the (hands?) of one intelligence outweighs the decrease in existential risk brought about by the accelerated creation of an Existential-Risk-Mitigating A(G)I.
Conclusion:
Thus, the scenario presented above yields some interesting and counter-intuitive conclusions:
How imminent an intelligence explosion is, or how likely it is to occur within a given time-frame, may be more determined by basic processing power than by computational price performance, which is a measure of basic processing power per unit of cost. This is because as soon as we have enough processing power to emulate a human nervous system, provided we have sufficient software to emulate the lower level neural components giving rise to the higher-level human mind, then the increase in the rate of thought and subjective perception of time made available to that emulation could very well allow it to design and implement an AGI before computational price performance increases by a large enough factor to make the processing power necessary for that AGI’s implementation available for a widely-affordable cost. This conclusion is independent of any specific estimates of how long the successful computational emulation of a human nervous system will take to achieve. It relies solely on the premise that the successful computational emulation of the human mind can be achieved faster than the successful implementation of an AGI whose design is not based upon the cognitive architecture of the human nervous system. I have outlined various reasons why we might expect this to be the case. This would be true even if uploading could only be achieved faster than AGI (given an equal amount of funding or “effort”) by a seemingly-negligible amount of time, like one week, due to the massive increase in speed of thought and the rate of subjective perception of time that would then be available to such an upload.
The creation of an upload may be relatively independent of software performance/capability (which is not to say that we don’t need any software, because we do, but rather that we don’t need significant increases in software performance or improvements in methodological implementation – i.e. how we actually design a mind, rather than the substrate it is instantiated by – which we do need in order to implement an AGI and which we would need for WBE, were the system we seek to emulate not already in existence) and may in fact be largely determined by processing power or computational performance/capability alone, whereas AGI is dependent on increases in both computational performance and software performance or fundamental progress in methodological implementation.
If this second conclusion is true, it means that an upload may be possible quite soon, considering that we have already passed the basic estimates for processing requirements given by Kurzweil, Moravec and Storrs-Hall, provided we can emulate the low-level neural components of the brain with high predictive accuracy (and provided the claim that instantiating such low-level components will vicariously instantiate the emergent human mind, without needing to really understand how such components functionally converge to do so, proves true), whereas AGI may still have to wait for fundamental improvements to methodological implementation or "software performance".
Thus it may be easier to create an AGI by first creating an upload to accelerate the development of that AGI’s creation, than it would be to work on the development of an AGI directly. Upload+AGI may actually be easier to implement than AGI alone is!
References:
[1] Kurzweil, R, 2005. The Singularity is Near. Penguin Books.
[2] Moravec, H, 1997. When will computer hardware match the human brain?. Journal of Evolution and Technology, [Online]. 1(1). Available at: http://www.jetpress.org/volume1/moravec.htm [Accessed 01 March 2013].
[4] Adam Ford (2011). Yudkowsky vs Hanson on the Intelligence Explosion — Jane Street Debate 2011. [Online Video]. August 10, 2011. Available at: http://www.youtube.com/watch?v=m_R5Z4_khNw [Accessed: 01 March 2013].
[5] Drexler, K.E. (1989). Molecular Manipulation and Molecular Computation. In NanoCon Northwest Regional Nanotechnology Conference, Seattle, Washington, February 14–17. NANOCON. 2. http://www.halcyon.com/nanojbl/NanoConProc/nanocon2.html [Accessed 01 March 2013]
If the picture header above influenced you to click through and read more of this article, then it establishes at least part of my hypothesis: visual stimuli that trigger our primal urges supersede all our other senses, even over-riding intellect. By that I mean that, irrespective of IQ level, the visual alone, and not the title of the essay, will have prompted a click-through. A classic advertising tactic: sex sells.
Yet could there be a clue in this behavior worth studying further in our quest for longevity? Before Transhumanist life-extension technologies such as nano-tech and bio-tech go mainstream, we need to keep our un-amped bodies in a state of constant excitement, using visual triggers that generate positive emotions, thereby, hopefully, keeping us around long enough to take advantage of these bio-hacks when they become available.
Emotions on Demand — The “TiVo-ing” of feelings:
From the graphic above, it is easy to extrapolate that ‘positive’ emotions can contribute significantly to Longevity. When we go on a vacation, we’re experiencing the world in a relaxed frame of mind and encoding these experiences, even if sub-consciously, in our brains (minds?). Days, or even years later we can call on these experiences, on-demand, to bring us comfort.
Granted, much like analog recordings, these stored copies of positive emotions deteriorate over time. Just as we can today digitize images and sounds, making pristine, everlasting copies, can we digitize emotions for recall, so as to experience them on demand?
How would we go about doing it and what purpose does it serve?
Digitizing Touch: Your Dirrogate’s unique Emotional Signature:
Can we digitize touch, a crucial building block that contributes to the creation of emotions? For an answer, we need to look to the (to some, questionable) technology behind teledildonics.
While the tech to experience haptic feedback has been around for a while, it has mostly been confined to Virtual Reality simulations and training purposes. Crude haptic force-feedback gaming controllers are available on the market, but advances in actuators and nano-scale miniaturization are soon to change that, even going as far as to give us tactile imaging capability: "smart skin".
Recently, Durex announced "Fundawear". Its purpose? To experience the "touch" of your partner in a fun, light-hearted way. Yet what if a Fundawear session could be recorded and played back later? The unique way your partner touches, forever digitized for playback when desired, allowing you to experience the emotion of joy and happiness at will?
Fundawear can be thought of as a beta v1.0 of something akin to smart-skin in reverse, which could eventually allow a complete “feel-stream” to be digitized and played back on-demand.
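As a rough sketch of what a recorded feel-stream might look like in software, the snippet below samples a touch sensor into a timestamped list and later replays it through an actuator. The device interfaces (read_touch_sensor, drive_actuator) are hypothetical stand-ins, not any real product's API.

    # Record a "feel-stream" as timestamped intensities, then replay it later.
    import json
    import time

    def record_feel_stream(read_touch_sensor, duration_s: float, rate_hz: float = 50.0) -> list:
        """Sample touch-sensor intensities at a fixed rate into a timestamped list."""
        stream, start = [], time.time()
        while time.time() - start < duration_s:
            stream.append({"t": time.time() - start, "intensity": read_touch_sensor()})
            time.sleep(1.0 / rate_hz)
        return stream

    def play_feel_stream(stream: list, drive_actuator) -> None:
        """Replay a recorded stream by driving an actuator at the recorded times."""
        start = time.time()
        for sample in stream:
            while time.time() - start < sample["t"]:
                time.sleep(0.001)
            drive_actuator(sample["intensity"])

    # Example: record two seconds from a dummy sensor, save to disk, and replay.
    stream = record_feel_stream(lambda: 0.5, duration_s=2.0)
    with open("feel_stream.json", "w") as f:
        json.dump(stream, f)
    play_feel_stream(stream, drive_actuator=lambda intensity: None)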
Currently we are already able to digitize some faculties that stimulate two of our primary senses:
Sight — via a video camera.
Sound — via microphones.
So how do we go about digitizing and re-creating the sense of Touch?
Solutions such as the one from NuiCapture shown in the video above, in combination with off-the-shelf game hardware such as the Kinect, can digitize a whole-body "performance", also known as performance capture.
Dirrogates and 3D Printing a Person:
In the near future, if we get blueprints to 3D print a person, ready for re-animation and complete with "smart skin", such a 3D-printed surrogate could reciprocate our touch.
It would be an exercise in imagination to envision 3D printing your partner if they couldn't be with you when you wanted them; it could also raise moral and ethical issues such as "adultery", if an unauthorized 3D-printed copy of a person were produced and their "signature" performance files were pirated.
But with every evil there is also good. 3D printers can print guns, or, as seen in the video above, a prosthetic hand, allowing a child to experience life the way other children do. That is the ethos of Transhumanism.
Loneliness can kill you:
Well, maybe not exactly kill you, but it can negatively impact your health, says The World of Psychology. That would be counterproductive in our quest for longevity.
A few years ago, companies such as Accenture introduced family collaboration projects. I recommend clicking on the link to read the article, as copyright restrictions prevent including it in this essay. In essence, it allows older relatives to derive emotional comfort from seeing and interacting with their families living miles away.
At a very basic level, we are already Transhuman. No stigma involved… no religious boundaries crossed. This ethical use of technology can bring comfort to an aging section of society, bettering their condition.
In a relationship, the loss of a loved one can be devastating to the surviving partner, even more so, if the couple had grown old together and shared their good and bad times. Experiencing and re-living memories that transcend photographs and videos, could contribute towards generating positive emotions and thus longevity in the person coping with his/her loss.
While 3D printing and re-animating a person is still a few years away, there is another stop-gap technology: Augmented Reality. With AR visors, we can see and interact with a “Dirrogate” (Digital Surrogate) of another person as though they were in the same room with us. The person’s Dirrogate can be operated in real-time by another person living thousands of miles away… or a digitized touch stream can be called on… long after the human operator is no more.
In the story "Memories with Maya", this context and its repercussions on our evolution into a Transhuman species are explored in more detail.
The purpose of this essay is to seed ideas only, and is not to be taken as expert advice.